Jan 05 20:05:12 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 05 20:05:12 crc restorecon[4691]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 05 20:05:12 crc restorecon[4691]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 05 20:05:12 crc restorecon[4691]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc 
restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 05 20:05:12 crc restorecon[4691]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 05 20:05:12 crc restorecon[4691]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 05 20:05:12 crc restorecon[4691]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 05 20:05:12 crc 
restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 05 
20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 05 20:05:12 crc restorecon[4691]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 05 20:05:12 crc 
restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 05 20:05:12 crc restorecon[4691]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 05 20:05:12 crc restorecon[4691]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 05 20:05:12 crc restorecon[4691]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 05 20:05:12 crc 
restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 05 20:05:12 crc restorecon[4691]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12
crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 05 20:05:12 crc restorecon[4691]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 05 20:05:12 crc restorecon[4691]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 05 20:05:12 crc restorecon[4691]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 05 20:05:12 crc restorecon[4691]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 
20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 20:05:12 crc restorecon[4691]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 20:05:12 crc restorecon[4691]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 05 20:05:12 crc 
restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc 
restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc 
restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: 
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 20:05:12 crc restorecon[4691]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 05 20:05:12 crc restorecon[4691]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 05 20:05:12 crc restorecon[4691]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 05 20:05:12 crc restorecon[4691]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 05 20:05:12 crc restorecon[4691]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 05 20:05:12 crc restorecon[4691]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 05 20:05:12 crc restorecon[4691]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:12 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 20:05:13 crc restorecon[4691]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 
crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc 
restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc 
restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc 
restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc 
restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 20:05:13 crc restorecon[4691]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 20:05:13 crc 
restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 20:05:13 crc restorecon[4691]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 05 20:05:13 crc restorecon[4691]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 05 20:05:13 crc restorecon[4691]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 05 20:05:13 crc kubenswrapper[4754]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 05 20:05:13 crc kubenswrapper[4754]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 05 20:05:13 crc kubenswrapper[4754]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 05 20:05:13 crc kubenswrapper[4754]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 05 20:05:13 crc kubenswrapper[4754]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 05 20:05:13 crc kubenswrapper[4754]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.412345 4754 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415245 4754 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415260 4754 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415265 4754 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415269 4754 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415273 4754 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415277 4754 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415281 4754 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415300 4754 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415305 4754 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415309 4754 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415314 4754 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415317 4754 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415322 4754 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415328 4754 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415332 4754 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415338 4754 feature_gate.go:330] unrecognized feature gate: Example Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415342 4754 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415347 4754 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415352 4754 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415357 4754 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415361 4754 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415366 4754 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415370 4754 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415374 4754 feature_gate.go:330] unrecognized 
feature gate: AutomatedEtcdBackup Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415377 4754 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415381 4754 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415384 4754 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415388 4754 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415391 4754 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415395 4754 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415399 4754 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415404 4754 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415407 4754 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415411 4754 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415415 4754 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415419 4754 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415422 4754 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415425 4754 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 05 20:05:13 crc 
kubenswrapper[4754]: W0105 20:05:13.415429 4754 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415432 4754 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415436 4754 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415439 4754 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415443 4754 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415446 4754 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415449 4754 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415454 4754 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415458 4754 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415461 4754 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415466 4754 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415470 4754 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415474 4754 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415478 4754 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415482 4754 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415487 4754 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415492 4754 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415496 4754 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415500 4754 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415504 4754 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415508 4754 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415511 4754 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415515 4754 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415518 4754 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415522 4754 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 05 20:05:13 crc 
kubenswrapper[4754]: W0105 20:05:13.415525 4754 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415529 4754 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415546 4754 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415551 4754 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415556 4754 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415560 4754 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415564 4754 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.415568 4754 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415638 4754 flags.go:64] FLAG: --address="0.0.0.0" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415647 4754 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415653 4754 flags.go:64] FLAG: --anonymous-auth="true" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415658 4754 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415664 4754 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415668 4754 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415674 4754 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415679 
4754 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415683 4754 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415687 4754 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415692 4754 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415696 4754 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415700 4754 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415704 4754 flags.go:64] FLAG: --cgroup-root="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415709 4754 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415713 4754 flags.go:64] FLAG: --client-ca-file="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415717 4754 flags.go:64] FLAG: --cloud-config="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415721 4754 flags.go:64] FLAG: --cloud-provider="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415725 4754 flags.go:64] FLAG: --cluster-dns="[]" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415731 4754 flags.go:64] FLAG: --cluster-domain="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415742 4754 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415746 4754 flags.go:64] FLAG: --config-dir="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415750 4754 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415755 4754 flags.go:64] FLAG: --container-log-max-files="5" Jan 05 20:05:13 crc 
kubenswrapper[4754]: I0105 20:05:13.415783 4754 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415788 4754 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415792 4754 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415797 4754 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415801 4754 flags.go:64] FLAG: --contention-profiling="false" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415806 4754 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415811 4754 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415815 4754 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415819 4754 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415824 4754 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415828 4754 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415832 4754 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415837 4754 flags.go:64] FLAG: --enable-load-reader="false" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415841 4754 flags.go:64] FLAG: --enable-server="true" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415844 4754 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415849 4754 flags.go:64] FLAG: --event-burst="100" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415853 4754 flags.go:64] FLAG: 
--event-qps="50" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415857 4754 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415861 4754 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415865 4754 flags.go:64] FLAG: --eviction-hard="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415870 4754 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415874 4754 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415878 4754 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415883 4754 flags.go:64] FLAG: --eviction-soft="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415887 4754 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415891 4754 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415895 4754 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415899 4754 flags.go:64] FLAG: --experimental-mounter-path="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415903 4754 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415907 4754 flags.go:64] FLAG: --fail-swap-on="true" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415911 4754 flags.go:64] FLAG: --feature-gates="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415916 4754 flags.go:64] FLAG: --file-check-frequency="20s" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415920 4754 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415924 4754 flags.go:64] 
FLAG: --hairpin-mode="promiscuous-bridge" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415929 4754 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415933 4754 flags.go:64] FLAG: --healthz-port="10248" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415937 4754 flags.go:64] FLAG: --help="false" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415941 4754 flags.go:64] FLAG: --hostname-override="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415945 4754 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415949 4754 flags.go:64] FLAG: --http-check-frequency="20s" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415953 4754 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415958 4754 flags.go:64] FLAG: --image-credential-provider-config="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415962 4754 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415966 4754 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415971 4754 flags.go:64] FLAG: --image-service-endpoint="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415974 4754 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415978 4754 flags.go:64] FLAG: --kube-api-burst="100" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415982 4754 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415986 4754 flags.go:64] FLAG: --kube-api-qps="50" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415990 4754 flags.go:64] FLAG: --kube-reserved="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415995 4754 flags.go:64] 
FLAG: --kube-reserved-cgroup="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.415999 4754 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416003 4754 flags.go:64] FLAG: --kubelet-cgroups="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416006 4754 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416011 4754 flags.go:64] FLAG: --lock-file="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416015 4754 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416019 4754 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416023 4754 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416029 4754 flags.go:64] FLAG: --log-json-split-stream="false" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416033 4754 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416037 4754 flags.go:64] FLAG: --log-text-split-stream="false" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416041 4754 flags.go:64] FLAG: --logging-format="text" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416045 4754 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416050 4754 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416054 4754 flags.go:64] FLAG: --manifest-url="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416058 4754 flags.go:64] FLAG: --manifest-url-header="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416063 4754 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416068 4754 
flags.go:64] FLAG: --max-open-files="1000000" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416072 4754 flags.go:64] FLAG: --max-pods="110" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416076 4754 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416081 4754 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416084 4754 flags.go:64] FLAG: --memory-manager-policy="None" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416088 4754 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416092 4754 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416096 4754 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416101 4754 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416109 4754 flags.go:64] FLAG: --node-status-max-images="50" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416113 4754 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416117 4754 flags.go:64] FLAG: --oom-score-adj="-999" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416121 4754 flags.go:64] FLAG: --pod-cidr="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416125 4754 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416132 4754 flags.go:64] FLAG: --pod-manifest-path="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416136 4754 flags.go:64] FLAG: --pod-max-pids="-1" Jan 05 
20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416140 4754 flags.go:64] FLAG: --pods-per-core="0" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416144 4754 flags.go:64] FLAG: --port="10250" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416148 4754 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416152 4754 flags.go:64] FLAG: --provider-id="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416156 4754 flags.go:64] FLAG: --qos-reserved="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416160 4754 flags.go:64] FLAG: --read-only-port="10255" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416164 4754 flags.go:64] FLAG: --register-node="true" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416168 4754 flags.go:64] FLAG: --register-schedulable="true" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416172 4754 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416180 4754 flags.go:64] FLAG: --registry-burst="10" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416184 4754 flags.go:64] FLAG: --registry-qps="5" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416188 4754 flags.go:64] FLAG: --reserved-cpus="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416192 4754 flags.go:64] FLAG: --reserved-memory="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416197 4754 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416201 4754 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416205 4754 flags.go:64] FLAG: --rotate-certificates="false" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416209 4754 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416212 
4754 flags.go:64] FLAG: --runonce="false" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416216 4754 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416220 4754 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416224 4754 flags.go:64] FLAG: --seccomp-default="false" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416228 4754 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416232 4754 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416236 4754 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416240 4754 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416244 4754 flags.go:64] FLAG: --storage-driver-password="root" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416248 4754 flags.go:64] FLAG: --storage-driver-secure="false" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416252 4754 flags.go:64] FLAG: --storage-driver-table="stats" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416256 4754 flags.go:64] FLAG: --storage-driver-user="root" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416260 4754 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416265 4754 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416269 4754 flags.go:64] FLAG: --system-cgroups="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416273 4754 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416279 4754 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 05 
20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416283 4754 flags.go:64] FLAG: --tls-cert-file="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416301 4754 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416306 4754 flags.go:64] FLAG: --tls-min-version="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416310 4754 flags.go:64] FLAG: --tls-private-key-file="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416314 4754 flags.go:64] FLAG: --topology-manager-policy="none" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416318 4754 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416322 4754 flags.go:64] FLAG: --topology-manager-scope="container" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416327 4754 flags.go:64] FLAG: --v="2" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416333 4754 flags.go:64] FLAG: --version="false" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416339 4754 flags.go:64] FLAG: --vmodule="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416343 4754 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416348 4754 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416440 4754 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416445 4754 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416448 4754 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416452 4754 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416456 4754 
feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416459 4754 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416463 4754 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416466 4754 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416470 4754 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416473 4754 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416477 4754 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416482 4754 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416487 4754 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416491 4754 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416495 4754 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416499 4754 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416503 4754 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416507 4754 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416510 4754 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416514 4754 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416518 4754 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416522 4754 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416532 4754 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416535 4754 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416539 4754 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416542 4754 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416546 4754 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416549 4754 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416553 4754 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416556 4754 feature_gate.go:330] 
unrecognized feature gate: MachineAPIMigration Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416560 4754 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416563 4754 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416566 4754 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416570 4754 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416573 4754 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416576 4754 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416580 4754 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416583 4754 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416587 4754 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416590 4754 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416594 4754 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416597 4754 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416601 4754 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416606 4754 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 
20:05:13.416609 4754 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416613 4754 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416616 4754 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416620 4754 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416623 4754 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416626 4754 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416630 4754 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416633 4754 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416637 4754 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416640 4754 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416646 4754 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416650 4754 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416654 4754 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416658 4754 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416662 4754 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416665 4754 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416670 4754 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416674 4754 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416678 4754 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416682 4754 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416686 4754 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416690 4754 feature_gate.go:330] unrecognized feature gate: Example Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416693 4754 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416697 4754 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416700 4754 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 05 20:05:13 crc 
kubenswrapper[4754]: W0105 20:05:13.416704 4754 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.416707 4754 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.416719 4754 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.430346 4754 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.430429 4754 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.430591 4754 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.430612 4754 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.430624 4754 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.430635 4754 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.430646 4754 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.430656 4754 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.430669 4754 
feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.430680 4754 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.430690 4754 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.430700 4754 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.430710 4754 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.430722 4754 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.430733 4754 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.430745 4754 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.430754 4754 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.430764 4754 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.430774 4754 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.430785 4754 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.430795 4754 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.430806 4754 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.430815 4754 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 05 20:05:13 crc kubenswrapper[4754]: 
W0105 20:05:13.430825 4754 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.430837 4754 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.430848 4754 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.430858 4754 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.430868 4754 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.430877 4754 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.430893 4754 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.430909 4754 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.430923 4754 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.430934 4754 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.430945 4754 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.430957 4754 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.430968 4754 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.430979 4754 feature_gate.go:330] unrecognized feature gate: Example Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 
20:05:13.430991 4754 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.431003 4754 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.431013 4754 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.431023 4754 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.431034 4754 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.431045 4754 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.431055 4754 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.431065 4754 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.431076 4754 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.431087 4754 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.431097 4754 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.431107 4754 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.431120 4754 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.431131 4754 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.431144 4754 feature_gate.go:351] 
Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.431157 4754 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.431169 4754 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.431181 4754 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.431191 4754 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.431202 4754 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.431212 4754 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.431223 4754 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.431234 4754 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.431248 4754 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.431264 4754 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.431278 4754 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.431288 4754 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.431334 4754 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.431346 4754 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.431357 4754 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.431367 4754 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.431378 4754 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.431389 4754 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.431398 4754 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.431410 4754 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.431421 4754 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.431438 4754 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false 
ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.431909 4754 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.431928 4754 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.431941 4754 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.431951 4754 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.431962 4754 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.431974 4754 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.431985 4754 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.431995 4754 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432005 4754 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432019 4754 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432031 4754 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432046 4754 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432059 4754 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432070 4754 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432081 4754 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432091 4754 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432102 4754 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432113 4754 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432124 4754 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432133 4754 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432144 4754 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432154 4754 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432164 4754 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432174 4754 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432184 4754 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432193 4754 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432203 4754 
feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432213 4754 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432223 4754 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432233 4754 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432242 4754 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432256 4754 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432269 4754 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432282 4754 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432327 4754 feature_gate.go:330] unrecognized feature gate: Example Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432338 4754 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432348 4754 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432357 4754 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432369 4754 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432379 4754 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432388 4754 feature_gate.go:330] unrecognized 
feature gate: IngressControllerLBSubnetsAWS Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432398 4754 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432409 4754 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432418 4754 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432428 4754 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432438 4754 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432448 4754 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432460 4754 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432471 4754 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432481 4754 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432490 4754 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432500 4754 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432511 4754 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432521 4754 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432531 4754 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432541 
4754 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432550 4754 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432562 4754 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432572 4754 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432582 4754 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432592 4754 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432602 4754 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432611 4754 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432624 4754 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432635 4754 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432646 4754 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432655 4754 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432666 4754 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432676 4754 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432686 4754 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.432696 4754 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.432712 4754 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.433400 4754 server.go:940] "Client rotation is on, will bootstrap in background" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.440038 4754 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.440209 4754 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.441432 4754 server.go:997] "Starting client certificate rotation" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.441474 4754 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.442231 4754 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-12 00:43:12.787865812 +0000 UTC Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.442556 4754 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.448774 4754 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 05 20:05:13 crc kubenswrapper[4754]: E0105 20:05:13.450773 4754 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.452737 4754 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.464327 4754 log.go:25] "Validated CRI v1 runtime API" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.481166 4754 log.go:25] "Validated CRI v1 image API" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.483360 4754 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.485753 4754 
fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-05-20-00-48-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.485791 4754 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}] Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.505326 4754 manager.go:217] Machine: {Timestamp:2026-01-05 20:05:13.503689454 +0000 UTC m=+0.212873368 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654132736 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:d80f8dc7-2091-41c3-9fa5-371478c52560 BootID:4483e6cc-d144-4eb0-8ee8-46170e462003 Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730829824 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827068416 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 
Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:c8:e1:1c Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:c8:e1:1c Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:bf:d6:12 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:dd:fb:49 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:62:48:64 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:9f:85:fc Speed:-1 Mtu:1496} {Name:eth10 MacAddress:a6:ab:f2:2f:15:4a Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:4e:14:43:aa:3f:88 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654132736 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 
Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.505713 4754 manager_no_libpfm.go:29] cAdvisor is build without cgo 
and/or libpfm support. Perf event counters are not available. Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.506039 4754 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.507563 4754 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.508004 4754 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.508094 4754 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.508590 4754 topology_manager.go:138] "Creating topology manager with none policy" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.508618 4754 container_manager_linux.go:303] "Creating device plugin manager" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.508967 4754 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.509048 4754 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.509496 4754 state_mem.go:36] "Initialized new in-memory state store" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.509682 4754 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.510762 4754 kubelet.go:418] "Attempting to sync node with API server" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.510811 4754 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.510867 4754 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.510900 4754 kubelet.go:324] "Adding apiserver pod source" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.510925 4754 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.513532 4754 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.513609 4754 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused Jan 05 20:05:13 crc kubenswrapper[4754]: E0105 20:05:13.513758 4754 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.513912 4754 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.514037 4754 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused Jan 05 20:05:13 crc kubenswrapper[4754]: E0105 20:05:13.514146 4754 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.514571 4754 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.515109 4754 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.515131 4754 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.515138 4754 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.515144 4754 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.515156 4754 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.515164 4754 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.515170 4754 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.515181 4754 
plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.515190 4754 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.515198 4754 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.515208 4754 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.515216 4754 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.515410 4754 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.515948 4754 server.go:1280] "Started kubelet" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.516575 4754 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.516558 4754 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.516560 4754 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.517596 4754 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 05 20:05:13 crc systemd[1]: Started Kubernetes Kubelet. 
Jan 05 20:05:13 crc kubenswrapper[4754]: E0105 20:05:13.518281 4754 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.201:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1887ee6900a2f329 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-05 20:05:13.515864873 +0000 UTC m=+0.225048747,LastTimestamp:2026-01-05 20:05:13.515864873 +0000 UTC m=+0.225048747,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.520731 4754 server.go:460] "Adding debug handlers to kubelet server" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.521053 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.521099 4754 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.521159 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 06:54:41.229070663 +0000 UTC Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.522074 4754 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.522096 4754 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.522253 4754 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.523467 4754 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused Jan 05 20:05:13 crc kubenswrapper[4754]: E0105 20:05:13.524185 4754 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError" Jan 05 20:05:13 crc kubenswrapper[4754]: E0105 20:05:13.523427 4754 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="200ms" Jan 05 20:05:13 crc kubenswrapper[4754]: E0105 20:05:13.523778 4754 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.523925 4754 factory.go:55] Registering systemd factory Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.525977 4754 factory.go:221] Registration of the systemd container factory successfully Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.527715 4754 factory.go:153] Registering CRI-O factory Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.527858 4754 factory.go:221] Registration of the crio container factory successfully Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.528008 4754 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 05 20:05:13 crc 
kubenswrapper[4754]: I0105 20:05:13.528063 4754 factory.go:103] Registering Raw factory Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.528115 4754 manager.go:1196] Started watching for new ooms in manager Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.529576 4754 manager.go:319] Starting recovery of all containers Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.541822 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.542038 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.542074 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.542114 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.542141 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 
20:05:13.542160 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.542181 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.542209 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.542243 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.542268 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.542326 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.542381 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.542403 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.542429 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.542449 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.542469 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.542492 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.542521 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.542541 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.542561 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.542582 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.542601 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.542622 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.542687 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.542707 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.542729 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.542752 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.542774 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.542800 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.542824 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.543096 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.543120 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.543142 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.543161 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.543180 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.543200 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" 
volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.543218 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.543237 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.543258 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.543279 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.543341 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.543371 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" 
seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.543394 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.543412 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.543431 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.543449 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.543468 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.543487 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 
20:05:13.543508 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.543529 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.543550 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.543569 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.543593 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.543613 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.543635 4754 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.543653 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.543674 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.543693 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.543711 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.543731 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.543750 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.543768 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.543787 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.543806 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.543825 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.543843 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.543861 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" 
volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.543879 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.543897 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.543916 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.543935 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.544141 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.544161 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.544227 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.544249 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.544267 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.544317 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.544338 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.544355 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.544378 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.544399 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.544419 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.544438 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.544456 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.544476 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" 
seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.544496 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.544516 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.544536 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.544557 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.544580 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.544599 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.544620 4754 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.544648 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.544674 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.544700 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.544729 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.544756 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.544776 4754 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.544800 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.544818 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.544838 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.544857 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.544876 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.544928 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.544956 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.544978 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.544998 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.545021 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.545053 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.545081 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.545108 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.545137 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.545171 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.545197 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.545224 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.545249 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" 
seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.545273 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.545336 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.545364 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.545393 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.545421 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.545447 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.545471 4754 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.545495 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.545519 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.545545 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.545574 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.545597 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.545624 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.545649 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.545674 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.545697 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.545717 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.545785 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.545810 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" 
volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.545833 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.545858 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.545881 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.545911 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.545939 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.545969 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.545994 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.546018 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.546042 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.546070 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.546093 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.546120 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.546146 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.546171 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.546197 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.546220 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.546248 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.546272 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.546332 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.546382 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.546409 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.546433 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.546462 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.546493 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" 
volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.546515 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.546534 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.546554 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.546571 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.546588 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.546608 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.546628 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.546647 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.546664 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.546688 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.546708 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.546727 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.546747 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.546766 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.546783 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.546800 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.546818 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.546839 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.546856 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.546876 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.546896 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.546914 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.546932 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.546950 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.546973 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.546994 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.547905 4754 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.547944 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.547966 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.547986 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.548004 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.548023 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.548044 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.548074 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.548092 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.548109 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.548128 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.548146 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.548165 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.548185 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.548204 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.548229 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.548264 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.548322 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.548347 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.548372 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.548392 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.548410 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.548429 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.548453 4754 reconstruct.go:97] "Volume reconstruction finished"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.548466 4754 reconciler.go:26] "Reconciler: start to sync state"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.561894 4754 manager.go:324] Recovery completed
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.573592 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.576192 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.576226 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.576236 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.577034 4754 cpu_manager.go:225] "Starting CPU manager" policy="none"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.577050 4754 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.577077 4754 state_mem.go:36] "Initialized new in-memory state store"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.583738 4754 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.587064 4754 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.587141 4754 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.587176 4754 kubelet.go:2335] "Starting kubelet main sync loop"
Jan 05 20:05:13 crc kubenswrapper[4754]: E0105 20:05:13.587250 4754 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 05 20:05:13 crc kubenswrapper[4754]: W0105 20:05:13.588689 4754 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused
Jan 05 20:05:13 crc kubenswrapper[4754]: E0105 20:05:13.588777 4754 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.616592 4754 policy_none.go:49] "None policy: Start"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.619132 4754 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.619182 4754 state_mem.go:35] "Initializing new in-memory state store"
Jan 05 20:05:13 crc kubenswrapper[4754]: E0105 20:05:13.625734 4754 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.674235 4754 manager.go:334] "Starting Device Plugin manager"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.674345 4754 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.674360 4754 server.go:79] "Starting device plugin registration server"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.674791 4754 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.674810 4754 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.675250 4754 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.675539 4754 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.675642 4754 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.687927 4754 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"]
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.688107 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.689380 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.689417 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.689427 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.689558 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.689838 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 05 20:05:13 crc kubenswrapper[4754]: E0105 20:05:13.689888 4754 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.689896 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.690265 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.690334 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.690349 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.690424 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.690678 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.690757 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.691151 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.691176 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.691186 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.691357 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.691823 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.691846 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.691843 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.691878 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.691918 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.692759 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.692781 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.692791 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.692963 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.692975 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.692985 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.692987 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.693001 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.693011 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.693187 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.693513 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.693629 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.693811 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.693834 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.693844 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.693960 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.693980 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.694901 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.694981 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.694947 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.695060 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.695071 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.695035 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 20:05:13 crc kubenswrapper[4754]: E0105 20:05:13.725210 4754 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="400ms"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.749977 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.750092 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.750206 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.750339 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.750399 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.750436 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.750468 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.750528 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.750590 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.750614 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.750636 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.750661 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.750683 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.750704 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.750722 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.775095 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.776482 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.776525 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.776537 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.776568 4754 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 05 20:05:13 crc kubenswrapper[4754]: E0105 20:05:13.777085 4754 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.201:6443: connect: connection refused" node="crc"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.851654 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.851895 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.852095 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.852210 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.852239 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.852262 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.852278 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.852314 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.852329 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.852332 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.852363 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.852386 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.852401 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.852410 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.852427 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 05 
20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.852432 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.852447 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.852454 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.852474 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.852474 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.852485 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.852516 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.852503 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.852534 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.852497 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.852562 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.852435 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.852580 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.852603 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.852756 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.978195 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.979679 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.979714 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:13 crc 
kubenswrapper[4754]: I0105 20:05:13.979726 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:13 crc kubenswrapper[4754]: I0105 20:05:13.979750 4754 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 05 20:05:13 crc kubenswrapper[4754]: E0105 20:05:13.980196 4754 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.201:6443: connect: connection refused" node="crc" Jan 05 20:05:14 crc kubenswrapper[4754]: I0105 20:05:14.039390 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 05 20:05:14 crc kubenswrapper[4754]: I0105 20:05:14.049101 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 05 20:05:14 crc kubenswrapper[4754]: I0105 20:05:14.070668 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 05 20:05:14 crc kubenswrapper[4754]: W0105 20:05:14.078066 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-8ebfe45ac5180daba536aaa0675e8bf112f8deecc89e728e3bf452201c2ef5d7 WatchSource:0}: Error finding container 8ebfe45ac5180daba536aaa0675e8bf112f8deecc89e728e3bf452201c2ef5d7: Status 404 returned error can't find the container with id 8ebfe45ac5180daba536aaa0675e8bf112f8deecc89e728e3bf452201c2ef5d7 Jan 05 20:05:14 crc kubenswrapper[4754]: I0105 20:05:14.079414 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 20:05:14 crc kubenswrapper[4754]: W0105 20:05:14.081815 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-e83941b0fafe124d040f3d7fdeafb2b428fc0ea46a2d8dabd53e4635a3a3fcba WatchSource:0}: Error finding container e83941b0fafe124d040f3d7fdeafb2b428fc0ea46a2d8dabd53e4635a3a3fcba: Status 404 returned error can't find the container with id e83941b0fafe124d040f3d7fdeafb2b428fc0ea46a2d8dabd53e4635a3a3fcba Jan 05 20:05:14 crc kubenswrapper[4754]: I0105 20:05:14.084466 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 20:05:14 crc kubenswrapper[4754]: W0105 20:05:14.094731 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-1c1f9cfd58b9c34b3833427b6f5eace5214798d0b5190cee693d1c05c3803a94 WatchSource:0}: Error finding container 1c1f9cfd58b9c34b3833427b6f5eace5214798d0b5190cee693d1c05c3803a94: Status 404 returned error can't find the container with id 1c1f9cfd58b9c34b3833427b6f5eace5214798d0b5190cee693d1c05c3803a94 Jan 05 20:05:14 crc kubenswrapper[4754]: W0105 20:05:14.109377 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-16ecb6b1ab54d8eaaf9fd35a3be80b047bcd002c20039a062f7aed9bf45bf7e6 WatchSource:0}: Error finding container 16ecb6b1ab54d8eaaf9fd35a3be80b047bcd002c20039a062f7aed9bf45bf7e6: Status 404 returned error can't find the container with id 16ecb6b1ab54d8eaaf9fd35a3be80b047bcd002c20039a062f7aed9bf45bf7e6 Jan 05 20:05:14 crc kubenswrapper[4754]: E0105 20:05:14.126681 4754 controller.go:145] "Failed to ensure lease 
exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="800ms" Jan 05 20:05:14 crc kubenswrapper[4754]: W0105 20:05:14.274579 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-6b70bfa620cb306ab30e1e1cc1ad963f8f087f01397a69ab5bda326359873559 WatchSource:0}: Error finding container 6b70bfa620cb306ab30e1e1cc1ad963f8f087f01397a69ab5bda326359873559: Status 404 returned error can't find the container with id 6b70bfa620cb306ab30e1e1cc1ad963f8f087f01397a69ab5bda326359873559 Jan 05 20:05:14 crc kubenswrapper[4754]: W0105 20:05:14.348564 4754 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused Jan 05 20:05:14 crc kubenswrapper[4754]: E0105 20:05:14.348662 4754 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError" Jan 05 20:05:14 crc kubenswrapper[4754]: I0105 20:05:14.380771 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 20:05:14 crc kubenswrapper[4754]: I0105 20:05:14.382657 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:14 crc kubenswrapper[4754]: I0105 20:05:14.382693 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:14 
crc kubenswrapper[4754]: I0105 20:05:14.382705 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:14 crc kubenswrapper[4754]: I0105 20:05:14.382734 4754 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 05 20:05:14 crc kubenswrapper[4754]: E0105 20:05:14.383123 4754 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.201:6443: connect: connection refused" node="crc" Jan 05 20:05:14 crc kubenswrapper[4754]: I0105 20:05:14.517503 4754 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused Jan 05 20:05:14 crc kubenswrapper[4754]: I0105 20:05:14.521592 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 22:22:38.196653985 +0000 UTC Jan 05 20:05:14 crc kubenswrapper[4754]: I0105 20:05:14.593166 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6b70bfa620cb306ab30e1e1cc1ad963f8f087f01397a69ab5bda326359873559"} Jan 05 20:05:14 crc kubenswrapper[4754]: I0105 20:05:14.595172 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"16ecb6b1ab54d8eaaf9fd35a3be80b047bcd002c20039a062f7aed9bf45bf7e6"} Jan 05 20:05:14 crc kubenswrapper[4754]: I0105 20:05:14.596124 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"1c1f9cfd58b9c34b3833427b6f5eace5214798d0b5190cee693d1c05c3803a94"} Jan 05 20:05:14 crc kubenswrapper[4754]: I0105 20:05:14.596940 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8ebfe45ac5180daba536aaa0675e8bf112f8deecc89e728e3bf452201c2ef5d7"} Jan 05 20:05:14 crc kubenswrapper[4754]: I0105 20:05:14.597993 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e83941b0fafe124d040f3d7fdeafb2b428fc0ea46a2d8dabd53e4635a3a3fcba"} Jan 05 20:05:14 crc kubenswrapper[4754]: W0105 20:05:14.910480 4754 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused Jan 05 20:05:14 crc kubenswrapper[4754]: E0105 20:05:14.910618 4754 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError" Jan 05 20:05:14 crc kubenswrapper[4754]: E0105 20:05:14.928888 4754 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="1.6s" Jan 05 20:05:15 crc kubenswrapper[4754]: W0105 20:05:15.046090 4754 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused Jan 05 20:05:15 crc kubenswrapper[4754]: E0105 20:05:15.046200 4754 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError" Jan 05 20:05:15 crc kubenswrapper[4754]: W0105 20:05:15.046630 4754 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused Jan 05 20:05:15 crc kubenswrapper[4754]: E0105 20:05:15.046678 4754 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError" Jan 05 20:05:15 crc kubenswrapper[4754]: I0105 20:05:15.184088 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 20:05:15 crc kubenswrapper[4754]: I0105 20:05:15.185945 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:15 crc kubenswrapper[4754]: I0105 20:05:15.186043 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:15 crc kubenswrapper[4754]: I0105 20:05:15.186062 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 
05 20:05:15 crc kubenswrapper[4754]: I0105 20:05:15.186108 4754 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 05 20:05:15 crc kubenswrapper[4754]: E0105 20:05:15.186680 4754 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.201:6443: connect: connection refused" node="crc" Jan 05 20:05:15 crc kubenswrapper[4754]: I0105 20:05:15.518910 4754 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused Jan 05 20:05:15 crc kubenswrapper[4754]: I0105 20:05:15.522086 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 19:46:06.476381067 +0000 UTC Jan 05 20:05:15 crc kubenswrapper[4754]: I0105 20:05:15.601391 4754 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="e93ddefdc2ae79ac0734f5241554b3446c7b0236bfc061df66fa51f613326c5c" exitCode=0 Jan 05 20:05:15 crc kubenswrapper[4754]: I0105 20:05:15.601457 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"e93ddefdc2ae79ac0734f5241554b3446c7b0236bfc061df66fa51f613326c5c"} Jan 05 20:05:15 crc kubenswrapper[4754]: I0105 20:05:15.601562 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 20:05:15 crc kubenswrapper[4754]: I0105 20:05:15.602958 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:15 crc kubenswrapper[4754]: I0105 20:05:15.602993 4754 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:15 crc kubenswrapper[4754]: I0105 20:05:15.603004 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:15 crc kubenswrapper[4754]: I0105 20:05:15.607729 4754 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c23530b438be3aca6b0fe35d3db82053f6657a6dc82dce88febba5f987835908" exitCode=0 Jan 05 20:05:15 crc kubenswrapper[4754]: I0105 20:05:15.607802 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"c23530b438be3aca6b0fe35d3db82053f6657a6dc82dce88febba5f987835908"} Jan 05 20:05:15 crc kubenswrapper[4754]: I0105 20:05:15.607942 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 20:05:15 crc kubenswrapper[4754]: I0105 20:05:15.612635 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:15 crc kubenswrapper[4754]: I0105 20:05:15.612826 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:15 crc kubenswrapper[4754]: I0105 20:05:15.612927 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:15 crc kubenswrapper[4754]: I0105 20:05:15.613920 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9856ec7cdbdec899876548644f50704617e6ff1213ca7414204bc629e7c978e7"} Jan 05 20:05:15 crc kubenswrapper[4754]: I0105 20:05:15.613972 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"03bfe3e82ae99818c49e7c72d1dc4f308d7af95bf28af8043e9e832a1b21abc3"} Jan 05 20:05:15 crc kubenswrapper[4754]: I0105 20:05:15.613987 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6c242d9981c62ac494864a1509489965653e71656102883010a94a74b34d0360"} Jan 05 20:05:15 crc kubenswrapper[4754]: I0105 20:05:15.615210 4754 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="56090c19540b42d062192339ae5d8b024b580cf8111800623ca5e1da47108b1f" exitCode=0 Jan 05 20:05:15 crc kubenswrapper[4754]: I0105 20:05:15.615273 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"56090c19540b42d062192339ae5d8b024b580cf8111800623ca5e1da47108b1f"} Jan 05 20:05:15 crc kubenswrapper[4754]: I0105 20:05:15.615417 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 20:05:15 crc kubenswrapper[4754]: I0105 20:05:15.616711 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:15 crc kubenswrapper[4754]: I0105 20:05:15.616760 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:15 crc kubenswrapper[4754]: I0105 20:05:15.616780 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:15 crc kubenswrapper[4754]: I0105 20:05:15.616926 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 20:05:15 crc kubenswrapper[4754]: I0105 20:05:15.618636 4754 generic.go:334] "Generic 
(PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="3597cc8d0df3180c234edb8dc9a8469fff884940751cff4992b3fe91e31a1c01" exitCode=0 Jan 05 20:05:15 crc kubenswrapper[4754]: I0105 20:05:15.618721 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"3597cc8d0df3180c234edb8dc9a8469fff884940751cff4992b3fe91e31a1c01"} Jan 05 20:05:15 crc kubenswrapper[4754]: I0105 20:05:15.618733 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:15 crc kubenswrapper[4754]: I0105 20:05:15.618803 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:15 crc kubenswrapper[4754]: I0105 20:05:15.618824 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:15 crc kubenswrapper[4754]: I0105 20:05:15.618870 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 20:05:15 crc kubenswrapper[4754]: I0105 20:05:15.620597 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:15 crc kubenswrapper[4754]: I0105 20:05:15.620638 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:15 crc kubenswrapper[4754]: I0105 20:05:15.620656 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:15 crc kubenswrapper[4754]: I0105 20:05:15.633586 4754 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 05 20:05:15 crc kubenswrapper[4754]: E0105 20:05:15.637700 4754 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while 
requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError" Jan 05 20:05:16 crc kubenswrapper[4754]: W0105 20:05:16.182169 4754 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused Jan 05 20:05:16 crc kubenswrapper[4754]: E0105 20:05:16.182284 4754 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError" Jan 05 20:05:16 crc kubenswrapper[4754]: I0105 20:05:16.523035 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 04:29:41.232521389 +0000 UTC Jan 05 20:05:16 crc kubenswrapper[4754]: I0105 20:05:16.636524 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"bd54cc7c38e5bba32485b4473fdd72a5a0c5b372f1d7db0cece913e8e409d215"} Jan 05 20:05:16 crc kubenswrapper[4754]: I0105 20:05:16.636667 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"38a03ee8953af50bf2fff5b7105068d2d7d842cdf0c574391a4d54749b01ca22"} Jan 05 20:05:16 crc kubenswrapper[4754]: I0105 20:05:16.636685 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"390bcbe50677ac4c8cb350aeafb60830c75290b73e7b53253b80159acfd961c0"} Jan 05 20:05:16 crc kubenswrapper[4754]: I0105 20:05:16.636697 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"059ef5db0bf79ee2cf729934e418a13b36a27a29d42a390f7350213a9198f932"} Jan 05 20:05:16 crc kubenswrapper[4754]: I0105 20:05:16.641796 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4471dbe13ea078885052e0ac1bd33e3c2546b538a3116b814db463029020e16a"} Jan 05 20:05:16 crc kubenswrapper[4754]: I0105 20:05:16.641874 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 20:05:16 crc kubenswrapper[4754]: I0105 20:05:16.643526 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:16 crc kubenswrapper[4754]: I0105 20:05:16.643569 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:16 crc kubenswrapper[4754]: I0105 20:05:16.643584 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:16 crc kubenswrapper[4754]: I0105 20:05:16.645125 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"8d7b4b8676b2bffe7dab97957edff8d1b43d1ceec24dacf5fd3290454676ef80"} Jan 05 20:05:16 crc kubenswrapper[4754]: I0105 20:05:16.645170 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Jan 05 20:05:16 crc kubenswrapper[4754]: I0105 20:05:16.646587 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:16 crc kubenswrapper[4754]: I0105 20:05:16.646628 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:16 crc kubenswrapper[4754]: I0105 20:05:16.646658 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:16 crc kubenswrapper[4754]: I0105 20:05:16.648440 4754 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="6fd9ae087ff3fb2eb2530e47253004142ada4dfaec7b56b7c3161ec85700fe3f" exitCode=0 Jan 05 20:05:16 crc kubenswrapper[4754]: I0105 20:05:16.648537 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"6fd9ae087ff3fb2eb2530e47253004142ada4dfaec7b56b7c3161ec85700fe3f"} Jan 05 20:05:16 crc kubenswrapper[4754]: I0105 20:05:16.648557 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 20:05:16 crc kubenswrapper[4754]: I0105 20:05:16.649742 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:16 crc kubenswrapper[4754]: I0105 20:05:16.649768 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:16 crc kubenswrapper[4754]: I0105 20:05:16.649778 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:16 crc kubenswrapper[4754]: I0105 20:05:16.654522 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c10616861f2ba929b61240a37982e2cc54898ef9dac35fafcf9411cd9f83be4f"} Jan 05 20:05:16 crc kubenswrapper[4754]: I0105 20:05:16.654550 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"5b33f98972605c5d6d1e6bf41ddab69c5293f29b5502193a725d58a21acc4e53"} Jan 05 20:05:16 crc kubenswrapper[4754]: I0105 20:05:16.654565 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4f17c42519323db5df01fa3506c1000c70bfde5e5559eab9fd6aceef4f692c21"} Jan 05 20:05:16 crc kubenswrapper[4754]: I0105 20:05:16.654678 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 20:05:16 crc kubenswrapper[4754]: I0105 20:05:16.655941 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:16 crc kubenswrapper[4754]: I0105 20:05:16.655974 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:16 crc kubenswrapper[4754]: I0105 20:05:16.655988 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:16 crc kubenswrapper[4754]: I0105 20:05:16.787444 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 20:05:16 crc kubenswrapper[4754]: I0105 20:05:16.790025 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:16 crc kubenswrapper[4754]: I0105 20:05:16.790113 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 
20:05:16 crc kubenswrapper[4754]: I0105 20:05:16.790139 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:16 crc kubenswrapper[4754]: I0105 20:05:16.790189 4754 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 05 20:05:17 crc kubenswrapper[4754]: I0105 20:05:17.285441 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 20:05:17 crc kubenswrapper[4754]: I0105 20:05:17.524399 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 05:39:49.546616921 +0000 UTC Jan 05 20:05:17 crc kubenswrapper[4754]: I0105 20:05:17.661977 4754 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="e8cd7651946b5e61feead5aa4780e3224ab07431ed710fea897d8edaccd04bdc" exitCode=0 Jan 05 20:05:17 crc kubenswrapper[4754]: I0105 20:05:17.662118 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"e8cd7651946b5e61feead5aa4780e3224ab07431ed710fea897d8edaccd04bdc"} Jan 05 20:05:17 crc kubenswrapper[4754]: I0105 20:05:17.662200 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 20:05:17 crc kubenswrapper[4754]: I0105 20:05:17.663771 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:17 crc kubenswrapper[4754]: I0105 20:05:17.663828 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:17 crc kubenswrapper[4754]: I0105 20:05:17.663851 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:17 crc 
kubenswrapper[4754]: I0105 20:05:17.667053 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c7292a301bdffd4a5f2a0aa5b9a6e4342a475c800e1a85b3a2587787e6fd533d"} Jan 05 20:05:17 crc kubenswrapper[4754]: I0105 20:05:17.667101 4754 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 05 20:05:17 crc kubenswrapper[4754]: I0105 20:05:17.667168 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 20:05:17 crc kubenswrapper[4754]: I0105 20:05:17.667230 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 20:05:17 crc kubenswrapper[4754]: I0105 20:05:17.667274 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 20:05:17 crc kubenswrapper[4754]: I0105 20:05:17.667363 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 20:05:17 crc kubenswrapper[4754]: I0105 20:05:17.669600 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:17 crc kubenswrapper[4754]: I0105 20:05:17.669656 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:17 crc kubenswrapper[4754]: I0105 20:05:17.669685 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:17 crc kubenswrapper[4754]: I0105 20:05:17.669691 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:17 crc kubenswrapper[4754]: I0105 20:05:17.669741 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:17 crc kubenswrapper[4754]: 
I0105 20:05:17.669763 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:17 crc kubenswrapper[4754]: I0105 20:05:17.669611 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:17 crc kubenswrapper[4754]: I0105 20:05:17.669839 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:17 crc kubenswrapper[4754]: I0105 20:05:17.669861 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:17 crc kubenswrapper[4754]: I0105 20:05:17.669946 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:17 crc kubenswrapper[4754]: I0105 20:05:17.669985 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:17 crc kubenswrapper[4754]: I0105 20:05:17.670009 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:18 crc kubenswrapper[4754]: I0105 20:05:18.525092 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 15:14:02.044287752 +0000 UTC Jan 05 20:05:18 crc kubenswrapper[4754]: I0105 20:05:18.679318 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a1e7540c3ca0974ed0ba787bce3891f51792e56eef18e80408fe4f92da59d422"} Jan 05 20:05:18 crc kubenswrapper[4754]: I0105 20:05:18.679410 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"af4ea65f9205e3fddfdc4fe974d1d6ab0fdc9fe93d597a75d400106a559406da"} Jan 05 20:05:18 crc kubenswrapper[4754]: I0105 20:05:18.679421 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 20:05:18 crc kubenswrapper[4754]: I0105 20:05:18.679437 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"68f8bbe575c90284b3f1c9957dae0bb23e1359bed60ea6b6d4b83876096b554e"} Jan 05 20:05:18 crc kubenswrapper[4754]: I0105 20:05:18.679537 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 20:05:18 crc kubenswrapper[4754]: I0105 20:05:18.679674 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 20:05:18 crc kubenswrapper[4754]: I0105 20:05:18.680964 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:18 crc kubenswrapper[4754]: I0105 20:05:18.681035 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:18 crc kubenswrapper[4754]: I0105 20:05:18.681058 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:18 crc kubenswrapper[4754]: I0105 20:05:18.681161 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:18 crc kubenswrapper[4754]: I0105 20:05:18.681207 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:18 crc kubenswrapper[4754]: I0105 20:05:18.681229 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 
20:05:19 crc kubenswrapper[4754]: I0105 20:05:19.525889 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 05:32:12.706414946 +0000 UTC Jan 05 20:05:19 crc kubenswrapper[4754]: I0105 20:05:19.652634 4754 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 05 20:05:19 crc kubenswrapper[4754]: I0105 20:05:19.714060 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b9483351948204248622f3938ae234f8dee1b85c13b4ec0f477eea59921041fb"} Jan 05 20:05:19 crc kubenswrapper[4754]: I0105 20:05:19.714129 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 20:05:19 crc kubenswrapper[4754]: I0105 20:05:19.714151 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e378089489a86bee0d83de2eeac844b9c329b9c2d7e8c610b4ad11266cd325d9"} Jan 05 20:05:19 crc kubenswrapper[4754]: I0105 20:05:19.714159 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 20:05:19 crc kubenswrapper[4754]: I0105 20:05:19.715619 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:19 crc kubenswrapper[4754]: I0105 20:05:19.715671 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:19 crc kubenswrapper[4754]: I0105 20:05:19.715689 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:19 crc kubenswrapper[4754]: I0105 20:05:19.715620 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 05 20:05:19 crc kubenswrapper[4754]: I0105 20:05:19.715730 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:19 crc kubenswrapper[4754]: I0105 20:05:19.715750 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:20 crc kubenswrapper[4754]: I0105 20:05:20.526553 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 13:38:57.51862809 +0000 UTC Jan 05 20:05:20 crc kubenswrapper[4754]: I0105 20:05:20.717439 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 20:05:20 crc kubenswrapper[4754]: I0105 20:05:20.718954 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:20 crc kubenswrapper[4754]: I0105 20:05:20.719011 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:20 crc kubenswrapper[4754]: I0105 20:05:20.719028 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:20 crc kubenswrapper[4754]: I0105 20:05:20.723392 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 20:05:20 crc kubenswrapper[4754]: I0105 20:05:20.723732 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 20:05:20 crc kubenswrapper[4754]: I0105 20:05:20.725502 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:20 crc kubenswrapper[4754]: I0105 20:05:20.725572 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 
20:05:20 crc kubenswrapper[4754]: I0105 20:05:20.725592 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:21 crc kubenswrapper[4754]: I0105 20:05:21.025845 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 20:05:21 crc kubenswrapper[4754]: I0105 20:05:21.527680 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 10:15:48.981113237 +0000 UTC Jan 05 20:05:21 crc kubenswrapper[4754]: I0105 20:05:21.721102 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 20:05:21 crc kubenswrapper[4754]: I0105 20:05:21.722237 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:21 crc kubenswrapper[4754]: I0105 20:05:21.722272 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:21 crc kubenswrapper[4754]: I0105 20:05:21.722283 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:21 crc kubenswrapper[4754]: I0105 20:05:21.975275 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 05 20:05:21 crc kubenswrapper[4754]: I0105 20:05:21.975562 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 20:05:21 crc kubenswrapper[4754]: I0105 20:05:21.977243 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:21 crc kubenswrapper[4754]: I0105 20:05:21.977348 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:21 crc kubenswrapper[4754]: I0105 
20:05:21.977386 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:22 crc kubenswrapper[4754]: I0105 20:05:22.528405 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 07:42:54.594727743 +0000 UTC Jan 05 20:05:22 crc kubenswrapper[4754]: I0105 20:05:22.548380 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 20:05:22 crc kubenswrapper[4754]: I0105 20:05:22.548670 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 20:05:22 crc kubenswrapper[4754]: I0105 20:05:22.550025 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:22 crc kubenswrapper[4754]: I0105 20:05:22.550074 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:22 crc kubenswrapper[4754]: I0105 20:05:22.550085 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:22 crc kubenswrapper[4754]: I0105 20:05:22.827712 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 20:05:22 crc kubenswrapper[4754]: I0105 20:05:22.827994 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 20:05:22 crc kubenswrapper[4754]: I0105 20:05:22.829793 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:22 crc kubenswrapper[4754]: I0105 20:05:22.829874 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:22 crc 
kubenswrapper[4754]: I0105 20:05:22.829912 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:23 crc kubenswrapper[4754]: I0105 20:05:23.134183 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 05 20:05:23 crc kubenswrapper[4754]: I0105 20:05:23.134448 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 20:05:23 crc kubenswrapper[4754]: I0105 20:05:23.136721 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:23 crc kubenswrapper[4754]: I0105 20:05:23.136914 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:23 crc kubenswrapper[4754]: I0105 20:05:23.137071 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:23 crc kubenswrapper[4754]: I0105 20:05:23.529637 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 05:58:59.353123477 +0000 UTC Jan 05 20:05:23 crc kubenswrapper[4754]: I0105 20:05:23.558411 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 20:05:23 crc kubenswrapper[4754]: I0105 20:05:23.568151 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 20:05:23 crc kubenswrapper[4754]: E0105 20:05:23.690011 4754 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 05 20:05:23 crc kubenswrapper[4754]: I0105 20:05:23.727387 4754 kubelet_node_status.go:401] "Setting node annotation to 
enable volume controller attach/detach" Jan 05 20:05:23 crc kubenswrapper[4754]: I0105 20:05:23.729073 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:23 crc kubenswrapper[4754]: I0105 20:05:23.729133 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:23 crc kubenswrapper[4754]: I0105 20:05:23.729147 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:24 crc kubenswrapper[4754]: I0105 20:05:24.531338 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 22:39:17.094985795 +0000 UTC Jan 05 20:05:24 crc kubenswrapper[4754]: I0105 20:05:24.730639 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 20:05:24 crc kubenswrapper[4754]: I0105 20:05:24.732486 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:24 crc kubenswrapper[4754]: I0105 20:05:24.732725 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:24 crc kubenswrapper[4754]: I0105 20:05:24.732955 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:24 crc kubenswrapper[4754]: I0105 20:05:24.735893 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 20:05:25 crc kubenswrapper[4754]: I0105 20:05:25.532269 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 19:24:25.832609508 +0000 UTC Jan 05 20:05:25 crc kubenswrapper[4754]: I0105 20:05:25.548789 
4754 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 20:05:25 crc kubenswrapper[4754]: I0105 20:05:25.548912 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 20:05:25 crc kubenswrapper[4754]: I0105 20:05:25.734648 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 20:05:25 crc kubenswrapper[4754]: I0105 20:05:25.736516 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:25 crc kubenswrapper[4754]: I0105 20:05:25.736559 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:25 crc kubenswrapper[4754]: I0105 20:05:25.736576 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:26 crc kubenswrapper[4754]: I0105 20:05:26.519007 4754 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 05 20:05:26 crc kubenswrapper[4754]: E0105 20:05:26.529657 4754 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while 
waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Jan 05 20:05:26 crc kubenswrapper[4754]: I0105 20:05:26.533857 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 18:03:49.33282601 +0000 UTC Jan 05 20:05:26 crc kubenswrapper[4754]: E0105 20:05:26.791944 4754 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Jan 05 20:05:27 crc kubenswrapper[4754]: I0105 20:05:27.047098 4754 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 05 20:05:27 crc kubenswrapper[4754]: I0105 20:05:27.047196 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 05 20:05:27 crc kubenswrapper[4754]: I0105 20:05:27.534031 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 15:05:38.006240961 +0000 UTC Jan 05 20:05:27 crc kubenswrapper[4754]: W0105 20:05:27.606632 4754 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 05 20:05:27 crc kubenswrapper[4754]: I0105 20:05:27.606930 4754 trace.go:236] Trace[1793527812]: "Reflector ListAndWatch" 
name:k8s.io/client-go/informers/factory.go:160 (05-Jan-2026 20:05:17.605) (total time: 10001ms): Jan 05 20:05:27 crc kubenswrapper[4754]: Trace[1793527812]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (20:05:27.606) Jan 05 20:05:27 crc kubenswrapper[4754]: Trace[1793527812]: [10.001489114s] [10.001489114s] END Jan 05 20:05:27 crc kubenswrapper[4754]: E0105 20:05:27.607122 4754 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 05 20:05:27 crc kubenswrapper[4754]: I0105 20:05:27.628871 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 05 20:05:27 crc kubenswrapper[4754]: I0105 20:05:27.629184 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 20:05:27 crc kubenswrapper[4754]: I0105 20:05:27.630635 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:27 crc kubenswrapper[4754]: I0105 20:05:27.630753 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:27 crc kubenswrapper[4754]: I0105 20:05:27.630816 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:27 crc kubenswrapper[4754]: E0105 20:05:27.680943 4754 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{crc.1887ee6900a2f329 default 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-05 20:05:13.515864873 +0000 UTC m=+0.225048747,LastTimestamp:2026-01-05 20:05:13.515864873 +0000 UTC m=+0.225048747,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 05 20:05:27 crc kubenswrapper[4754]: I0105 20:05:27.841532 4754 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 05 20:05:27 crc kubenswrapper[4754]: I0105 20:05:27.841658 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 05 20:05:27 crc kubenswrapper[4754]: I0105 20:05:27.846638 4754 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 05 20:05:27 crc kubenswrapper[4754]: I0105 20:05:27.846703 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" 
probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 05 20:05:28 crc kubenswrapper[4754]: I0105 20:05:28.534832 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 00:45:02.542660588 +0000 UTC Jan 05 20:05:29 crc kubenswrapper[4754]: I0105 20:05:29.535412 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 08:50:19.942893391 +0000 UTC Jan 05 20:05:29 crc kubenswrapper[4754]: I0105 20:05:29.992522 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 20:05:29 crc kubenswrapper[4754]: I0105 20:05:29.993694 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:29 crc kubenswrapper[4754]: I0105 20:05:29.993737 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:29 crc kubenswrapper[4754]: I0105 20:05:29.993748 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:29 crc kubenswrapper[4754]: I0105 20:05:29.993781 4754 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 05 20:05:29 crc kubenswrapper[4754]: E0105 20:05:29.995751 4754 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 05 20:05:30 crc kubenswrapper[4754]: I0105 20:05:30.536375 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 12:54:28.13898081 +0000 UTC Jan 05 20:05:30 crc kubenswrapper[4754]: I0105 20:05:30.820766 4754 reflector.go:368] Caches populated for *v1.Node from 
k8s.io/client-go/informers/factory.go:160 Jan 05 20:05:31 crc kubenswrapper[4754]: I0105 20:05:31.030040 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 20:05:31 crc kubenswrapper[4754]: I0105 20:05:31.037908 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 20:05:31 crc kubenswrapper[4754]: I0105 20:05:31.523210 4754 apiserver.go:52] "Watching apiserver" Jan 05 20:05:31 crc kubenswrapper[4754]: I0105 20:05:31.527710 4754 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 05 20:05:31 crc kubenswrapper[4754]: I0105 20:05:31.528179 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-apiserver/kube-apiserver-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"] Jan 05 20:05:31 crc kubenswrapper[4754]: I0105 20:05:31.528713 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 05 20:05:31 crc kubenswrapper[4754]: I0105 20:05:31.529022 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 05 20:05:31 crc kubenswrapper[4754]: I0105 20:05:31.529088 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 05 20:05:31 crc kubenswrapper[4754]: I0105 20:05:31.529135 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 20:05:31 crc kubenswrapper[4754]: I0105 20:05:31.528796 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 20:05:31 crc kubenswrapper[4754]: E0105 20:05:31.529269 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 20:05:31 crc kubenswrapper[4754]: I0105 20:05:31.528835 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 20:05:31 crc kubenswrapper[4754]: E0105 20:05:31.529338 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 20:05:31 crc kubenswrapper[4754]: E0105 20:05:31.529387 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 20:05:31 crc kubenswrapper[4754]: I0105 20:05:31.534502 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 05 20:05:31 crc kubenswrapper[4754]: I0105 20:05:31.535121 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 05 20:05:31 crc kubenswrapper[4754]: I0105 20:05:31.535462 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 05 20:05:31 crc kubenswrapper[4754]: I0105 20:05:31.535525 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 05 20:05:31 crc kubenswrapper[4754]: I0105 20:05:31.535655 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 05 20:05:31 crc kubenswrapper[4754]: I0105 20:05:31.535682 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 05 20:05:31 crc kubenswrapper[4754]: I0105 20:05:31.535900 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 05 20:05:31 crc kubenswrapper[4754]: I0105 20:05:31.537559 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 18:23:13.158420172 +0000 UTC Jan 05 20:05:31 crc kubenswrapper[4754]: I0105 20:05:31.537841 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 05 20:05:31 crc kubenswrapper[4754]: I0105 20:05:31.537881 4754 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-network-operator"/"metrics-tls" Jan 05 20:05:31 crc kubenswrapper[4754]: I0105 20:05:31.578725 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 20:05:31 crc kubenswrapper[4754]: I0105 20:05:31.617825 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 20:05:31 crc kubenswrapper[4754]: I0105 20:05:31.623449 4754 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 05 20:05:31 crc kubenswrapper[4754]: I0105 20:05:31.649610 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 20:05:31 crc kubenswrapper[4754]: I0105 20:05:31.663629 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a69bbae-22d1-4837-a7d0-d1f6ee5f8659\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:13Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://059ef5db0bf79ee2cf729934e418a13b36a27a29d42a390f7350213a9198f932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38a03ee8953af50bf2fff5b7105068d2d7d842cdf0c574391a4d54749b01ca22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\"
:\\\"cri-o://390bcbe50677ac4c8cb350aeafb60830c75290b73e7b53253b80159acfd961c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7292a301bdffd4a5f2a0aa5b9a6e4342a475c800e1a85b3a2587787e6fd533d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd54cc7c38e5bba32485b4473fdd72a5a0c5b372f1d7db0cece913e8e409d215\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-oper
ator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23530b438be3aca6b0fe35d3db82053f6657a6dc82dce88febba5f987835908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23530b438be3aca6b0fe35d3db82053f6657a6dc82dce88febba5f987835908\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T20:05:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T20:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T20:05:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 20:05:31 crc kubenswrapper[4754]: I0105 20:05:31.672702 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 05 20:05:31 crc kubenswrapper[4754]: I0105 20:05:31.680716 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 20:05:31 crc kubenswrapper[4754]: I0105 20:05:31.690889 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 20:05:31 crc kubenswrapper[4754]: I0105 20:05:31.701668 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 20:05:31 crc kubenswrapper[4754]: I0105 20:05:31.712584 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 20:05:31 crc kubenswrapper[4754]: I0105 20:05:31.722428 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 20:05:31 crc kubenswrapper[4754]: I0105 20:05:31.733311 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 20:05:31 crc kubenswrapper[4754]: I0105 20:05:31.744633 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a69bbae-22d1-4837-a7d0-d1f6ee5f8659\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://059ef5db0bf79ee2cf729934e418a13b36a27a29d42a390f7350213a9198f932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38a03ee8953af50bf2fff5b7105068d2d7d842cdf0c574391a4d54749b01ca22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://390bcbe50677ac4c8cb350aeafb60830c75290b73e7b53253b80159acfd961c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7292a301bdffd4a5f2a0aa5b9a6e4342a475c800e1a85b3a2587787e6fd533d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd54cc7c38e5bba32485b4473fdd72a5a0c5b372f1d7db0cece913e8e409d215\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23530b438be3aca6b0fe35d3db82053f6657a6dc82dce88febba5f987835908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23530b438be3aca6b0fe35d3db82053f6657a6dc82dce88febba5f987835908\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T20:05:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"
2026-01-05T20:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T20:05:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 20:05:31 crc kubenswrapper[4754]: I0105 20:05:31.754893 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 20:05:31 crc kubenswrapper[4754]: E0105 20:05:31.755231 4754 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 20:05:31 crc kubenswrapper[4754]: I0105 20:05:31.766134 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.538150 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 07:30:47.423535918 +0000 UTC Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.752445 4754 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.849839 
4754 trace.go:236] Trace[395603391]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (05-Jan-2026 20:05:17.897) (total time: 14952ms): Jan 05 20:05:32 crc kubenswrapper[4754]: Trace[395603391]: ---"Objects listed" error: 14952ms (20:05:32.849) Jan 05 20:05:32 crc kubenswrapper[4754]: Trace[395603391]: [14.952508741s] [14.952508741s] END Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.849908 4754 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.850070 4754 trace.go:236] Trace[1550566437]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (05-Jan-2026 20:05:20.093) (total time: 12756ms): Jan 05 20:05:32 crc kubenswrapper[4754]: Trace[1550566437]: ---"Objects listed" error: 12756ms (20:05:32.849) Jan 05 20:05:32 crc kubenswrapper[4754]: Trace[1550566437]: [12.75624455s] [12.75624455s] END Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.850096 4754 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.850660 4754 trace.go:236] Trace[333905628]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (05-Jan-2026 20:05:18.068) (total time: 14782ms): Jan 05 20:05:32 crc kubenswrapper[4754]: Trace[333905628]: ---"Objects listed" error: 14781ms (20:05:32.850) Jan 05 20:05:32 crc kubenswrapper[4754]: Trace[333905628]: [14.782036469s] [14.782036469s] END Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.850688 4754 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.852067 4754 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.853163 4754 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from 
k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.910804 4754 csr.go:261] certificate signing request csr-45gvs is approved, waiting to be issued Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.953153 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.953204 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.953232 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.953253 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.953274 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 05 20:05:32 crc 
kubenswrapper[4754]: I0105 20:05:32.953313 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.953334 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.953356 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.953409 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.953433 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.953457 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod 
\"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.953484 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.953510 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.953529 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.953546 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.953565 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.953581 4754 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.953599 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.953616 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.953637 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.953654 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.953670 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod 
\"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.953687 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.953695 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.953713 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.953785 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.953808 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 
20:05:32.953827 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.953845 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.953860 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.953877 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.953861 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.953892 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.953914 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.953930 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.953947 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.953965 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.953984 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.954003 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.954021 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.954037 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.954054 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.954072 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: 
\"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.954087 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.954105 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.954124 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.954140 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.954156 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.954171 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.954186 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.954202 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.954218 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.954333 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.954342 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: 
"43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.954352 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.954402 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.954425 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.954442 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.954459 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 05 20:05:32 crc kubenswrapper[4754]: 
I0105 20:05:32.954476 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.954498 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.954521 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.954539 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.954555 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.954573 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: 
\"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.954590 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.954605 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.954623 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.954641 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.954658 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 
20:05:32.954674 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.954691 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.954713 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.954742 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.954760 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.954777 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.954794 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.954812 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.954829 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.954848 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.954865 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.954887 4754 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.954909 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.954932 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.954957 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.954983 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.955008 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.955032 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.955057 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.955081 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.955105 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.955127 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.955159 4754 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.955183 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.955206 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.955233 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.955258 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.955338 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 05 20:05:32 crc 
kubenswrapper[4754]: I0105 20:05:32.955357 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.955396 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.955415 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.955430 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.955446 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.955462 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: 
\"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.955527 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.955545 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.955562 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.955580 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.955598 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 05 20:05:32 crc 
kubenswrapper[4754]: I0105 20:05:32.955616 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.955633 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.955649 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.955667 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.955689 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.955711 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.955735 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.955760 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.955785 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.955810 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.955832 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.955858 4754 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.955880 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.955902 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.955924 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.955949 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.955971 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.955996 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.956020 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.956043 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.956067 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.956090 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.956114 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.956136 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.956164 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.956189 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.956213 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.956239 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 05 20:05:32 crc 
kubenswrapper[4754]: I0105 20:05:32.956262 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.956303 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.956329 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.956363 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.956385 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.956412 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.956435 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.956457 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.956479 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.956503 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.956526 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.956550 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.956574 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.956595 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.956618 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.956644 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.956666 4754 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.956689 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.956714 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.956740 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.956765 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.956792 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.956817 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.956840 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.956864 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.956889 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.956914 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 05 
20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.956940 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.956966 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.956990 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.957016 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.957056 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.957086 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: 
\"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.957112 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.957137 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.957162 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.957188 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.957214 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 
20:05:32.957240 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.957264 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.957308 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.957335 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.957362 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.957389 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.957416 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.957443 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.957478 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.957506 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.957533 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 05 20:05:32 crc kubenswrapper[4754]: 
I0105 20:05:32.957556 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.957580 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.957604 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.957628 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.957652 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.957676 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod 
\"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.957701 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.957725 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.957749 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.957773 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.957800 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.957824 4754 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.957876 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.957907 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.957934 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.957960 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 05 20:05:32 crc 
kubenswrapper[4754]: I0105 20:05:32.957986 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.958012 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.958044 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.958073 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.958098 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.958124 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.958152 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.958176 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.958204 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.958231 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: 
\"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.958473 4754 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.958492 4754 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.958505 4754 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.954498 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.954493 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.954539 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.954568 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.954615 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.954825 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.954847 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.955001 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.955110 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.955140 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.955183 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.955523 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.955809 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.955853 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.955965 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.956269 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.956559 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.958173 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.958333 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.958454 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.958571 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.958760 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.958887 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.959203 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.959317 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.959314 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.959463 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.959609 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.959746 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.960537 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.960749 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.960914 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.960946 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.961055 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.960995 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.961015 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.961129 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.961252 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.961351 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.961521 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.961557 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.961547 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.961562 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.961913 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.962048 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.962221 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.962395 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.962407 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.962618 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.962734 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.962754 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.962804 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.963031 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.963047 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.963235 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.963557 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: E0105 20:05:32.963636 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:05:33.463590192 +0000 UTC m=+20.172774156 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.963681 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.963882 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.964063 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.964482 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.964599 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.964816 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.965036 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.965166 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.965350 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: E0105 20:05:32.965398 4754 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 05 20:05:32 crc kubenswrapper[4754]: E0105 20:05:32.965739 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-05 20:05:33.465707257 +0000 UTC m=+20.174891141 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.966006 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.966063 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.966100 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.966448 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.966548 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.966563 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.967058 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.967507 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.967795 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.967807 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.967919 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.968328 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.968382 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.968433 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.968522 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.968547 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.968650 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.968834 4754 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.968967 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.969254 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.969438 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.969707 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.969932 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.969931 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.970411 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.970503 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.970874 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.970978 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.971066 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.971805 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.972020 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.972233 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.972445 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.972578 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.973416 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.974204 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.974880 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.974888 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.974906 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.975209 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.975504 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.975841 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.976070 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.976144 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.976669 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.976997 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.971157 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: E0105 20:05:32.971209 4754 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 05 20:05:32 crc kubenswrapper[4754]: E0105 20:05:32.977097 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-05 20:05:33.477077232 +0000 UTC m=+20.186261106 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.977286 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.977338 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.977680 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:32 crc kubenswrapper[4754]: I0105 20:05:32.978934 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: E0105 20:05:33.010857 4754 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 05 20:05:33 crc kubenswrapper[4754]: E0105 20:05:33.010919 4754 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 05 20:05:33 crc kubenswrapper[4754]: E0105 20:05:33.010943 4754 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 20:05:33 crc kubenswrapper[4754]: E0105 20:05:33.011038 4754 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-05 20:05:33.511013795 +0000 UTC m=+20.220197679 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 20:05:33 crc kubenswrapper[4754]: E0105 20:05:33.011797 4754 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 05 20:05:33 crc kubenswrapper[4754]: E0105 20:05:33.011928 4754 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 05 20:05:33 crc kubenswrapper[4754]: E0105 20:05:33.012018 4754 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 20:05:33 crc kubenswrapper[4754]: E0105 20:05:33.012212 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-05 20:05:33.512180435 +0000 UTC m=+20.221364319 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.059570 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.059637 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.059684 4754 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.059697 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.059707 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node 
\"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.059717 4754 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.059727 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.059737 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.059745 4754 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.059753 4754 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.059763 4754 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.059772 4754 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.059780 4754 reconciler_common.go:293] "Volume detached for volume 
\"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.059790 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.059800 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.059809 4754 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.059819 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.059828 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.059838 4754 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.059846 4754 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.059855 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.059864 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.059873 4754 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.059882 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.059891 4754 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.059900 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.059909 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node 
\"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.059917 4754 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.059926 4754 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.059935 4754 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.059944 4754 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.059952 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.059960 4754 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.059970 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.059979 
4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.059990 4754 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.059998 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060007 4754 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060015 4754 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060023 4754 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060031 4754 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060040 4754 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060049 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060057 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060067 4754 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060077 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060087 4754 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060096 4754 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060105 4754 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060114 4754 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060123 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060136 4754 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060147 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060157 4754 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060168 4754 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060177 4754 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 05 
20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060186 4754 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060196 4754 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060205 4754 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060213 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060222 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060232 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060241 4754 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060251 4754 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060259 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060268 4754 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060277 4754 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060287 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060313 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060322 4754 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060331 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: 
\"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060340 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060349 4754 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060358 4754 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060366 4754 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060375 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060384 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060392 4754 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060401 4754 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060411 4754 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060420 4754 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060429 4754 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060444 4754 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060452 4754 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060461 4754 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060470 4754 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060479 4754 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060489 4754 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060498 4754 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060506 4754 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060518 4754 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060526 4754 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" 
DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060535 4754 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060543 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060553 4754 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060562 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060572 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060582 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060592 4754 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060600 4754 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060610 4754 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060618 4754 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060627 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060635 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060646 4754 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060657 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060668 4754 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node 
\"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060678 4754 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060689 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060701 4754 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060526 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060730 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060712 4754 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060848 4754 reconciler_common.go:293] "Volume detached for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060877 4754 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060898 4754 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060919 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.060938 4754 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.468438 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:05:33 crc kubenswrapper[4754]: E0105 20:05:33.468835 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-05 20:05:34.46874552 +0000 UTC m=+21.177929424 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.469057 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 20:05:33 crc kubenswrapper[4754]: E0105 20:05:33.469257 4754 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 05 20:05:33 crc kubenswrapper[4754]: E0105 20:05:33.470607 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-05 20:05:34.469534271 +0000 UTC m=+21.178718185 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.539063 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 14:29:17.273031982 +0000 UTC Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.571163 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.571225 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.571267 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 20:05:33 crc kubenswrapper[4754]: E0105 20:05:33.571456 4754 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 05 20:05:33 crc kubenswrapper[4754]: E0105 20:05:33.571507 4754 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 05 20:05:33 crc kubenswrapper[4754]: E0105 20:05:33.571532 4754 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 20:05:33 crc kubenswrapper[4754]: E0105 20:05:33.571542 4754 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 05 20:05:33 crc kubenswrapper[4754]: E0105 20:05:33.571569 4754 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 05 20:05:33 crc kubenswrapper[4754]: E0105 20:05:33.571589 4754 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 20:05:33 crc kubenswrapper[4754]: E0105 20:05:33.571583 4754 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 05 20:05:33 crc kubenswrapper[4754]: E0105 20:05:33.571635 4754 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-05 20:05:34.571606356 +0000 UTC m=+21.280790260 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 20:05:33 crc kubenswrapper[4754]: E0105 20:05:33.571666 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-05 20:05:34.571653957 +0000 UTC m=+21.280837861 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 20:05:33 crc kubenswrapper[4754]: E0105 20:05:33.571732 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-05 20:05:34.571690458 +0000 UTC m=+21.280874442 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.588244 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.588387 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 20:05:33 crc kubenswrapper[4754]: E0105 20:05:33.588452 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.588510 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 20:05:33 crc kubenswrapper[4754]: E0105 20:05:33.588719 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 20:05:33 crc kubenswrapper[4754]: E0105 20:05:33.588815 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.595003 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.596107 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.672259 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.672777 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.672865 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.672884 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.672391 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.673181 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.673431 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.673441 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.673516 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.673589 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 05 20:05:33 crc kubenswrapper[4754]: W0105 20:05:33.673782 4754 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes/kubernetes.io~projected/kube-api-access-249nr Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.673788 4754 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.673807 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.673575 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.673850 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: W0105 20:05:33.673888 4754 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes/kubernetes.io~projected/kube-api-access-x7zkh Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.673957 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.673954 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: W0105 20:05:33.673987 4754 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.674017 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.674207 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.674200 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.674678 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.674768 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: W0105 20:05:33.674953 4754 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~projected/bound-sa-token Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.674987 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.675035 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.675953 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.676597 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 05 20:05:33 crc kubenswrapper[4754]: W0105 20:05:33.676680 4754 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~secret/marketplace-operator-metrics Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.676695 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.676920 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.677121 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.677454 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.678375 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.678595 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 05 20:05:33 crc kubenswrapper[4754]: W0105 20:05:33.681270 4754 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes/kubernetes.io~secret/srv-cert Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.681336 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: W0105 20:05:33.681654 4754 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~projected/kube-api-access-d6qdx Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.681690 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.699007 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.699992 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.700168 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.700346 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.700495 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.700704 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.700017 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.700260 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.700493 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.700829 4754 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.701190 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.701350 4754 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.701483 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: 
\"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.701590 4754 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.701702 4754 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.701814 4754 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.701932 4754 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.702041 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.702186 4754 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.702327 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: 
\"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.702455 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.701343 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.702572 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.702597 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.702668 4754 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.702815 4754 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.702601 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.703677 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.703680 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.706056 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.706579 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.710780 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.715510 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.716647 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.716686 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.725872 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.729914 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.731943 4754 csr.go:257] certificate signing request csr-45gvs is issued Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.732933 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.733568 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.734499 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.735373 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.736008 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.736416 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.736623 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.736930 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.737119 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.742481 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.742606 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.753099 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.755841 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.756158 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.756285 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.756430 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.756474 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.756345 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.756755 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.759268 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.759315 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.760033 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.760134 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.760191 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.760178 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.760261 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.760264 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.760312 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.760389 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.760510 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.760689 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.760928 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.761113 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.761377 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.761589 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.762132 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.762238 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.762359 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.763532 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.765283 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.765389 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.766903 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.767869 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.768399 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.768500 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.771590 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.776499 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.778496 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.780675 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.783023 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.783837 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.791039 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.803683 4754 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.803716 4754 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.803730 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.803744 4754 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.803767 4754 reconciler_common.go:293] "Volume detached 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.803777 4754 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.803786 4754 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.803798 4754 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.803807 4754 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.803816 4754 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.803825 4754 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 
20:05:33.803834 4754 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.803847 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.803859 4754 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.803869 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.803882 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.803892 4754 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.803905 4754 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.803916 4754 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.803926 4754 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.803936 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.803945 4754 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.803954 4754 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.803961 4754 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.803970 4754 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.803977 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: 
\"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.803985 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.803994 4754 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.804002 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.804009 4754 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.804017 4754 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.804025 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.804033 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc 
kubenswrapper[4754]: I0105 20:05:33.804040 4754 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.804047 4754 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.804056 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.804064 4754 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.804073 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.804082 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.804090 4754 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.804099 4754 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.804107 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.804115 4754 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.804123 4754 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.804131 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.804142 4754 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.804151 4754 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.804160 4754 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.804170 4754 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.804179 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.804189 4754 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.804198 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.804206 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.804216 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.804224 4754 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on 
node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.806506 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.808068 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.816231 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.816957 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.817543 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.819273 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.819959 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.822616 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.823577 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.827422 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a69bbae-22d1-4837-a7d0-d1f6ee5f8659\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://059ef5db0bf79ee2cf729934e418a13b36a27a29d42a390f7350213a9198f932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38a03ee8953af50bf2fff5b7105068d2d7d842cdf0c574391a4d54749b01ca22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://390bcbe50677ac4c8cb350aeafb60830c75290b73e7b53253b80159acfd961c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7292a301bdffd4a5f2a0aa5b9a6e4342a475c800e1a85b3a2587787e6fd533d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd54cc7c38e5bba32485b4473fdd72a5a0c5
b372f1d7db0cece913e8e409d215\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23530b438be3aca6b0fe35d3db82053f6657a6dc82dce88febba5f987835908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23530b438be3aca6b0fe35d3db82053f6657a6dc82dce88febba5f987835908\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T20:05:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T20:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T20:05:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.827492 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.828324 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.829607 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.831733 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.857814 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.871834 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.875206 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.875990 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.877670 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.877270 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.882462 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.883055 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.883854 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.885536 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.887350 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.887461 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.887539 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.888589 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.889062 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.889538 4754 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.889635 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.905002 4754 reconciler_common.go:293] "Volume 
detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.905028 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.905038 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.948223 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.955332 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 05 20:05:33 crc kubenswrapper[4754]: I0105 20:05:33.961260 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 05 20:05:33 crc kubenswrapper[4754]: W0105 20:05:33.969445 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-30d8295d8961c4986edb729a6208af262de5b7abfbb196a5ee66cb324a9624f9 WatchSource:0}: Error finding container 30d8295d8961c4986edb729a6208af262de5b7abfbb196a5ee66cb324a9624f9: Status 404 returned error can't find the container with id 30d8295d8961c4986edb729a6208af262de5b7abfbb196a5ee66cb324a9624f9 Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.108701 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.109871 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.111844 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.115217 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.116236 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.117726 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.118676 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.120137 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.120887 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.122340 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.123185 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.124552 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.125203 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.126486 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.127256 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.128914 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.129589 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.130815 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.131616 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.132434 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.133745 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.134429 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.160197 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.160282 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.178404 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.184311 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a69bbae-22d1-4837-a7d0-d1f6ee5f8659\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://059ef5db0bf79ee2cf729934e418a13b36a27a29d42a390f7350213a9198f932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc358
25771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38a03ee8953af50bf2fff5b7105068d2d7d842cdf0c574391a4d54749b01ca22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://390bcbe50677ac4c8cb350aeafb60830c75290b73e7b53253b80159acfd961c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7292a301bdffd4a5f2a0aa5b9a6e4342a475c800e1a85b3a2587787e6fd533d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd54cc7c38e5bba32485b4473fdd72a5a0c5b372f1d7db0cece913e8e409d215\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23530b438be3aca6b0fe35d3db82053f6657a6dc82dce88febba5f
987835908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23530b438be3aca6b0fe35d3db82053f6657a6dc82dce88febba5f987835908\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T20:05:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T20:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T20:05:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.197051 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.216090 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.229650 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.245818 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.262993 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.283312 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.291997 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.316561 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.337075 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.368324 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a69bbae-22d1-4837-a7d0-d1f6ee5f8659\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://059ef5db0bf79ee2cf729934e418a13b36a27a29d42a390f7350213a9198f932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38a03ee8953af50bf2fff5b7105068d2d7d842cdf0c574391a4d54749b01ca22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://390bcbe50677ac4c8cb350aeafb60830c75290b73e7b53253b80159acfd961c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7292a301bdffd4a5f2a0aa5b9a6e4342a475c800e1a85b3a2587787e6fd533d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:1
6Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd54cc7c38e5bba32485b4473fdd72a5a0c5b372f1d7db0cece913e8e409d215\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23530b438be3aca6b0fe35d3db82053f6657a6dc82dce88febba5f987835908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23530b438be3aca6b0fe35d3db82053f6657a6dc82dce88febba5f987835908\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T20:05:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T20:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-05T20:05:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.390424 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.406473 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.421273 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.431885 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.511659 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.511803 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 20:05:34 crc kubenswrapper[4754]: E0105 20:05:34.511904 4754 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Jan 05 20:05:34 crc kubenswrapper[4754]: E0105 20:05:34.511946 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:05:36.511907432 +0000 UTC m=+23.221091306 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:34 crc kubenswrapper[4754]: E0105 20:05:34.512000 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-05 20:05:36.511990754 +0000 UTC m=+23.221174628 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.540204 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 14:02:01.790383108 +0000 UTC Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.540312 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 113h56m27.250098968s for next certificate rotation Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.586145 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-zkfjx"] Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.586985 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-pkzls"] Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.587248 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-zkfjx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.587801 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" Jan 05 20:05:34 crc kubenswrapper[4754]: W0105 20:05:34.589978 4754 reflector.go:561] object-"openshift-multus"/"default-dockercfg-2q5b6": failed to list *v1.Secret: secrets "default-dockercfg-2q5b6" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Jan 05 20:05:34 crc kubenswrapper[4754]: E0105 20:05:34.590081 4754 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-dockercfg-2q5b6\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"default-dockercfg-2q5b6\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.590485 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nnbc8"] Jan 05 20:05:34 crc kubenswrapper[4754]: W0105 20:05:34.591331 4754 reflector.go:561] object-"openshift-machine-config-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Jan 05 20:05:34 crc kubenswrapper[4754]: W0105 20:05:34.591372 4754 reflector.go:561] object-"openshift-multus"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Jan 05 20:05:34 crc kubenswrapper[4754]: E0105 20:05:34.591390 4754 reflector.go:158] "Unhandled 
Error" err="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.591458 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-j88vx"] Jan 05 20:05:34 crc kubenswrapper[4754]: E0105 20:05:34.591443 4754 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 05 20:05:34 crc kubenswrapper[4754]: W0105 20:05:34.591469 4754 reflector.go:561] object-"openshift-multus"/"multus-daemon-config": failed to list *v1.ConfigMap: configmaps "multus-daemon-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Jan 05 20:05:34 crc kubenswrapper[4754]: E0105 20:05:34.591603 4754 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-daemon-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"multus-daemon-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.591745 4754 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.595272 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.595366 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 05 20:05:34 crc kubenswrapper[4754]: W0105 20:05:34.595483 4754 reflector.go:561] object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 05 20:05:34 crc kubenswrapper[4754]: W0105 20:05:34.595643 4754 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-config": failed to list *v1.ConfigMap: configmaps "ovnkube-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 05 20:05:34 crc kubenswrapper[4754]: E0105 20:05:34.595700 4754 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"ovnkube-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 05 20:05:34 crc kubenswrapper[4754]: E0105 20:05:34.595646 4754 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is 
forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.595885 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 05 20:05:34 crc kubenswrapper[4754]: W0105 20:05:34.596978 4754 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": failed to list *v1.Secret: secrets "ovn-kubernetes-node-dockercfg-pwtwl" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 05 20:05:34 crc kubenswrapper[4754]: E0105 20:05:34.597089 4754 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-pwtwl\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-kubernetes-node-dockercfg-pwtwl\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.597186 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 05 20:05:34 crc kubenswrapper[4754]: W0105 20:05:34.597253 4754 reflector.go:561] object-"openshift-ovn-kubernetes"/"env-overrides": failed to list *v1.ConfigMap: configmaps "env-overrides" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.598061 4754 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 05 20:05:34 crc kubenswrapper[4754]: W0105 20:05:34.598356 4754 reflector.go:561] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 05 20:05:34 crc kubenswrapper[4754]: E0105 20:05:34.598662 4754 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.601432 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 05 20:05:34 crc kubenswrapper[4754]: E0105 20:05:34.601979 4754 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"env-overrides\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"env-overrides\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 05 20:05:34 crc kubenswrapper[4754]: W0105 20:05:34.602190 4754 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": failed to list *v1.ConfigMap: configmaps "ovnkube-script-lib" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship 
found between node 'crc' and this object Jan 05 20:05:34 crc kubenswrapper[4754]: E0105 20:05:34.602317 4754 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"ovnkube-script-lib\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.602648 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-7bkfj"] Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.603026 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-j88vx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.604370 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.604626 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-7bkfj" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.606179 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.606177 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.608805 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.608806 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.609052 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.612147 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.612320 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 20:05:34 crc kubenswrapper[4754]: E0105 20:05:34.612406 4754 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 05 20:05:34 crc kubenswrapper[4754]: E0105 20:05:34.612447 4754 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 05 20:05:34 crc kubenswrapper[4754]: E0105 20:05:34.612464 4754 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 20:05:34 crc kubenswrapper[4754]: E0105 20:05:34.612677 4754 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 05 20:05:34 crc kubenswrapper[4754]: E0105 20:05:34.612785 4754 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 05 20:05:34 crc kubenswrapper[4754]: E0105 20:05:34.612807 4754 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 05 20:05:34 crc kubenswrapper[4754]: E0105 20:05:34.612817 4754 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 20:05:34 crc kubenswrapper[4754]: E0105 20:05:34.612938 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf 
podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-05 20:05:36.612782306 +0000 UTC m=+23.321966180 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.612708 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 20:05:34 crc kubenswrapper[4754]: E0105 20:05:34.613120 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-05 20:05:36.613110244 +0000 UTC m=+23.322294118 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 20:05:34 crc kubenswrapper[4754]: E0105 20:05:34.613206 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-01-05 20:05:36.613190886 +0000 UTC m=+23.322374760 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.738075 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fd02bbe9-6d27-434c-995a-3a2ca424d245-multus-cni-dir\") pod \"multus-zkfjx\" (UID: \"fd02bbe9-6d27-434c-995a-3a2ca424d245\") " pod="openshift-multus/multus-zkfjx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.738137 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drk86\" (UniqueName: \"kubernetes.io/projected/fd02bbe9-6d27-434c-995a-3a2ca424d245-kube-api-access-drk86\") pod \"multus-zkfjx\" (UID: \"fd02bbe9-6d27-434c-995a-3a2ca424d245\") " pod="openshift-multus/multus-zkfjx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.738157 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.738175 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/65d4d365-a206-444c-b906-46a645aeaaf7-ovn-node-metrics-cert\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.738196 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fd02bbe9-6d27-434c-995a-3a2ca424d245-multus-socket-dir-parent\") pod \"multus-zkfjx\" (UID: \"fd02bbe9-6d27-434c-995a-3a2ca424d245\") " pod="openshift-multus/multus-zkfjx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.738211 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fd02bbe9-6d27-434c-995a-3a2ca424d245-multus-conf-dir\") pod \"multus-zkfjx\" (UID: \"fd02bbe9-6d27-434c-995a-3a2ca424d245\") " pod="openshift-multus/multus-zkfjx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.738228 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-host-slash\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.738243 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-run-openvswitch\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.738278 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" 
(UniqueName: \"kubernetes.io/configmap/1ce145f2-f010-4086-963c-23e68ff9e280-mcd-auth-proxy-config\") pod \"machine-config-daemon-pkzls\" (UID: \"1ce145f2-f010-4086-963c-23e68ff9e280\") " pod="openshift-machine-config-operator/machine-config-daemon-pkzls" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.738308 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a963e4eb-9f2e-4646-ad12-6a878c888f25-cni-binary-copy\") pod \"multus-additional-cni-plugins-j88vx\" (UID: \"a963e4eb-9f2e-4646-ad12-6a878c888f25\") " pod="openshift-multus/multus-additional-cni-plugins-j88vx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.738325 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-host-run-netns\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.738342 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a963e4eb-9f2e-4646-ad12-6a878c888f25-os-release\") pod \"multus-additional-cni-plugins-j88vx\" (UID: \"a963e4eb-9f2e-4646-ad12-6a878c888f25\") " pod="openshift-multus/multus-additional-cni-plugins-j88vx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.738357 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-node-log\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.738373 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/65d4d365-a206-444c-b906-46a645aeaaf7-ovnkube-script-lib\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.738389 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fd02bbe9-6d27-434c-995a-3a2ca424d245-system-cni-dir\") pod \"multus-zkfjx\" (UID: \"fd02bbe9-6d27-434c-995a-3a2ca424d245\") " pod="openshift-multus/multus-zkfjx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.738648 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-run-systemd\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.738678 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fd02bbe9-6d27-434c-995a-3a2ca424d245-host-var-lib-kubelet\") pod \"multus-zkfjx\" (UID: \"fd02bbe9-6d27-434c-995a-3a2ca424d245\") " pod="openshift-multus/multus-zkfjx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.738718 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/1ce145f2-f010-4086-963c-23e68ff9e280-rootfs\") pod \"machine-config-daemon-pkzls\" (UID: \"1ce145f2-f010-4086-963c-23e68ff9e280\") " pod="openshift-machine-config-operator/machine-config-daemon-pkzls" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.738737 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fd02bbe9-6d27-434c-995a-3a2ca424d245-host-run-k8s-cni-cncf-io\") pod \"multus-zkfjx\" (UID: \"fd02bbe9-6d27-434c-995a-3a2ca424d245\") " pod="openshift-multus/multus-zkfjx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.738755 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fd02bbe9-6d27-434c-995a-3a2ca424d245-host-run-multus-certs\") pod \"multus-zkfjx\" (UID: \"fd02bbe9-6d27-434c-995a-3a2ca424d245\") " pod="openshift-multus/multus-zkfjx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.738773 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fd02bbe9-6d27-434c-995a-3a2ca424d245-etc-kubernetes\") pod \"multus-zkfjx\" (UID: \"fd02bbe9-6d27-434c-995a-3a2ca424d245\") " pod="openshift-multus/multus-zkfjx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.738790 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-run-ovn\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.738820 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-host-cni-netd\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.738838 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/fd02bbe9-6d27-434c-995a-3a2ca424d245-multus-daemon-config\") pod \"multus-zkfjx\" (UID: \"fd02bbe9-6d27-434c-995a-3a2ca424d245\") " pod="openshift-multus/multus-zkfjx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.738855 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-host-cni-bin\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.738873 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq6mk\" (UniqueName: \"kubernetes.io/projected/65d4d365-a206-444c-b906-46a645aeaaf7-kube-api-access-fq6mk\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.738905 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fd02bbe9-6d27-434c-995a-3a2ca424d245-cnibin\") pod \"multus-zkfjx\" (UID: \"fd02bbe9-6d27-434c-995a-3a2ca424d245\") " pod="openshift-multus/multus-zkfjx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.738922 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/65d4d365-a206-444c-b906-46a645aeaaf7-ovnkube-config\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.738936 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/65d4d365-a206-444c-b906-46a645aeaaf7-env-overrides\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.738959 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc2bq\" (UniqueName: \"kubernetes.io/projected/1ce145f2-f010-4086-963c-23e68ff9e280-kube-api-access-bc2bq\") pod \"machine-config-daemon-pkzls\" (UID: \"1ce145f2-f010-4086-963c-23e68ff9e280\") " pod="openshift-machine-config-operator/machine-config-daemon-pkzls" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.738977 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fd02bbe9-6d27-434c-995a-3a2ca424d245-host-var-lib-cni-bin\") pod \"multus-zkfjx\" (UID: \"fd02bbe9-6d27-434c-995a-3a2ca424d245\") " pod="openshift-multus/multus-zkfjx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.739081 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-etc-openvswitch\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.739111 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a963e4eb-9f2e-4646-ad12-6a878c888f25-cnibin\") pod \"multus-additional-cni-plugins-j88vx\" (UID: \"a963e4eb-9f2e-4646-ad12-6a878c888f25\") " pod="openshift-multus/multus-additional-cni-plugins-j88vx" Jan 05 20:05:34 crc 
kubenswrapper[4754]: I0105 20:05:34.739127 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a963e4eb-9f2e-4646-ad12-6a878c888f25-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-j88vx\" (UID: \"a963e4eb-9f2e-4646-ad12-6a878c888f25\") " pod="openshift-multus/multus-additional-cni-plugins-j88vx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.739159 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzlxt\" (UniqueName: \"kubernetes.io/projected/a9bfe599-a0e7-471f-9a33-2f095eea69d5-kube-api-access-qzlxt\") pod \"node-resolver-7bkfj\" (UID: \"a9bfe599-a0e7-471f-9a33-2f095eea69d5\") " pod="openshift-dns/node-resolver-7bkfj" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.739192 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fd02bbe9-6d27-434c-995a-3a2ca424d245-cni-binary-copy\") pod \"multus-zkfjx\" (UID: \"fd02bbe9-6d27-434c-995a-3a2ca424d245\") " pod="openshift-multus/multus-zkfjx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.739209 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-systemd-units\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.739224 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a963e4eb-9f2e-4646-ad12-6a878c888f25-system-cni-dir\") pod \"multus-additional-cni-plugins-j88vx\" (UID: \"a963e4eb-9f2e-4646-ad12-6a878c888f25\") " 
pod="openshift-multus/multus-additional-cni-plugins-j88vx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.739249 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fd02bbe9-6d27-434c-995a-3a2ca424d245-hostroot\") pod \"multus-zkfjx\" (UID: \"fd02bbe9-6d27-434c-995a-3a2ca424d245\") " pod="openshift-multus/multus-zkfjx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.739318 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1ce145f2-f010-4086-963c-23e68ff9e280-proxy-tls\") pod \"machine-config-daemon-pkzls\" (UID: \"1ce145f2-f010-4086-963c-23e68ff9e280\") " pod="openshift-machine-config-operator/machine-config-daemon-pkzls" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.739340 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-host-run-ovn-kubernetes\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.739356 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a963e4eb-9f2e-4646-ad12-6a878c888f25-tuning-conf-dir\") pod \"multus-additional-cni-plugins-j88vx\" (UID: \"a963e4eb-9f2e-4646-ad12-6a878c888f25\") " pod="openshift-multus/multus-additional-cni-plugins-j88vx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.739378 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fd02bbe9-6d27-434c-995a-3a2ca424d245-host-run-netns\") pod \"multus-zkfjx\" 
(UID: \"fd02bbe9-6d27-434c-995a-3a2ca424d245\") " pod="openshift-multus/multus-zkfjx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.739436 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-var-lib-openvswitch\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.739469 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a9bfe599-a0e7-471f-9a33-2f095eea69d5-hosts-file\") pod \"node-resolver-7bkfj\" (UID: \"a9bfe599-a0e7-471f-9a33-2f095eea69d5\") " pod="openshift-dns/node-resolver-7bkfj" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.739484 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fd02bbe9-6d27-434c-995a-3a2ca424d245-os-release\") pod \"multus-zkfjx\" (UID: \"fd02bbe9-6d27-434c-995a-3a2ca424d245\") " pod="openshift-multus/multus-zkfjx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.739500 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-host-kubelet\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.739517 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-log-socket\") pod \"ovnkube-node-nnbc8\" (UID: 
\"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.741328 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fd02bbe9-6d27-434c-995a-3a2ca424d245-host-var-lib-cni-multus\") pod \"multus-zkfjx\" (UID: \"fd02bbe9-6d27-434c-995a-3a2ca424d245\") " pod="openshift-multus/multus-zkfjx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.748126 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.757101 4754 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-05 20:00:33 +0000 UTC, rotation deadline is 2026-10-23 11:02:22.016141821 +0000 UTC Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.757177 4754 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6974h56m47.25896768s for next certificate rotation Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.759337 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"a593b1d50cdd839089b617db38bd22653e7aeba2991e1553e502e950af08a2f8"} Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.760607 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"30d8295d8961c4986edb729a6208af262de5b7abfbb196a5ee66cb324a9624f9"} Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.761944 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"8d9e176d14ac5f39807144d7886693bfb104225257fc6efc2b3d6808bf03c054"} Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.765585 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 20:05:34 crc kubenswrapper[4754]: E0105 20:05:34.775797 4754 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.781340 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.799820 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zkfjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd02bbe9-6d27-434c-995a-3a2ca424d245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drk86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T20:05:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zkfjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.819900 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a69bbae-22d1-4837-a7d0-d1f6ee5f8659\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://059ef5db0bf79ee2cf729934e418a13b36a27a29d42a390f7350213a9198f932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38a03ee8953af50bf2fff5b7105068d2d7d842cdf0c574391a4d54749b01ca22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://390bcbe50677ac4c8cb350aeafb60830c75290b73e7b53253b80159acfd961c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05
:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7292a301bdffd4a5f2a0aa5b9a6e4342a475c800e1a85b3a2587787e6fd533d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd54cc7c38e5bba32485b4473fdd72a5a0c5b372f1d7db0cece913e8e409d215\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23530b438be3aca6b0fe35d3db82053f6657a6dc82dce88febba5f987835908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23530b438be3aca6b0fe35d3db82053f6657a6dc82dce88febba5f987835908\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T20:05:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T20:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T20:05:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.838499 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f00f349-5bf1-4023-98da-e7de2a227004\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03bfe3e82ae99818c49e7c72d1dc4f308d7af95bf28af8043e9e832a1b21abc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c242d9981c62ac494864a1509489965653e71656102883010a94a74b34d0360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9856ec7cdbdec899876548644f50704617e6ff1213ca7414204bc629e7c978e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4471dbe13ea078885052e0ac1bd33e3c2546b538a3116b814db463029020e16a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-05T20:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T20:05:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.841953 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-systemd-units\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.842011 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a963e4eb-9f2e-4646-ad12-6a878c888f25-system-cni-dir\") pod \"multus-additional-cni-plugins-j88vx\" (UID: \"a963e4eb-9f2e-4646-ad12-6a878c888f25\") " pod="openshift-multus/multus-additional-cni-plugins-j88vx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.842043 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fd02bbe9-6d27-434c-995a-3a2ca424d245-hostroot\") pod \"multus-zkfjx\" (UID: \"fd02bbe9-6d27-434c-995a-3a2ca424d245\") " pod="openshift-multus/multus-zkfjx" Jan 05 20:05:34 crc kubenswrapper[4754]: 
I0105 20:05:34.842085 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1ce145f2-f010-4086-963c-23e68ff9e280-proxy-tls\") pod \"machine-config-daemon-pkzls\" (UID: \"1ce145f2-f010-4086-963c-23e68ff9e280\") " pod="openshift-machine-config-operator/machine-config-daemon-pkzls" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.842107 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-host-run-ovn-kubernetes\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.842128 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a963e4eb-9f2e-4646-ad12-6a878c888f25-tuning-conf-dir\") pod \"multus-additional-cni-plugins-j88vx\" (UID: \"a963e4eb-9f2e-4646-ad12-6a878c888f25\") " pod="openshift-multus/multus-additional-cni-plugins-j88vx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.842169 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a9bfe599-a0e7-471f-9a33-2f095eea69d5-hosts-file\") pod \"node-resolver-7bkfj\" (UID: \"a9bfe599-a0e7-471f-9a33-2f095eea69d5\") " pod="openshift-dns/node-resolver-7bkfj" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.842192 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fd02bbe9-6d27-434c-995a-3a2ca424d245-os-release\") pod \"multus-zkfjx\" (UID: \"fd02bbe9-6d27-434c-995a-3a2ca424d245\") " pod="openshift-multus/multus-zkfjx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.842272 4754 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fd02bbe9-6d27-434c-995a-3a2ca424d245-host-run-netns\") pod \"multus-zkfjx\" (UID: \"fd02bbe9-6d27-434c-995a-3a2ca424d245\") " pod="openshift-multus/multus-zkfjx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.842328 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-var-lib-openvswitch\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.842365 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-host-kubelet\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.842414 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-log-socket\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.842447 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fd02bbe9-6d27-434c-995a-3a2ca424d245-host-var-lib-cni-multus\") pod \"multus-zkfjx\" (UID: \"fd02bbe9-6d27-434c-995a-3a2ca424d245\") " pod="openshift-multus/multus-zkfjx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.842487 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/a963e4eb-9f2e-4646-ad12-6a878c888f25-system-cni-dir\") pod \"multus-additional-cni-plugins-j88vx\" (UID: \"a963e4eb-9f2e-4646-ad12-6a878c888f25\") " pod="openshift-multus/multus-additional-cni-plugins-j88vx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.842530 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fd02bbe9-6d27-434c-995a-3a2ca424d245-host-run-netns\") pod \"multus-zkfjx\" (UID: \"fd02bbe9-6d27-434c-995a-3a2ca424d245\") " pod="openshift-multus/multus-zkfjx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.842585 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-var-lib-openvswitch\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.842610 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-host-kubelet\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.842661 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-log-socket\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.842700 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fd02bbe9-6d27-434c-995a-3a2ca424d245-host-var-lib-cni-multus\") pod 
\"multus-zkfjx\" (UID: \"fd02bbe9-6d27-434c-995a-3a2ca424d245\") " pod="openshift-multus/multus-zkfjx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.842780 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-host-run-ovn-kubernetes\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.842493 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/65d4d365-a206-444c-b906-46a645aeaaf7-ovn-node-metrics-cert\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.842852 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a9bfe599-a0e7-471f-9a33-2f095eea69d5-hosts-file\") pod \"node-resolver-7bkfj\" (UID: \"a9bfe599-a0e7-471f-9a33-2f095eea69d5\") " pod="openshift-dns/node-resolver-7bkfj" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.842863 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-systemd-units\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.842956 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fd02bbe9-6d27-434c-995a-3a2ca424d245-hostroot\") pod \"multus-zkfjx\" (UID: \"fd02bbe9-6d27-434c-995a-3a2ca424d245\") " pod="openshift-multus/multus-zkfjx" Jan 05 20:05:34 crc 
kubenswrapper[4754]: I0105 20:05:34.842959 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a963e4eb-9f2e-4646-ad12-6a878c888f25-tuning-conf-dir\") pod \"multus-additional-cni-plugins-j88vx\" (UID: \"a963e4eb-9f2e-4646-ad12-6a878c888f25\") " pod="openshift-multus/multus-additional-cni-plugins-j88vx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.843070 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fd02bbe9-6d27-434c-995a-3a2ca424d245-multus-cni-dir\") pod \"multus-zkfjx\" (UID: \"fd02bbe9-6d27-434c-995a-3a2ca424d245\") " pod="openshift-multus/multus-zkfjx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.843135 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drk86\" (UniqueName: \"kubernetes.io/projected/fd02bbe9-6d27-434c-995a-3a2ca424d245-kube-api-access-drk86\") pod \"multus-zkfjx\" (UID: \"fd02bbe9-6d27-434c-995a-3a2ca424d245\") " pod="openshift-multus/multus-zkfjx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.843174 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.843212 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-run-openvswitch\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 
20:05:34.843234 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fd02bbe9-6d27-434c-995a-3a2ca424d245-multus-cni-dir\") pod \"multus-zkfjx\" (UID: \"fd02bbe9-6d27-434c-995a-3a2ca424d245\") " pod="openshift-multus/multus-zkfjx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.843246 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t8f2\" (UniqueName: \"kubernetes.io/projected/a963e4eb-9f2e-4646-ad12-6a878c888f25-kube-api-access-2t8f2\") pod \"multus-additional-cni-plugins-j88vx\" (UID: \"a963e4eb-9f2e-4646-ad12-6a878c888f25\") " pod="openshift-multus/multus-additional-cni-plugins-j88vx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.843261 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.843331 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-run-openvswitch\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.843342 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1ce145f2-f010-4086-963c-23e68ff9e280-mcd-auth-proxy-config\") pod \"machine-config-daemon-pkzls\" (UID: \"1ce145f2-f010-4086-963c-23e68ff9e280\") " pod="openshift-machine-config-operator/machine-config-daemon-pkzls" Jan 05 20:05:34 
crc kubenswrapper[4754]: I0105 20:05:34.843384 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fd02bbe9-6d27-434c-995a-3a2ca424d245-multus-socket-dir-parent\") pod \"multus-zkfjx\" (UID: \"fd02bbe9-6d27-434c-995a-3a2ca424d245\") " pod="openshift-multus/multus-zkfjx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.843413 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fd02bbe9-6d27-434c-995a-3a2ca424d245-multus-conf-dir\") pod \"multus-zkfjx\" (UID: \"fd02bbe9-6d27-434c-995a-3a2ca424d245\") " pod="openshift-multus/multus-zkfjx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.843460 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-host-slash\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.843491 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-host-run-netns\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.843569 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fd02bbe9-6d27-434c-995a-3a2ca424d245-multus-conf-dir\") pod \"multus-zkfjx\" (UID: \"fd02bbe9-6d27-434c-995a-3a2ca424d245\") " pod="openshift-multus/multus-zkfjx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.843619 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-host-run-netns\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.843560 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a963e4eb-9f2e-4646-ad12-6a878c888f25-os-release\") pod \"multus-additional-cni-plugins-j88vx\" (UID: \"a963e4eb-9f2e-4646-ad12-6a878c888f25\") " pod="openshift-multus/multus-additional-cni-plugins-j88vx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.843665 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a963e4eb-9f2e-4646-ad12-6a878c888f25-cni-binary-copy\") pod \"multus-additional-cni-plugins-j88vx\" (UID: \"a963e4eb-9f2e-4646-ad12-6a878c888f25\") " pod="openshift-multus/multus-additional-cni-plugins-j88vx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.843697 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fd02bbe9-6d27-434c-995a-3a2ca424d245-system-cni-dir\") pod \"multus-zkfjx\" (UID: \"fd02bbe9-6d27-434c-995a-3a2ca424d245\") " pod="openshift-multus/multus-zkfjx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.843722 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-run-systemd\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.843740 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/a963e4eb-9f2e-4646-ad12-6a878c888f25-os-release\") pod \"multus-additional-cni-plugins-j88vx\" (UID: \"a963e4eb-9f2e-4646-ad12-6a878c888f25\") " pod="openshift-multus/multus-additional-cni-plugins-j88vx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.843748 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-node-log\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.843526 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fd02bbe9-6d27-434c-995a-3a2ca424d245-multus-socket-dir-parent\") pod \"multus-zkfjx\" (UID: \"fd02bbe9-6d27-434c-995a-3a2ca424d245\") " pod="openshift-multus/multus-zkfjx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.843780 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/65d4d365-a206-444c-b906-46a645aeaaf7-ovnkube-script-lib\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.843821 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fd02bbe9-6d27-434c-995a-3a2ca424d245-host-var-lib-kubelet\") pod \"multus-zkfjx\" (UID: \"fd02bbe9-6d27-434c-995a-3a2ca424d245\") " pod="openshift-multus/multus-zkfjx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.843842 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/fd02bbe9-6d27-434c-995a-3a2ca424d245-host-run-multus-certs\") pod \"multus-zkfjx\" (UID: \"fd02bbe9-6d27-434c-995a-3a2ca424d245\") " pod="openshift-multus/multus-zkfjx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.843885 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fd02bbe9-6d27-434c-995a-3a2ca424d245-etc-kubernetes\") pod \"multus-zkfjx\" (UID: \"fd02bbe9-6d27-434c-995a-3a2ca424d245\") " pod="openshift-multus/multus-zkfjx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.843907 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-run-ovn\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.843931 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-host-slash\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.843958 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/1ce145f2-f010-4086-963c-23e68ff9e280-rootfs\") pod \"machine-config-daemon-pkzls\" (UID: \"1ce145f2-f010-4086-963c-23e68ff9e280\") " pod="openshift-machine-config-operator/machine-config-daemon-pkzls" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.843935 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/1ce145f2-f010-4086-963c-23e68ff9e280-rootfs\") pod \"machine-config-daemon-pkzls\" (UID: 
\"1ce145f2-f010-4086-963c-23e68ff9e280\") " pod="openshift-machine-config-operator/machine-config-daemon-pkzls" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.843988 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fd02bbe9-6d27-434c-995a-3a2ca424d245-host-var-lib-kubelet\") pod \"multus-zkfjx\" (UID: \"fd02bbe9-6d27-434c-995a-3a2ca424d245\") " pod="openshift-multus/multus-zkfjx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.844020 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fd02bbe9-6d27-434c-995a-3a2ca424d245-host-run-k8s-cni-cncf-io\") pod \"multus-zkfjx\" (UID: \"fd02bbe9-6d27-434c-995a-3a2ca424d245\") " pod="openshift-multus/multus-zkfjx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.844034 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fd02bbe9-6d27-434c-995a-3a2ca424d245-host-run-multus-certs\") pod \"multus-zkfjx\" (UID: \"fd02bbe9-6d27-434c-995a-3a2ca424d245\") " pod="openshift-multus/multus-zkfjx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.844038 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-host-cni-netd\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.844057 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-host-cni-netd\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 
20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.844085 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fd02bbe9-6d27-434c-995a-3a2ca424d245-host-run-k8s-cni-cncf-io\") pod \"multus-zkfjx\" (UID: \"fd02bbe9-6d27-434c-995a-3a2ca424d245\") " pod="openshift-multus/multus-zkfjx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.844084 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fq6mk\" (UniqueName: \"kubernetes.io/projected/65d4d365-a206-444c-b906-46a645aeaaf7-kube-api-access-fq6mk\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.844108 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fd02bbe9-6d27-434c-995a-3a2ca424d245-etc-kubernetes\") pod \"multus-zkfjx\" (UID: \"fd02bbe9-6d27-434c-995a-3a2ca424d245\") " pod="openshift-multus/multus-zkfjx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.844114 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/fd02bbe9-6d27-434c-995a-3a2ca424d245-multus-daemon-config\") pod \"multus-zkfjx\" (UID: \"fd02bbe9-6d27-434c-995a-3a2ca424d245\") " pod="openshift-multus/multus-zkfjx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.844137 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-run-ovn\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.844137 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-host-cni-bin\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.844167 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fd02bbe9-6d27-434c-995a-3a2ca424d245-cnibin\") pod \"multus-zkfjx\" (UID: \"fd02bbe9-6d27-434c-995a-3a2ca424d245\") " pod="openshift-multus/multus-zkfjx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.844157 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-host-cni-bin\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.844190 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/65d4d365-a206-444c-b906-46a645aeaaf7-ovnkube-config\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.844206 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/65d4d365-a206-444c-b906-46a645aeaaf7-env-overrides\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.844225 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bc2bq\" (UniqueName: \"kubernetes.io/projected/1ce145f2-f010-4086-963c-23e68ff9e280-kube-api-access-bc2bq\") 
pod \"machine-config-daemon-pkzls\" (UID: \"1ce145f2-f010-4086-963c-23e68ff9e280\") " pod="openshift-machine-config-operator/machine-config-daemon-pkzls" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.844251 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fd02bbe9-6d27-434c-995a-3a2ca424d245-host-var-lib-cni-bin\") pod \"multus-zkfjx\" (UID: \"fd02bbe9-6d27-434c-995a-3a2ca424d245\") " pod="openshift-multus/multus-zkfjx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.844270 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzlxt\" (UniqueName: \"kubernetes.io/projected/a9bfe599-a0e7-471f-9a33-2f095eea69d5-kube-api-access-qzlxt\") pod \"node-resolver-7bkfj\" (UID: \"a9bfe599-a0e7-471f-9a33-2f095eea69d5\") " pod="openshift-dns/node-resolver-7bkfj" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.844361 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1ce145f2-f010-4086-963c-23e68ff9e280-mcd-auth-proxy-config\") pod \"machine-config-daemon-pkzls\" (UID: \"1ce145f2-f010-4086-963c-23e68ff9e280\") " pod="openshift-machine-config-operator/machine-config-daemon-pkzls" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.844418 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fd02bbe9-6d27-434c-995a-3a2ca424d245-cnibin\") pod \"multus-zkfjx\" (UID: \"fd02bbe9-6d27-434c-995a-3a2ca424d245\") " pod="openshift-multus/multus-zkfjx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.844484 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fd02bbe9-6d27-434c-995a-3a2ca424d245-host-var-lib-cni-bin\") pod \"multus-zkfjx\" (UID: 
\"fd02bbe9-6d27-434c-995a-3a2ca424d245\") " pod="openshift-multus/multus-zkfjx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.844519 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-run-systemd\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.844534 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-node-log\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.844560 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a963e4eb-9f2e-4646-ad12-6a878c888f25-cni-binary-copy\") pod \"multus-additional-cni-plugins-j88vx\" (UID: \"a963e4eb-9f2e-4646-ad12-6a878c888f25\") " pod="openshift-multus/multus-additional-cni-plugins-j88vx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.844574 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fd02bbe9-6d27-434c-995a-3a2ca424d245-system-cni-dir\") pod \"multus-zkfjx\" (UID: \"fd02bbe9-6d27-434c-995a-3a2ca424d245\") " pod="openshift-multus/multus-zkfjx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.844632 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fd02bbe9-6d27-434c-995a-3a2ca424d245-cni-binary-copy\") pod \"multus-zkfjx\" (UID: \"fd02bbe9-6d27-434c-995a-3a2ca424d245\") " pod="openshift-multus/multus-zkfjx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 
20:05:34.844656 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-etc-openvswitch\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.844700 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a963e4eb-9f2e-4646-ad12-6a878c888f25-cnibin\") pod \"multus-additional-cni-plugins-j88vx\" (UID: \"a963e4eb-9f2e-4646-ad12-6a878c888f25\") " pod="openshift-multus/multus-additional-cni-plugins-j88vx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.844717 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a963e4eb-9f2e-4646-ad12-6a878c888f25-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-j88vx\" (UID: \"a963e4eb-9f2e-4646-ad12-6a878c888f25\") " pod="openshift-multus/multus-additional-cni-plugins-j88vx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.844859 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-etc-openvswitch\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.844906 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a963e4eb-9f2e-4646-ad12-6a878c888f25-cnibin\") pod \"multus-additional-cni-plugins-j88vx\" (UID: \"a963e4eb-9f2e-4646-ad12-6a878c888f25\") " pod="openshift-multus/multus-additional-cni-plugins-j88vx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.845054 4754 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fd02bbe9-6d27-434c-995a-3a2ca424d245-os-release\") pod \"multus-zkfjx\" (UID: \"fd02bbe9-6d27-434c-995a-3a2ca424d245\") " pod="openshift-multus/multus-zkfjx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.845488 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fd02bbe9-6d27-434c-995a-3a2ca424d245-cni-binary-copy\") pod \"multus-zkfjx\" (UID: \"fd02bbe9-6d27-434c-995a-3a2ca424d245\") " pod="openshift-multus/multus-zkfjx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.845516 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a963e4eb-9f2e-4646-ad12-6a878c888f25-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-j88vx\" (UID: \"a963e4eb-9f2e-4646-ad12-6a878c888f25\") " pod="openshift-multus/multus-additional-cni-plugins-j88vx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.848150 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1ce145f2-f010-4086-963c-23e68ff9e280-proxy-tls\") pod \"machine-config-daemon-pkzls\" (UID: \"1ce145f2-f010-4086-963c-23e68ff9e280\") " pod="openshift-machine-config-operator/machine-config-daemon-pkzls" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.850321 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.857826 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/65d4d365-a206-444c-b906-46a645aeaaf7-ovn-node-metrics-cert\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.861997 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.862144 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzlxt\" (UniqueName: \"kubernetes.io/projected/a9bfe599-a0e7-471f-9a33-2f095eea69d5-kube-api-access-qzlxt\") pod \"node-resolver-7bkfj\" (UID: \"a9bfe599-a0e7-471f-9a33-2f095eea69d5\") " pod="openshift-dns/node-resolver-7bkfj" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.872815 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.881880 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.890036 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7bkfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9bfe599-a0e7-471f-9a33-2f095eea69d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qzlxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T20:05:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7bkfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.904891 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.915561 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.924882 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ce145f2-f010-4086-963c-23e68ff9e280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T20:05:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pkzls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.938611 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j88vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a963e4eb-9f2e-4646-ad12-6a878c888f25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t8f2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t8f2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t8f2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t8f2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t8f2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t8f2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t8f2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T20:05:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j88vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.944437 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-7bkfj" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.945416 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2t8f2\" (UniqueName: \"kubernetes.io/projected/a963e4eb-9f2e-4646-ad12-6a878c888f25-kube-api-access-2t8f2\") pod \"multus-additional-cni-plugins-j88vx\" (UID: \"a963e4eb-9f2e-4646-ad12-6a878c888f25\") " pod="openshift-multus/multus-additional-cni-plugins-j88vx" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.950175 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a69bbae-22d1-4837-a7d0-d1f6ee5f8659\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://059ef5db0bf79ee2cf729934e418a13b36a27a29d42a390f7350213a9198f932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38a03ee8953af50bf2fff5b7105068d2d7d842cdf0c574391a4d54749b01ca22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://390bcbe50677ac4c8cb350aeafb60830c75290b73e7b53253b80159acfd961c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7292a301bdffd4a5f2a0aa5b9a6e4342a475c800e1a85b3a2587787e6fd533d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:1
6Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd54cc7c38e5bba32485b4473fdd72a5a0c5b372f1d7db0cece913e8e409d215\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23530b438be3aca6b0fe35d3db82053f6657a6dc82dce88febba5f987835908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23530b438be3aca6b0fe35d3db82053f6657a6dc82dce88febba5f987835908\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T20:05:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T20:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-05T20:05:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 20:05:34 crc kubenswrapper[4754]: W0105 20:05:34.960511 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9bfe599_a0e7_471f_9a33_2f095eea69d5.slice/crio-8a2e9d2c82fe6b0f4d0d306c859357993977cb8439b189ed9cbeff9ce5ee3674 WatchSource:0}: Error finding container 8a2e9d2c82fe6b0f4d0d306c859357993977cb8439b189ed9cbeff9ce5ee3674: Status 404 returned error can't find the container with id 8a2e9d2c82fe6b0f4d0d306c859357993977cb8439b189ed9cbeff9ce5ee3674 Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.967888 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f00f349-5bf1-4023-98da-e7de2a227004\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03bfe3e82ae99818c49e7c72d1dc4f308d7af95bf28af8043e9e832a1b21abc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c242d9981c62ac494864a1509489965653e71656102883010a94a74b34d0360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9856ec7cdbdec899876548644f50704617e6ff1213ca7414204bc629e7c978e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4471dbe13ea078885052e0ac1bd33e3c2546b538a3116b814db463029020e16a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-05T20:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T20:05:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.988154 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 20:05:34 crc kubenswrapper[4754]: I0105 20:05:34.998839 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 20:05:35 crc kubenswrapper[4754]: I0105 20:05:35.007214 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zkfjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd02bbe9-6d27-434c-995a-3a2ca424d245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drk86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T20:05:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zkfjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 20:05:35 crc kubenswrapper[4754]: I0105 20:05:35.024969 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4d365-a206-444c-b906-46a645aeaaf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fq6mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fq6mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fq6mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fq6mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fq6mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fq6mk\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fq6mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fq6mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-fq6mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T20:05:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nnbc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 20:05:35 crc kubenswrapper[4754]: I0105 20:05:35.035677 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 20:05:35 crc kubenswrapper[4754]: I0105 20:05:35.398236 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 05 20:05:35 crc kubenswrapper[4754]: I0105 20:05:35.450879 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bc2bq\" (UniqueName: \"kubernetes.io/projected/1ce145f2-f010-4086-963c-23e68ff9e280-kube-api-access-bc2bq\") pod \"machine-config-daemon-pkzls\" (UID: \"1ce145f2-f010-4086-963c-23e68ff9e280\") " pod="openshift-machine-config-operator/machine-config-daemon-pkzls" Jan 05 20:05:35 crc kubenswrapper[4754]: I0105 20:05:35.464746 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 05 20:05:35 crc kubenswrapper[4754]: I0105 20:05:35.515570 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" Jan 05 20:05:35 crc kubenswrapper[4754]: I0105 20:05:35.523866 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 05 20:05:35 crc kubenswrapper[4754]: I0105 20:05:35.525911 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/65d4d365-a206-444c-b906-46a645aeaaf7-ovnkube-config\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:35 crc kubenswrapper[4754]: W0105 20:05:35.532022 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ce145f2_f010_4086_963c_23e68ff9e280.slice/crio-ebd7befc1113ba6a5415673ff6111e6f24b63a07c137d759f22dc9a713f3e68a WatchSource:0}: Error finding container ebd7befc1113ba6a5415673ff6111e6f24b63a07c137d759f22dc9a713f3e68a: Status 404 returned error can't find the container with id ebd7befc1113ba6a5415673ff6111e6f24b63a07c137d759f22dc9a713f3e68a Jan 05 20:05:35 crc kubenswrapper[4754]: I0105 20:05:35.536752 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 05 20:05:35 crc kubenswrapper[4754]: I0105 20:05:35.546597 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fq6mk\" (UniqueName: \"kubernetes.io/projected/65d4d365-a206-444c-b906-46a645aeaaf7-kube-api-access-fq6mk\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:35 crc kubenswrapper[4754]: I0105 20:05:35.587033 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 05 20:05:35 crc kubenswrapper[4754]: I0105 
20:05:35.588199 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 20:05:35 crc kubenswrapper[4754]: E0105 20:05:35.588342 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 20:05:35 crc kubenswrapper[4754]: I0105 20:05:35.588442 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 20:05:35 crc kubenswrapper[4754]: E0105 20:05:35.588493 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 20:05:35 crc kubenswrapper[4754]: I0105 20:05:35.588634 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 20:05:35 crc kubenswrapper[4754]: E0105 20:05:35.588695 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 20:05:35 crc kubenswrapper[4754]: I0105 20:05:35.594063 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 05 20:05:35 crc kubenswrapper[4754]: I0105 20:05:35.595375 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/65d4d365-a206-444c-b906-46a645aeaaf7-ovnkube-script-lib\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:35 crc kubenswrapper[4754]: I0105 20:05:35.726997 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 05 20:05:35 crc kubenswrapper[4754]: I0105 20:05:35.735353 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/65d4d365-a206-444c-b906-46a645aeaaf7-env-overrides\") pod \"ovnkube-node-nnbc8\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:35 crc kubenswrapper[4754]: I0105 20:05:35.766038 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" event={"ID":"1ce145f2-f010-4086-963c-23e68ff9e280","Type":"ContainerStarted","Data":"e331302bd9d06c17b0799ea286c0d1986a6e0f84dc9014e4f48d395accb930dd"} Jan 05 20:05:35 crc kubenswrapper[4754]: I0105 20:05:35.766091 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" event={"ID":"1ce145f2-f010-4086-963c-23e68ff9e280","Type":"ContainerStarted","Data":"56b7d7218a5605f87f91e22b9cc79e416eedb63b257d6326203830990e6ddc5c"} Jan 05 20:05:35 
crc kubenswrapper[4754]: I0105 20:05:35.766107 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" event={"ID":"1ce145f2-f010-4086-963c-23e68ff9e280","Type":"ContainerStarted","Data":"ebd7befc1113ba6a5415673ff6111e6f24b63a07c137d759f22dc9a713f3e68a"} Jan 05 20:05:35 crc kubenswrapper[4754]: I0105 20:05:35.768347 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4b0cfa45bca91975327c4aeebfcf0d968d5a60eb2d529c6377c484788dccd4e3"} Jan 05 20:05:35 crc kubenswrapper[4754]: I0105 20:05:35.768393 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"2e5bed40a576c44dd312187ef601b2d26269d20361ab89a03158eb486b0d8590"} Jan 05 20:05:35 crc kubenswrapper[4754]: I0105 20:05:35.770752 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"9e3b4e462e5b45f48eb88e7754c1e293edc4fbdb2890d013eca82bc533470623"} Jan 05 20:05:35 crc kubenswrapper[4754]: I0105 20:05:35.773099 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-7bkfj" event={"ID":"a9bfe599-a0e7-471f-9a33-2f095eea69d5","Type":"ContainerStarted","Data":"7ff4db994ee2fb6451419ba8294323636c8364cb37bdab444ddd59e900023789"} Jan 05 20:05:35 crc kubenswrapper[4754]: I0105 20:05:35.773135 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-7bkfj" event={"ID":"a9bfe599-a0e7-471f-9a33-2f095eea69d5","Type":"ContainerStarted","Data":"8a2e9d2c82fe6b0f4d0d306c859357993977cb8439b189ed9cbeff9ce5ee3674"} Jan 05 20:05:35 crc kubenswrapper[4754]: I0105 
20:05:35.811061 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T20:05:35Z is after 2025-08-24T17:21:41Z" Jan 05 20:05:35 crc kubenswrapper[4754]: I0105 20:05:35.829929 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7bkfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9bfe599-a0e7-471f-9a33-2f095eea69d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qzlxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T20:05:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7bkfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T20:05:35Z is after 2025-08-24T17:21:41Z" Jan 05 20:05:35 crc kubenswrapper[4754]: E0105 20:05:35.845352 4754 configmap.go:193] Couldn't get configMap openshift-multus/multus-daemon-config: failed to sync configmap cache: timed out waiting for the condition Jan 05 20:05:35 crc kubenswrapper[4754]: E0105 20:05:35.845437 4754 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/fd02bbe9-6d27-434c-995a-3a2ca424d245-multus-daemon-config podName:fd02bbe9-6d27-434c-995a-3a2ca424d245 nodeName:}" failed. No retries permitted until 2026-01-05 20:05:36.345419885 +0000 UTC m=+23.054603759 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "multus-daemon-config" (UniqueName: "kubernetes.io/configmap/fd02bbe9-6d27-434c-995a-3a2ca424d245-multus-daemon-config") pod "multus-zkfjx" (UID: "fd02bbe9-6d27-434c-995a-3a2ca424d245") : failed to sync configmap cache: timed out waiting for the condition Jan 05 20:05:35 crc kubenswrapper[4754]: E0105 20:05:35.859205 4754 projected.go:288] Couldn't get configMap openshift-multus/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 05 20:05:35 crc kubenswrapper[4754]: E0105 20:05:35.859248 4754 projected.go:194] Error preparing data for projected volume kube-api-access-drk86 for pod openshift-multus/multus-zkfjx: failed to sync configmap cache: timed out waiting for the condition Jan 05 20:05:35 crc kubenswrapper[4754]: E0105 20:05:35.859356 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fd02bbe9-6d27-434c-995a-3a2ca424d245-kube-api-access-drk86 podName:fd02bbe9-6d27-434c-995a-3a2ca424d245 nodeName:}" failed. No retries permitted until 2026-01-05 20:05:36.359331687 +0000 UTC m=+23.068515571 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-drk86" (UniqueName: "kubernetes.io/projected/fd02bbe9-6d27-434c-995a-3a2ca424d245-kube-api-access-drk86") pod "multus-zkfjx" (UID: "fd02bbe9-6d27-434c-995a-3a2ca424d245") : failed to sync configmap cache: timed out waiting for the condition Jan 05 20:05:35 crc kubenswrapper[4754]: I0105 20:05:35.864128 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T20:05:35Z is after 2025-08-24T17:21:41Z" Jan 05 20:05:35 crc kubenswrapper[4754]: I0105 20:05:35.871696 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 05 20:05:35 crc kubenswrapper[4754]: I0105 20:05:35.873810 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:35 crc kubenswrapper[4754]: I0105 20:05:35.879155 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T20:05:35Z is after 2025-08-24T17:21:41Z" Jan 05 20:05:35 crc kubenswrapper[4754]: W0105 20:05:35.884926 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65d4d365_a206_444c_b906_46a645aeaaf7.slice/crio-5f78fa066f9e3030072cebb2c5e6be0dc88a586a277f6fb4c9420dc8a5133581 WatchSource:0}: Error finding container 5f78fa066f9e3030072cebb2c5e6be0dc88a586a277f6fb4c9420dc8a5133581: Status 404 returned error can't find the container with id 5f78fa066f9e3030072cebb2c5e6be0dc88a586a277f6fb4c9420dc8a5133581 Jan 05 20:05:35 crc kubenswrapper[4754]: I0105 20:05:35.895054 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ce145f2-f010-4086-963c-23e68ff9e280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e331302bd9d06c17b0799ea286c0d1986a6e0f84dc9014e4f48d395accb930dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56b7d7218a5605f87f91e22b9cc79e416eedb63b
257d6326203830990e6ddc5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T20:05:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pkzls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T20:05:35Z is after 2025-08-24T17:21:41Z" Jan 05 20:05:35 crc kubenswrapper[4754]: I0105 20:05:35.917943 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j88vx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a963e4eb-9f2e-4646-ad12-6a878c888f25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t8f2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t8f2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t8f2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t8f2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t8f2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t8f2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t8f2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T20:05:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j88vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T20:05:35Z is after 2025-08-24T17:21:41Z" Jan 05 20:05:35 crc kubenswrapper[4754]: I0105 20:05:35.942508 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a69bbae-22d1-4837-a7d0-d1f6ee5f8659\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://059ef5db0bf79ee2cf729934e418a13b36a27a29d42a390f7350213a9198f932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38a03ee8953af50bf2fff5b7105068d2d7d842cdf0c574391a4d54749b01ca22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://390bcbe50677ac4c8cb350aeafb60830c75290b73e7b53253b80159acfd961c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
6-01-05T20:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7292a301bdffd4a5f2a0aa5b9a6e4342a475c800e1a85b3a2587787e6fd533d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd54cc7c38e5bba32485b4473fdd72a5a0c5b372f1d7db0cece913e8e409d215\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23530b438be3aca6b0fe35d3db82053f6657a6dc82dce88febba5f987835908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23530b438be3aca6b0fe35d3db82053f6657a6dc82dce88febba5f987835908\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T20:05:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T20:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T20:05:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T20:05:35Z is after 2025-08-24T17:21:41Z" Jan 05 20:05:35 crc kubenswrapper[4754]: I0105 20:05:35.943853 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 05 20:05:35 crc kubenswrapper[4754]: I0105 20:05:35.956251 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f00f349-5bf1-4023-98da-e7de2a227004\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03bfe3e82ae99818c49e7c72d1dc4f308d7af95bf28af8043e9e832a1b21abc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c242d9981c62ac494864a1509489965653e71656102883010a94a74b34d0360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9856ec7cdbdec899876548644f50704617e6ff1213ca7414204bc629e7c978e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4471dbe13ea078885052e0ac1bd33e3c2546b538a3116b814db463029020e16a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-05T20:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T20:05:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T20:05:35Z is after 2025-08-24T17:21:41Z" Jan 05 20:05:35 crc kubenswrapper[4754]: E0105 20:05:35.961611 4754 projected.go:288] Couldn't get configMap openshift-multus/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 05 20:05:35 crc kubenswrapper[4754]: E0105 20:05:35.961653 4754 projected.go:194] Error preparing data for projected volume kube-api-access-2t8f2 for pod openshift-multus/multus-additional-cni-plugins-j88vx: failed to sync configmap cache: timed out waiting for the condition Jan 05 20:05:35 crc kubenswrapper[4754]: E0105 20:05:35.961708 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a963e4eb-9f2e-4646-ad12-6a878c888f25-kube-api-access-2t8f2 podName:a963e4eb-9f2e-4646-ad12-6a878c888f25 nodeName:}" failed. No retries permitted until 2026-01-05 20:05:36.461689959 +0000 UTC m=+23.170873833 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2t8f2" (UniqueName: "kubernetes.io/projected/a963e4eb-9f2e-4646-ad12-6a878c888f25-kube-api-access-2t8f2") pod "multus-additional-cni-plugins-j88vx" (UID: "a963e4eb-9f2e-4646-ad12-6a878c888f25") : failed to sync configmap cache: timed out waiting for the condition Jan 05 20:05:35 crc kubenswrapper[4754]: I0105 20:05:35.971493 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T20:05:35Z is after 2025-08-24T17:21:41Z" Jan 05 20:05:35 crc kubenswrapper[4754]: I0105 20:05:35.995498 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T20:05:35Z is after 2025-08-24T17:21:41Z" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.007354 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zkfjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd02bbe9-6d27-434c-995a-3a2ca424d245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drk86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T20:05:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zkfjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T20:05:36Z is after 2025-08-24T17:21:41Z" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.012480 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.026116 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4d365-a206-444c-b906-46a645aeaaf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"message\\\":\\\"containers with 
incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fq6mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\
\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fq6mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fq6mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"m
ountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fq6mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fq6mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name
\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fq6mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\
\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fq6mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fq6mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\
\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fq6mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T20:05:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nnbc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T20:05:36Z is after 2025-08-24T17:21:41Z" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.042546 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T20:05:36Z is after 2025-08-24T17:21:41Z" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.070241 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.072773 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T20:05:36Z is after 2025-08-24T17:21:41Z" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.087024 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T20:05:36Z is after 2025-08-24T17:21:41Z" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.102201 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7bkfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9bfe599-a0e7-471f-9a33-2f095eea69d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ff4db994ee2fb6451419ba8294323636c8364cb37bdab444ddd59e900023789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qzlxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T20:05:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7bkfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T20:05:36Z is after 2025-08-24T17:21:41Z" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.119106 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e3b4e462e5b45f48eb88e7754c1e293edc4fbdb2890d013eca82bc533470623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T20:05:36Z is after 2025-08-24T17:21:41Z" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.143867 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T20:05:36Z is after 2025-08-24T17:21:41Z" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.159966 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ce145f2-f010-4086-963c-23e68ff9e280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e331302bd9d06c17b0799ea286c0d1986a6e0f84dc9014e4f48d395accb930dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56b7d7218a5605f87f91e22b9cc79e416eedb63b
257d6326203830990e6ddc5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T20:05:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pkzls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T20:05:36Z is after 2025-08-24T17:21:41Z" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.178456 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j88vx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a963e4eb-9f2e-4646-ad12-6a878c888f25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t8f2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t8f2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t8f2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t8f2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t8f2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t8f2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t8f2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T20:05:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j88vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T20:05:36Z is after 2025-08-24T17:21:41Z" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.198233 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4d365-a206-444c-b906-46a645aeaaf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fq6mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fq6mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fq6mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fq6mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fq6mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fq6mk\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fq6mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fq6mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-fq6mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T20:05:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nnbc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T20:05:36Z is after 2025-08-24T17:21:41Z" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.213054 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a69bbae-22d1-4837-a7d0-d1f6ee5f8659\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://059ef5db0bf79ee2cf729934e418a13b36a27a29d42a390f7350213a9198f932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38a03ee8953af50bf2fff5b7105068d2d7d842cdf0c574391a4d54749b01ca22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://390bcbe50677ac4c8cb350aeafb60830c75290b73e7b53253b80159acfd961c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:16Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7292a301bdffd4a5f2a0aa5b9a6e4342a475c800e1a85b3a2587787e6fd533d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd54cc7c38e5bba32485b4473fdd72a5a0c5b372f1d7db0cece913e8e409d215\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23530b438be3aca6b0fe35d3db82053f6657a6dc82dce88febba5f987835908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34
720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23530b438be3aca6b0fe35d3db82053f6657a6dc82dce88febba5f987835908\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T20:05:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T20:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T20:05:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T20:05:36Z is after 2025-08-24T17:21:41Z" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.234437 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f00f349-5bf1-4023-98da-e7de2a227004\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03bfe3e82ae99818c49e7c72d1dc4f308d7af95bf28af8043e9e832a1b21abc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c242d9981c62ac494864a1509489965653e71656102883010a94a74b34d0360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9856ec7cdbdec899876548644f50704617e6ff1213ca7414204bc629e7c978e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4471dbe13ea078885052e0ac1bd33e3c2546b538a3116b814db463029020e16a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-05T20:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T20:05:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T20:05:36Z is after 2025-08-24T17:21:41Z" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.247879 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b0cfa45bca91975327c4aeebfcf0d968d5a60eb2d529c6377c484788dccd4e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e5bed40a576c44dd312187ef601b2d26269d20361ab89a03158eb486b0d8590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T20:05:36Z is after 2025-08-24T17:21:41Z" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.262780 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T20:05:36Z is after 2025-08-24T17:21:41Z" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.279689 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zkfjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd02bbe9-6d27-434c-995a-3a2ca424d245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drk86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T20:05:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zkfjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T20:05:36Z is after 2025-08-24T17:21:41Z" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.360466 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/fd02bbe9-6d27-434c-995a-3a2ca424d245-multus-daemon-config\") pod \"multus-zkfjx\" (UID: \"fd02bbe9-6d27-434c-995a-3a2ca424d245\") " pod="openshift-multus/multus-zkfjx" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.360549 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drk86\" (UniqueName: \"kubernetes.io/projected/fd02bbe9-6d27-434c-995a-3a2ca424d245-kube-api-access-drk86\") pod \"multus-zkfjx\" (UID: \"fd02bbe9-6d27-434c-995a-3a2ca424d245\") " pod="openshift-multus/multus-zkfjx" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.361983 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/fd02bbe9-6d27-434c-995a-3a2ca424d245-multus-daemon-config\") pod \"multus-zkfjx\" (UID: \"fd02bbe9-6d27-434c-995a-3a2ca424d245\") " pod="openshift-multus/multus-zkfjx" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.367059 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-drk86\" (UniqueName: \"kubernetes.io/projected/fd02bbe9-6d27-434c-995a-3a2ca424d245-kube-api-access-drk86\") pod \"multus-zkfjx\" (UID: \"fd02bbe9-6d27-434c-995a-3a2ca424d245\") " pod="openshift-multus/multus-zkfjx" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.396441 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.398919 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.398982 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.398999 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.399139 4754 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.406832 4754 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.407171 4754 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.408461 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-zkfjx" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.408523 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.408561 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.408579 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.408603 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.408621 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:36Z","lastTransitionTime":"2026-01-05T20:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:36 crc kubenswrapper[4754]: E0105 20:05:36.426236 4754 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T20:05:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T20:05:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T20:05:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T20:05:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4483e6cc-d144-4eb0-8ee8-46170e462003\\\",\\\"systemUUID\\\":\\\"d80f8dc7-2091-41c3-9fa5-371478c52560\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T20:05:36Z is after 2025-08-24T17:21:41Z" Jan 05 20:05:36 crc kubenswrapper[4754]: W0105 20:05:36.427689 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd02bbe9_6d27_434c_995a_3a2ca424d245.slice/crio-ab2f6e080322f23a2d00effe5741868fd1b51727c165c5347684035c985dbc3e WatchSource:0}: Error finding container ab2f6e080322f23a2d00effe5741868fd1b51727c165c5347684035c985dbc3e: Status 404 returned error can't find the container with id ab2f6e080322f23a2d00effe5741868fd1b51727c165c5347684035c985dbc3e Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.431146 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.431193 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.431210 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.431232 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.431250 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:36Z","lastTransitionTime":"2026-01-05T20:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 20:05:36 crc kubenswrapper[4754]: E0105 20:05:36.452257 4754 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T20:05:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T20:05:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T20:05:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T20:05:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4483e6cc-d144-4eb0-8ee8-46170e462003\\\",\\\"systemUUID\\\":\\\"d80f8dc7-2091-41c3-9fa5-371478c52560\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T20:05:36Z is after 2025-08-24T17:21:41Z" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.456898 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.456943 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.456960 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.456984 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.457001 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:36Z","lastTransitionTime":"2026-01-05T20:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.466935 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2t8f2\" (UniqueName: \"kubernetes.io/projected/a963e4eb-9f2e-4646-ad12-6a878c888f25-kube-api-access-2t8f2\") pod \"multus-additional-cni-plugins-j88vx\" (UID: \"a963e4eb-9f2e-4646-ad12-6a878c888f25\") " pod="openshift-multus/multus-additional-cni-plugins-j88vx" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.474403 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2t8f2\" (UniqueName: \"kubernetes.io/projected/a963e4eb-9f2e-4646-ad12-6a878c888f25-kube-api-access-2t8f2\") pod \"multus-additional-cni-plugins-j88vx\" (UID: \"a963e4eb-9f2e-4646-ad12-6a878c888f25\") " pod="openshift-multus/multus-additional-cni-plugins-j88vx" Jan 05 20:05:36 crc kubenswrapper[4754]: E0105 20:05:36.483396 4754 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T20:05:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T20:05:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:36Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T20:05:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T20:05:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4483e6cc-d144-4eb0-8ee8-46170e462003\\\",\\\"systemUUID\\\":\\\"d80f8dc7-2091-41c3-9fa5-371478c52560\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T20:05:36Z is after 2025-08-24T17:21:41Z" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.487243 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.487272 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.487535 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.487549 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.487560 4754 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:36Z","lastTransitionTime":"2026-01-05T20:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 20:05:36 crc kubenswrapper[4754]: E0105 20:05:36.508490 4754 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T20:05:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T20:05:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T20:05:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T20:05:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4483e6cc-d144-4eb0-8ee8-46170e462003\\\",\\\"systemUUID\\\":\\\"d80f8dc7-2091-41c3-9fa5-371478c52560\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T20:05:36Z is after 2025-08-24T17:21:41Z" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.512206 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.512253 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.512263 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.512281 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.512348 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:36Z","lastTransitionTime":"2026-01-05T20:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:36 crc kubenswrapper[4754]: E0105 20:05:36.527386 4754 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T20:05:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T20:05:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T20:05:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T20:05:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4483e6cc-d144-4eb0-8ee8-46170e462003\\\",\\\"systemUUID\\\":\\\"d80f8dc7-2091-41c3-9fa5-371478c52560\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T20:05:36Z is after 2025-08-24T17:21:41Z" Jan 05 20:05:36 crc kubenswrapper[4754]: E0105 20:05:36.527513 4754 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.529105 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.529133 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.529146 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.529164 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.529177 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:36Z","lastTransitionTime":"2026-01-05T20:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.567363 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.567492 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 20:05:36 crc kubenswrapper[4754]: E0105 20:05:36.567528 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:05:40.567502375 +0000 UTC m=+27.276686249 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:36 crc kubenswrapper[4754]: E0105 20:05:36.567590 4754 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 05 20:05:36 crc kubenswrapper[4754]: E0105 20:05:36.567633 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-05 20:05:40.567620838 +0000 UTC m=+27.276804712 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.631383 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.631424 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.631436 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.631452 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.631463 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:36Z","lastTransitionTime":"2026-01-05T20:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.668476 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.668523 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.668566 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 20:05:36 crc kubenswrapper[4754]: E0105 20:05:36.668659 4754 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 05 20:05:36 crc kubenswrapper[4754]: E0105 20:05:36.668755 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-05 20:05:40.668734798 +0000 UTC m=+27.377918672 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 05 20:05:36 crc kubenswrapper[4754]: E0105 20:05:36.668668 4754 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 05 20:05:36 crc kubenswrapper[4754]: E0105 20:05:36.668803 4754 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 05 20:05:36 crc kubenswrapper[4754]: E0105 20:05:36.668818 4754 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 20:05:36 crc kubenswrapper[4754]: E0105 20:05:36.668871 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-05 20:05:40.668854361 +0000 UTC m=+27.378038295 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 20:05:36 crc kubenswrapper[4754]: E0105 20:05:36.668670 4754 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 05 20:05:36 crc kubenswrapper[4754]: E0105 20:05:36.668899 4754 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 05 20:05:36 crc kubenswrapper[4754]: E0105 20:05:36.668908 4754 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 20:05:36 crc kubenswrapper[4754]: E0105 20:05:36.668934 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-05 20:05:40.668925093 +0000 UTC m=+27.378109057 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.733339 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.733373 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.733382 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.733396 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.733406 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:36Z","lastTransitionTime":"2026-01-05T20:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.736912 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-j88vx" Jan 05 20:05:36 crc kubenswrapper[4754]: W0105 20:05:36.752583 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda963e4eb_9f2e_4646_ad12_6a878c888f25.slice/crio-7fbc0a843f1987fefad3a6e4e762700df963e34dc4cfc81b1f06a8737af1ee06 WatchSource:0}: Error finding container 7fbc0a843f1987fefad3a6e4e762700df963e34dc4cfc81b1f06a8737af1ee06: Status 404 returned error can't find the container with id 7fbc0a843f1987fefad3a6e4e762700df963e34dc4cfc81b1f06a8737af1ee06 Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.776493 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"0ba3cae4d6f52f859605f76401ec068f5f35eb4f4e2c065cb334429c66b61136"} Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.778229 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zkfjx" event={"ID":"fd02bbe9-6d27-434c-995a-3a2ca424d245","Type":"ContainerStarted","Data":"dfcbc6c76187a6e9e294463df3222a9a9bbe319c59aea069f4aa2430fd18f64d"} Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.778257 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zkfjx" event={"ID":"fd02bbe9-6d27-434c-995a-3a2ca424d245","Type":"ContainerStarted","Data":"ab2f6e080322f23a2d00effe5741868fd1b51727c165c5347684035c985dbc3e"} Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.780602 4754 generic.go:334] "Generic (PLEG): container finished" podID="65d4d365-a206-444c-b906-46a645aeaaf7" containerID="c2a0e5825103650cedf2de937b5d81817e48a68dbd5f22161cdf0017bc5ffd49" exitCode=0 Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.780819 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" 
event={"ID":"65d4d365-a206-444c-b906-46a645aeaaf7","Type":"ContainerDied","Data":"c2a0e5825103650cedf2de937b5d81817e48a68dbd5f22161cdf0017bc5ffd49"} Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.780842 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" event={"ID":"65d4d365-a206-444c-b906-46a645aeaaf7","Type":"ContainerStarted","Data":"5f78fa066f9e3030072cebb2c5e6be0dc88a586a277f6fb4c9420dc8a5133581"} Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.785732 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-j88vx" event={"ID":"a963e4eb-9f2e-4646-ad12-6a878c888f25","Type":"ContainerStarted","Data":"7fbc0a843f1987fefad3a6e4e762700df963e34dc4cfc81b1f06a8737af1ee06"} Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.791874 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e3b4e462e5b45f48eb88e7754c1e293edc4fbdb2890d013eca82bc533470623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-05T20:05:36Z is after 2025-08-24T17:21:41Z" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.805058 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ba3cae4d6f52f859605f76401ec068f5f35eb4f4e2c065cb334429c66b61136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T20:05:36Z is after 2025-08-24T17:21:41Z" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.816789 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ce145f2-f010-4086-963c-23e68ff9e280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e331302bd9d06c17b0799ea286c0d1986a6e0f84dc9014e4f48d395accb930dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56b7d7218a5605f87f91e22b9cc79e416eedb63b257d6326203830990e6ddc5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T20:05:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pkzls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-01-05T20:05:36Z is after 2025-08-24T17:21:41Z" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.828520 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j88vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a963e4eb-9f2e-4646-ad12-6a878c888f25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t8f2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t8f2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t8f2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t8f2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t8f2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t8f2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t8f2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T20:05:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j88vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T20:05:36Z is after 2025-08-24T17:21:41Z" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.843364 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.843425 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.843438 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.843458 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.843474 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:36Z","lastTransitionTime":"2026-01-05T20:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.844535 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4d365-a206-444c-b906-46a645aeaaf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fq6mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fq6mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fq6mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fq6mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fq6mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fq6mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fq6mk\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fq6mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fq6mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T20:05:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nnbc8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T20:05:36Z is after 2025-08-24T17:21:41Z" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.860847 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a69bbae-22d1-4837-a7d0-d1f6ee5f8659\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://059ef5db0bf79ee2cf729934e418a13b36a27a29d42a390f7350213a9198f932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38a03ee8953af50bf2fff5b7105068d2d7d842cdf0c574391a4d54749b01ca22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://390bcbe50677ac4c8cb350aeafb60830c75290b73e7b53253b80159acfd961c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7292a301bdffd4a5f2a0aa5b9a6e4342a475c800e1a85b3a2587787e6fd533d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd54cc7c38e5bba32485b4473fdd72a5a0c5b372f1d7db0cece913e8e409d215\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23530b438be3aca6b0fe35d3db82053f6657a6dc82dce88febba5f987835908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://c23530b438be3aca6b0fe35d3db82053f6657a6dc82dce88febba5f987835908\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T20:05:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T20:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T20:05:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T20:05:36Z is after 2025-08-24T17:21:41Z" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.873706 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f00f349-5bf1-4023-98da-e7de2a227004\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T20:05:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03bfe3e82ae99818c49e7c72d1dc4f308d7af95bf28af8043e9e832a1b21abc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c242d9981c62ac494864a1509489965653e71656102883010a94a74b34d0360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9856ec7cdbdec899876548644f50704617e6ff1213ca7414204bc629e7c978e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T20:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4471dbe13ea078885052e0ac1bd33e3c2546b538a3116b814db463029020e16a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-05T20:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T20:05:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T20:05:36Z is after 2025-08-24T17:21:41Z" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.948250 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.948366 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.948388 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.948414 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.948445 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:36Z","lastTransitionTime":"2026-01-05T20:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.990928 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-7bkfj" podStartSLOduration=4.990898287 podStartE2EDuration="4.990898287s" podCreationTimestamp="2026-01-05 20:05:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:05:36.980429755 +0000 UTC m=+23.689613629" watchObservedRunningTime="2026-01-05 20:05:36.990898287 +0000 UTC m=+23.700082161" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.991681 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-lp8pt"] Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.992265 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-lp8pt" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.995040 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.995328 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.995623 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 05 20:05:36 crc kubenswrapper[4754]: I0105 20:05:36.995770 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.048321 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podStartSLOduration=4.04828307 
podStartE2EDuration="4.04828307s" podCreationTimestamp="2026-01-05 20:05:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:05:37.027384886 +0000 UTC m=+23.736568760" watchObservedRunningTime="2026-01-05 20:05:37.04828307 +0000 UTC m=+23.757466944" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.050148 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.050176 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.050184 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.050198 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.050208 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:37Z","lastTransitionTime":"2026-01-05T20:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.064464 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-zkfjx" podStartSLOduration=4.06444197 podStartE2EDuration="4.06444197s" podCreationTimestamp="2026-01-05 20:05:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:05:37.064397549 +0000 UTC m=+23.773581423" watchObservedRunningTime="2026-01-05 20:05:37.06444197 +0000 UTC m=+23.773625854" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.072411 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trscs\" (UniqueName: \"kubernetes.io/projected/e2380af6-8db8-4a24-99c9-57ca3d9d14c8-kube-api-access-trscs\") pod \"node-ca-lp8pt\" (UID: \"e2380af6-8db8-4a24-99c9-57ca3d9d14c8\") " pod="openshift-image-registry/node-ca-lp8pt" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.072501 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e2380af6-8db8-4a24-99c9-57ca3d9d14c8-host\") pod \"node-ca-lp8pt\" (UID: \"e2380af6-8db8-4a24-99c9-57ca3d9d14c8\") " pod="openshift-image-registry/node-ca-lp8pt" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.072524 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e2380af6-8db8-4a24-99c9-57ca3d9d14c8-serviceca\") pod \"node-ca-lp8pt\" (UID: \"e2380af6-8db8-4a24-99c9-57ca3d9d14c8\") " pod="openshift-image-registry/node-ca-lp8pt" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.143606 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=6.143590039 
podStartE2EDuration="6.143590039s" podCreationTimestamp="2026-01-05 20:05:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:05:37.143209109 +0000 UTC m=+23.852392983" watchObservedRunningTime="2026-01-05 20:05:37.143590039 +0000 UTC m=+23.852773913" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.153421 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.153458 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.153470 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.153489 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.153504 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:37Z","lastTransitionTime":"2026-01-05T20:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.165751 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=3.165730394 podStartE2EDuration="3.165730394s" podCreationTimestamp="2026-01-05 20:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:05:37.165215751 +0000 UTC m=+23.874399625" watchObservedRunningTime="2026-01-05 20:05:37.165730394 +0000 UTC m=+23.874914268" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.174067 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trscs\" (UniqueName: \"kubernetes.io/projected/e2380af6-8db8-4a24-99c9-57ca3d9d14c8-kube-api-access-trscs\") pod \"node-ca-lp8pt\" (UID: \"e2380af6-8db8-4a24-99c9-57ca3d9d14c8\") " pod="openshift-image-registry/node-ca-lp8pt" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.174122 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e2380af6-8db8-4a24-99c9-57ca3d9d14c8-host\") pod \"node-ca-lp8pt\" (UID: \"e2380af6-8db8-4a24-99c9-57ca3d9d14c8\") " pod="openshift-image-registry/node-ca-lp8pt" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.174150 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e2380af6-8db8-4a24-99c9-57ca3d9d14c8-serviceca\") pod \"node-ca-lp8pt\" (UID: \"e2380af6-8db8-4a24-99c9-57ca3d9d14c8\") " pod="openshift-image-registry/node-ca-lp8pt" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.175219 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e2380af6-8db8-4a24-99c9-57ca3d9d14c8-serviceca\") pod \"node-ca-lp8pt\" (UID: 
\"e2380af6-8db8-4a24-99c9-57ca3d9d14c8\") " pod="openshift-image-registry/node-ca-lp8pt" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.175488 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e2380af6-8db8-4a24-99c9-57ca3d9d14c8-host\") pod \"node-ca-lp8pt\" (UID: \"e2380af6-8db8-4a24-99c9-57ca3d9d14c8\") " pod="openshift-image-registry/node-ca-lp8pt" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.191730 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trscs\" (UniqueName: \"kubernetes.io/projected/e2380af6-8db8-4a24-99c9-57ca3d9d14c8-kube-api-access-trscs\") pod \"node-ca-lp8pt\" (UID: \"e2380af6-8db8-4a24-99c9-57ca3d9d14c8\") " pod="openshift-image-registry/node-ca-lp8pt" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.255391 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpjc"] Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.255823 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpjc" Jan 05 20:05:37 crc kubenswrapper[4754]: W0105 20:05:37.259558 4754 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd": failed to list *v1.Secret: secrets "ovn-kubernetes-control-plane-dockercfg-gs7dd" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 05 20:05:37 crc kubenswrapper[4754]: E0105 20:05:37.259603 4754 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-gs7dd\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-kubernetes-control-plane-dockercfg-gs7dd\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 05 20:05:37 crc kubenswrapper[4754]: W0105 20:05:37.259670 4754 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert": failed to list *v1.Secret: secrets "ovn-control-plane-metrics-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 05 20:05:37 crc kubenswrapper[4754]: E0105 20:05:37.259683 4754 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-control-plane-metrics-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 05 20:05:37 crc 
kubenswrapper[4754]: I0105 20:05:37.260672 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.260717 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.260730 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.260745 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.260755 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:37Z","lastTransitionTime":"2026-01-05T20:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.274618 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/760fbd0b-95e8-482b-83ef-449d01298157-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-rzpjc\" (UID: \"760fbd0b-95e8-482b-83ef-449d01298157\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpjc" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.274648 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9p52\" (UniqueName: \"kubernetes.io/projected/760fbd0b-95e8-482b-83ef-449d01298157-kube-api-access-c9p52\") pod \"ovnkube-control-plane-749d76644c-rzpjc\" (UID: \"760fbd0b-95e8-482b-83ef-449d01298157\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpjc" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.274672 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/760fbd0b-95e8-482b-83ef-449d01298157-env-overrides\") pod \"ovnkube-control-plane-749d76644c-rzpjc\" (UID: \"760fbd0b-95e8-482b-83ef-449d01298157\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpjc" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.274702 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/760fbd0b-95e8-482b-83ef-449d01298157-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-rzpjc\" (UID: \"760fbd0b-95e8-482b-83ef-449d01298157\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpjc" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.302844 4754 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-multus/network-metrics-daemon-9vzl4"] Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.303319 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9vzl4" Jan 05 20:05:37 crc kubenswrapper[4754]: E0105 20:05:37.303388 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9vzl4" podUID="fd4959ff-bb15-47bd-ae6c-1b0e53fc84f1" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.307915 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-lp8pt" Jan 05 20:05:37 crc kubenswrapper[4754]: W0105 20:05:37.319337 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2380af6_8db8_4a24_99c9_57ca3d9d14c8.slice/crio-0742e487f3f4e3ad65290eb536cfe1bafb83a8bb74258418840ed75e6788641c WatchSource:0}: Error finding container 0742e487f3f4e3ad65290eb536cfe1bafb83a8bb74258418840ed75e6788641c: Status 404 returned error can't find the container with id 0742e487f3f4e3ad65290eb536cfe1bafb83a8bb74258418840ed75e6788641c Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.363166 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.363199 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.363208 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 
20:05:37.363222 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.363232 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:37Z","lastTransitionTime":"2026-01-05T20:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.375307 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqnv2\" (UniqueName: \"kubernetes.io/projected/fd4959ff-bb15-47bd-ae6c-1b0e53fc84f1-kube-api-access-qqnv2\") pod \"network-metrics-daemon-9vzl4\" (UID: \"fd4959ff-bb15-47bd-ae6c-1b0e53fc84f1\") " pod="openshift-multus/network-metrics-daemon-9vzl4" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.375351 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9p52\" (UniqueName: \"kubernetes.io/projected/760fbd0b-95e8-482b-83ef-449d01298157-kube-api-access-c9p52\") pod \"ovnkube-control-plane-749d76644c-rzpjc\" (UID: \"760fbd0b-95e8-482b-83ef-449d01298157\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpjc" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.375381 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fd4959ff-bb15-47bd-ae6c-1b0e53fc84f1-metrics-certs\") pod \"network-metrics-daemon-9vzl4\" (UID: \"fd4959ff-bb15-47bd-ae6c-1b0e53fc84f1\") " pod="openshift-multus/network-metrics-daemon-9vzl4" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.375411 4754 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/760fbd0b-95e8-482b-83ef-449d01298157-env-overrides\") pod \"ovnkube-control-plane-749d76644c-rzpjc\" (UID: \"760fbd0b-95e8-482b-83ef-449d01298157\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpjc" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.375441 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/760fbd0b-95e8-482b-83ef-449d01298157-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-rzpjc\" (UID: \"760fbd0b-95e8-482b-83ef-449d01298157\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpjc" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.375495 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/760fbd0b-95e8-482b-83ef-449d01298157-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-rzpjc\" (UID: \"760fbd0b-95e8-482b-83ef-449d01298157\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpjc" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.376016 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/760fbd0b-95e8-482b-83ef-449d01298157-env-overrides\") pod \"ovnkube-control-plane-749d76644c-rzpjc\" (UID: \"760fbd0b-95e8-482b-83ef-449d01298157\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpjc" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.376182 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/760fbd0b-95e8-482b-83ef-449d01298157-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-rzpjc\" (UID: \"760fbd0b-95e8-482b-83ef-449d01298157\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpjc" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.391221 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9p52\" (UniqueName: \"kubernetes.io/projected/760fbd0b-95e8-482b-83ef-449d01298157-kube-api-access-c9p52\") pod \"ovnkube-control-plane-749d76644c-rzpjc\" (UID: \"760fbd0b-95e8-482b-83ef-449d01298157\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpjc" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.466498 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.466631 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.466649 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.466674 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.466691 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:37Z","lastTransitionTime":"2026-01-05T20:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.476596 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqnv2\" (UniqueName: \"kubernetes.io/projected/fd4959ff-bb15-47bd-ae6c-1b0e53fc84f1-kube-api-access-qqnv2\") pod \"network-metrics-daemon-9vzl4\" (UID: \"fd4959ff-bb15-47bd-ae6c-1b0e53fc84f1\") " pod="openshift-multus/network-metrics-daemon-9vzl4" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.476655 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fd4959ff-bb15-47bd-ae6c-1b0e53fc84f1-metrics-certs\") pod \"network-metrics-daemon-9vzl4\" (UID: \"fd4959ff-bb15-47bd-ae6c-1b0e53fc84f1\") " pod="openshift-multus/network-metrics-daemon-9vzl4" Jan 05 20:05:37 crc kubenswrapper[4754]: E0105 20:05:37.476804 4754 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 05 20:05:37 crc kubenswrapper[4754]: E0105 20:05:37.476886 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fd4959ff-bb15-47bd-ae6c-1b0e53fc84f1-metrics-certs podName:fd4959ff-bb15-47bd-ae6c-1b0e53fc84f1 nodeName:}" failed. No retries permitted until 2026-01-05 20:05:37.976865037 +0000 UTC m=+24.686048941 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fd4959ff-bb15-47bd-ae6c-1b0e53fc84f1-metrics-certs") pod "network-metrics-daemon-9vzl4" (UID: "fd4959ff-bb15-47bd-ae6c-1b0e53fc84f1") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.498358 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqnv2\" (UniqueName: \"kubernetes.io/projected/fd4959ff-bb15-47bd-ae6c-1b0e53fc84f1-kube-api-access-qqnv2\") pod \"network-metrics-daemon-9vzl4\" (UID: \"fd4959ff-bb15-47bd-ae6c-1b0e53fc84f1\") " pod="openshift-multus/network-metrics-daemon-9vzl4" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.569217 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.569264 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.569280 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.569352 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.569370 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:37Z","lastTransitionTime":"2026-01-05T20:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.588390 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.588427 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.588407 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 20:05:37 crc kubenswrapper[4754]: E0105 20:05:37.588599 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 20:05:37 crc kubenswrapper[4754]: E0105 20:05:37.588731 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 20:05:37 crc kubenswrapper[4754]: E0105 20:05:37.588865 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.661095 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.673439 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.673478 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.673495 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.673519 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.673539 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:37Z","lastTransitionTime":"2026-01-05T20:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.686231 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.689432 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.738156 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=0.738128882 podStartE2EDuration="738.128882ms" podCreationTimestamp="2026-01-05 20:05:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:05:37.736780847 +0000 UTC m=+24.445964781" watchObservedRunningTime="2026-01-05 20:05:37.738128882 +0000 UTC m=+24.447312786" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.776018 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.776057 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.776068 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.776084 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.776096 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:37Z","lastTransitionTime":"2026-01-05T20:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.796452 4754 generic.go:334] "Generic (PLEG): container finished" podID="a963e4eb-9f2e-4646-ad12-6a878c888f25" containerID="dcdfa80364d8db3fa05e7a55ec8bdcceaec821a1dc9a55da995e010f6fad26d9" exitCode=0 Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.796504 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-j88vx" event={"ID":"a963e4eb-9f2e-4646-ad12-6a878c888f25","Type":"ContainerDied","Data":"dcdfa80364d8db3fa05e7a55ec8bdcceaec821a1dc9a55da995e010f6fad26d9"} Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.801069 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" event={"ID":"65d4d365-a206-444c-b906-46a645aeaaf7","Type":"ContainerStarted","Data":"6676e8e9aaa9b0227a86d3578f04b0aac66e04a01e2938fe839e34ae9ea7ad74"} Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.801104 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" event={"ID":"65d4d365-a206-444c-b906-46a645aeaaf7","Type":"ContainerStarted","Data":"72f1375fb2f1f0ef880bdc4251982635cb8d64df6222e4446be06dc74972ac9d"} Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.801116 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" event={"ID":"65d4d365-a206-444c-b906-46a645aeaaf7","Type":"ContainerStarted","Data":"562018e62756e4d806e10bffd883e0d48dd06d63536c49955f69346f7330f22b"} Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.801126 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" event={"ID":"65d4d365-a206-444c-b906-46a645aeaaf7","Type":"ContainerStarted","Data":"87a442110d6e01bbbf02c516d39048dc69b3e8f5b878020d3a3d793eec0b83e5"} Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 
20:05:37.801136 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" event={"ID":"65d4d365-a206-444c-b906-46a645aeaaf7","Type":"ContainerStarted","Data":"fb190592654feb1d6db07b00eb24382c65016d4a33d27ea1091b6e6055a7692b"} Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.801145 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" event={"ID":"65d4d365-a206-444c-b906-46a645aeaaf7","Type":"ContainerStarted","Data":"9bd17ad3481cfcbeb056db17a088d9cb6a94cbe476761bdb7c920a4d7b724ae8"} Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.802768 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-lp8pt" event={"ID":"e2380af6-8db8-4a24-99c9-57ca3d9d14c8","Type":"ContainerStarted","Data":"4f829f027d40d09fd592171a0b7de57146381b9de1c2861e00307106c300d3c0"} Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.802807 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-lp8pt" event={"ID":"e2380af6-8db8-4a24-99c9-57ca3d9d14c8","Type":"ContainerStarted","Data":"0742e487f3f4e3ad65290eb536cfe1bafb83a8bb74258418840ed75e6788641c"} Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.833203 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-lp8pt" podStartSLOduration=5.833183064 podStartE2EDuration="5.833183064s" podCreationTimestamp="2026-01-05 20:05:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:05:37.832140877 +0000 UTC m=+24.541324761" watchObservedRunningTime="2026-01-05 20:05:37.833183064 +0000 UTC m=+24.542366938" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.878867 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:37 crc 
kubenswrapper[4754]: I0105 20:05:37.878926 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.878935 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.878949 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.878982 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:37Z","lastTransitionTime":"2026-01-05T20:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.981048 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.981076 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.981083 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.981096 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.981105 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:37Z","lastTransitionTime":"2026-01-05T20:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 20:05:37 crc kubenswrapper[4754]: I0105 20:05:37.981814 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fd4959ff-bb15-47bd-ae6c-1b0e53fc84f1-metrics-certs\") pod \"network-metrics-daemon-9vzl4\" (UID: \"fd4959ff-bb15-47bd-ae6c-1b0e53fc84f1\") " pod="openshift-multus/network-metrics-daemon-9vzl4" Jan 05 20:05:37 crc kubenswrapper[4754]: E0105 20:05:37.982124 4754 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 05 20:05:37 crc kubenswrapper[4754]: E0105 20:05:37.982263 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fd4959ff-bb15-47bd-ae6c-1b0e53fc84f1-metrics-certs podName:fd4959ff-bb15-47bd-ae6c-1b0e53fc84f1 nodeName:}" failed. No retries permitted until 2026-01-05 20:05:38.982245101 +0000 UTC m=+25.691428975 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fd4959ff-bb15-47bd-ae6c-1b0e53fc84f1-metrics-certs") pod "network-metrics-daemon-9vzl4" (UID: "fd4959ff-bb15-47bd-ae6c-1b0e53fc84f1") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 05 20:05:38 crc kubenswrapper[4754]: I0105 20:05:38.083930 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:38 crc kubenswrapper[4754]: I0105 20:05:38.083967 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:38 crc kubenswrapper[4754]: I0105 20:05:38.083979 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:38 crc kubenswrapper[4754]: I0105 20:05:38.083996 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:38 crc kubenswrapper[4754]: I0105 20:05:38.084009 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:38Z","lastTransitionTime":"2026-01-05T20:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:38 crc kubenswrapper[4754]: I0105 20:05:38.186640 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:38 crc kubenswrapper[4754]: I0105 20:05:38.186673 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:38 crc kubenswrapper[4754]: I0105 20:05:38.186685 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:38 crc kubenswrapper[4754]: I0105 20:05:38.186703 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:38 crc kubenswrapper[4754]: I0105 20:05:38.186714 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:38Z","lastTransitionTime":"2026-01-05T20:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:38 crc kubenswrapper[4754]: I0105 20:05:38.288655 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:38 crc kubenswrapper[4754]: I0105 20:05:38.288698 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:38 crc kubenswrapper[4754]: I0105 20:05:38.288710 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:38 crc kubenswrapper[4754]: I0105 20:05:38.288727 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:38 crc kubenswrapper[4754]: I0105 20:05:38.288744 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:38Z","lastTransitionTime":"2026-01-05T20:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 20:05:38 crc kubenswrapper[4754]: E0105 20:05:38.376494 4754 secret.go:188] Couldn't get secret openshift-ovn-kubernetes/ovn-control-plane-metrics-cert: failed to sync secret cache: timed out waiting for the condition Jan 05 20:05:38 crc kubenswrapper[4754]: E0105 20:05:38.376584 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/760fbd0b-95e8-482b-83ef-449d01298157-ovn-control-plane-metrics-cert podName:760fbd0b-95e8-482b-83ef-449d01298157 nodeName:}" failed. No retries permitted until 2026-01-05 20:05:38.876565397 +0000 UTC m=+25.585749271 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "ovn-control-plane-metrics-cert" (UniqueName: "kubernetes.io/secret/760fbd0b-95e8-482b-83ef-449d01298157-ovn-control-plane-metrics-cert") pod "ovnkube-control-plane-749d76644c-rzpjc" (UID: "760fbd0b-95e8-482b-83ef-449d01298157") : failed to sync secret cache: timed out waiting for the condition Jan 05 20:05:38 crc kubenswrapper[4754]: I0105 20:05:38.391066 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:38 crc kubenswrapper[4754]: I0105 20:05:38.391117 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:38 crc kubenswrapper[4754]: I0105 20:05:38.391132 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:38 crc kubenswrapper[4754]: I0105 20:05:38.391150 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:38 crc kubenswrapper[4754]: I0105 20:05:38.391163 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:38Z","lastTransitionTime":"2026-01-05T20:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:38 crc kubenswrapper[4754]: I0105 20:05:38.494912 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:38 crc kubenswrapper[4754]: I0105 20:05:38.495001 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:38 crc kubenswrapper[4754]: I0105 20:05:38.495020 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:38 crc kubenswrapper[4754]: I0105 20:05:38.495047 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:38 crc kubenswrapper[4754]: I0105 20:05:38.495067 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:38Z","lastTransitionTime":"2026-01-05T20:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:38 crc kubenswrapper[4754]: I0105 20:05:38.577893 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 05 20:05:38 crc kubenswrapper[4754]: I0105 20:05:38.598185 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:38 crc kubenswrapper[4754]: I0105 20:05:38.598261 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:38 crc kubenswrapper[4754]: I0105 20:05:38.598281 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:38 crc kubenswrapper[4754]: I0105 20:05:38.598351 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:38 crc kubenswrapper[4754]: I0105 20:05:38.598374 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:38Z","lastTransitionTime":"2026-01-05T20:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:38 crc kubenswrapper[4754]: I0105 20:05:38.702208 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:38 crc kubenswrapper[4754]: I0105 20:05:38.702275 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:38 crc kubenswrapper[4754]: I0105 20:05:38.702403 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:38 crc kubenswrapper[4754]: I0105 20:05:38.702492 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:38 crc kubenswrapper[4754]: I0105 20:05:38.702523 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:38Z","lastTransitionTime":"2026-01-05T20:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:38 crc kubenswrapper[4754]: I0105 20:05:38.785990 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 05 20:05:38 crc kubenswrapper[4754]: I0105 20:05:38.806070 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:38 crc kubenswrapper[4754]: I0105 20:05:38.806497 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:38 crc kubenswrapper[4754]: I0105 20:05:38.806711 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:38 crc kubenswrapper[4754]: I0105 20:05:38.806929 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:38 crc kubenswrapper[4754]: I0105 20:05:38.807138 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:38Z","lastTransitionTime":"2026-01-05T20:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 05 20:05:38 crc kubenswrapper[4754]: I0105 20:05:38.809022 4754 generic.go:334] "Generic (PLEG): container finished" podID="a963e4eb-9f2e-4646-ad12-6a878c888f25" containerID="0e0352f9bed82478f36080ce88b00129d515aa9b36295cb5e75940f03e36c31d" exitCode=0
Jan 05 20:05:38 crc kubenswrapper[4754]: I0105 20:05:38.809283 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-j88vx" event={"ID":"a963e4eb-9f2e-4646-ad12-6a878c888f25","Type":"ContainerDied","Data":"0e0352f9bed82478f36080ce88b00129d515aa9b36295cb5e75940f03e36c31d"}
Jan 05 20:05:38 crc kubenswrapper[4754]: I0105 20:05:38.890666 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/760fbd0b-95e8-482b-83ef-449d01298157-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-rzpjc\" (UID: \"760fbd0b-95e8-482b-83ef-449d01298157\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpjc"
Jan 05 20:05:38 crc kubenswrapper[4754]: I0105 20:05:38.895899 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/760fbd0b-95e8-482b-83ef-449d01298157-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-rzpjc\" (UID: \"760fbd0b-95e8-482b-83ef-449d01298157\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpjc"
Jan 05 20:05:38 crc kubenswrapper[4754]: I0105 20:05:38.910827 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 20:05:38 crc kubenswrapper[4754]: I0105 20:05:38.910870 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 20:05:38 crc kubenswrapper[4754]: I0105 20:05:38.910885 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 20:05:38 crc kubenswrapper[4754]: I0105 20:05:38.910907 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 20:05:38 crc kubenswrapper[4754]: I0105 20:05:38.910924 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:38Z","lastTransitionTime":"2026-01-05T20:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 20:05:38 crc kubenswrapper[4754]: I0105 20:05:38.991308 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fd4959ff-bb15-47bd-ae6c-1b0e53fc84f1-metrics-certs\") pod \"network-metrics-daemon-9vzl4\" (UID: \"fd4959ff-bb15-47bd-ae6c-1b0e53fc84f1\") " pod="openshift-multus/network-metrics-daemon-9vzl4"
Jan 05 20:05:38 crc kubenswrapper[4754]: E0105 20:05:38.991457 4754 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 05 20:05:38 crc kubenswrapper[4754]: E0105 20:05:38.991509 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fd4959ff-bb15-47bd-ae6c-1b0e53fc84f1-metrics-certs podName:fd4959ff-bb15-47bd-ae6c-1b0e53fc84f1 nodeName:}" failed. No retries permitted until 2026-01-05 20:05:40.991495021 +0000 UTC m=+27.700678905 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fd4959ff-bb15-47bd-ae6c-1b0e53fc84f1-metrics-certs") pod "network-metrics-daemon-9vzl4" (UID: "fd4959ff-bb15-47bd-ae6c-1b0e53fc84f1") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.012741 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.012770 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.012781 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.012797 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.012809 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:39Z","lastTransitionTime":"2026-01-05T20:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.072137 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpjc"
Jan 05 20:05:39 crc kubenswrapper[4754]: W0105 20:05:39.091764 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod760fbd0b_95e8_482b_83ef_449d01298157.slice/crio-2d2866632cdbf94738cffcffa79fd67cef35fd4ba2e74ef54fce1c9412cc6264 WatchSource:0}: Error finding container 2d2866632cdbf94738cffcffa79fd67cef35fd4ba2e74ef54fce1c9412cc6264: Status 404 returned error can't find the container with id 2d2866632cdbf94738cffcffa79fd67cef35fd4ba2e74ef54fce1c9412cc6264
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.116354 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.116406 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.116423 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.116448 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.116467 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:39Z","lastTransitionTime":"2026-01-05T20:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.218587 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.218656 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.218671 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.218692 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.218704 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:39Z","lastTransitionTime":"2026-01-05T20:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.321676 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.321722 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.321743 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.321768 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.321787 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:39Z","lastTransitionTime":"2026-01-05T20:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.424424 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.424466 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.424474 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.424491 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.424501 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:39Z","lastTransitionTime":"2026-01-05T20:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.527318 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.527352 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.527362 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.527378 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.527390 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:39Z","lastTransitionTime":"2026-01-05T20:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.588541 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 05 20:05:39 crc kubenswrapper[4754]: E0105 20:05:39.589280 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.588568 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9vzl4"
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.589611 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 05 20:05:39 crc kubenswrapper[4754]: E0105 20:05:39.589800 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 05 20:05:39 crc kubenswrapper[4754]: E0105 20:05:39.589736 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9vzl4" podUID="fd4959ff-bb15-47bd-ae6c-1b0e53fc84f1"
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.589961 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 05 20:05:39 crc kubenswrapper[4754]: E0105 20:05:39.590150 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.631175 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.631248 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.631272 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.631338 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.631366 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:39Z","lastTransitionTime":"2026-01-05T20:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.735549 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.735908 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.736115 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.736347 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.736578 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:39Z","lastTransitionTime":"2026-01-05T20:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.817749 4754 generic.go:334] "Generic (PLEG): container finished" podID="a963e4eb-9f2e-4646-ad12-6a878c888f25" containerID="d4c2f1cc21890d46723f5c482af1b341ec0d8074181719ee6bfedb0cc2ca48be" exitCode=0
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.817903 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-j88vx" event={"ID":"a963e4eb-9f2e-4646-ad12-6a878c888f25","Type":"ContainerDied","Data":"d4c2f1cc21890d46723f5c482af1b341ec0d8074181719ee6bfedb0cc2ca48be"}
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.822717 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpjc" event={"ID":"760fbd0b-95e8-482b-83ef-449d01298157","Type":"ContainerStarted","Data":"630b2bd45e20769e0cc30fe4b2bcfd466cf4206c809977e0cffd7b899ee17e02"}
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.822781 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpjc" event={"ID":"760fbd0b-95e8-482b-83ef-449d01298157","Type":"ContainerStarted","Data":"2d2866632cdbf94738cffcffa79fd67cef35fd4ba2e74ef54fce1c9412cc6264"}
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.839199 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.839280 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.839402 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.839438 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.839461 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:39Z","lastTransitionTime":"2026-01-05T20:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.942491 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.942564 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.942595 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.942627 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 20:05:39 crc kubenswrapper[4754]: I0105 20:05:39.942652 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:39Z","lastTransitionTime":"2026-01-05T20:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.045709 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.045765 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.045788 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.045819 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.045844 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:40Z","lastTransitionTime":"2026-01-05T20:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.149637 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.149719 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.149740 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.149766 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.149786 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:40Z","lastTransitionTime":"2026-01-05T20:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.253274 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.253358 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.253375 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.253480 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.253511 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:40Z","lastTransitionTime":"2026-01-05T20:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.355760 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.355815 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.355833 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.355856 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.355872 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:40Z","lastTransitionTime":"2026-01-05T20:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.458571 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.458635 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.458652 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.458678 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.458697 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:40Z","lastTransitionTime":"2026-01-05T20:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.561471 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.561515 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.561522 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.561537 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.561546 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:40Z","lastTransitionTime":"2026-01-05T20:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.613662 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.613841 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 05 20:05:40 crc kubenswrapper[4754]: E0105 20:05:40.613899 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:05:48.613868256 +0000 UTC m=+35.323052140 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 05 20:05:40 crc kubenswrapper[4754]: E0105 20:05:40.613981 4754 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 05 20:05:40 crc kubenswrapper[4754]: E0105 20:05:40.614047 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-05 20:05:48.61402896 +0000 UTC m=+35.323212844 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.664955 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.665016 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.665035 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.665061 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.665082 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:40Z","lastTransitionTime":"2026-01-05T20:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.717982 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.718086 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.718176 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 05 20:05:40 crc kubenswrapper[4754]: E0105 20:05:40.718236 4754 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 05 20:05:40 crc kubenswrapper[4754]: E0105 20:05:40.718365 4754 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 05 20:05:40 crc kubenswrapper[4754]: E0105 20:05:40.718411 4754 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 05 20:05:40 crc kubenswrapper[4754]: E0105 20:05:40.718433 4754 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 05 20:05:40 crc kubenswrapper[4754]: E0105 20:05:40.718467 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-05 20:05:48.718408845 +0000 UTC m=+35.427592759 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 05 20:05:40 crc kubenswrapper[4754]: E0105 20:05:40.718475 4754 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 05 20:05:40 crc kubenswrapper[4754]: E0105 20:05:40.718509 4754 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 05 20:05:40 crc kubenswrapper[4754]: E0105 20:05:40.718509 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-05 20:05:48.718486697 +0000 UTC m=+35.427670571 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 05 20:05:40 crc kubenswrapper[4754]: E0105 20:05:40.718531 4754 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 05 20:05:40 crc kubenswrapper[4754]: E0105 20:05:40.718619 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-05 20:05:48.71859287 +0000 UTC m=+35.427776774 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.768277 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.768364 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.768376 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.768393 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.768408 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:40Z","lastTransitionTime":"2026-01-05T20:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.828473 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpjc" event={"ID":"760fbd0b-95e8-482b-83ef-449d01298157","Type":"ContainerStarted","Data":"71cdd8dcf61612ec786d667f0b3796ba2e5d2c64b7ae098796c0cce5f9e1b1e3"}
Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.835509 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" event={"ID":"65d4d365-a206-444c-b906-46a645aeaaf7","Type":"ContainerStarted","Data":"54b9789da0f84dc7076dfad5b77d345a92d9ccd5d6712eb5d5bd603c8f12c9bd"}
Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.839851 4754 generic.go:334] "Generic (PLEG): container finished" podID="a963e4eb-9f2e-4646-ad12-6a878c888f25" containerID="2e932e4edc5ce6aaa0094f2570df41b8d22239ef18845063552820e783ebf375" exitCode=0
Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.839883 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-j88vx" event={"ID":"a963e4eb-9f2e-4646-ad12-6a878c888f25","Type":"ContainerDied","Data":"2e932e4edc5ce6aaa0094f2570df41b8d22239ef18845063552820e783ebf375"}
Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.873216 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.873256 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.873266 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.873284 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 05 20:05:40 crc kubenswrapper[4754]:
I0105 20:05:40.873332 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:40Z","lastTransitionTime":"2026-01-05T20:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.875495 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpjc" podStartSLOduration=6.875478471 podStartE2EDuration="6.875478471s" podCreationTimestamp="2026-01-05 20:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:05:40.847113533 +0000 UTC m=+27.556297407" watchObservedRunningTime="2026-01-05 20:05:40.875478471 +0000 UTC m=+27.584662345" Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.976820 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.976864 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.976876 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.976890 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:40 crc kubenswrapper[4754]: I0105 20:05:40.976899 4754 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:40Z","lastTransitionTime":"2026-01-05T20:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 20:05:41 crc kubenswrapper[4754]: I0105 20:05:41.027807 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fd4959ff-bb15-47bd-ae6c-1b0e53fc84f1-metrics-certs\") pod \"network-metrics-daemon-9vzl4\" (UID: \"fd4959ff-bb15-47bd-ae6c-1b0e53fc84f1\") " pod="openshift-multus/network-metrics-daemon-9vzl4" Jan 05 20:05:41 crc kubenswrapper[4754]: E0105 20:05:41.027960 4754 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 05 20:05:41 crc kubenswrapper[4754]: E0105 20:05:41.028147 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fd4959ff-bb15-47bd-ae6c-1b0e53fc84f1-metrics-certs podName:fd4959ff-bb15-47bd-ae6c-1b0e53fc84f1 nodeName:}" failed. No retries permitted until 2026-01-05 20:05:45.028132761 +0000 UTC m=+31.737316635 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fd4959ff-bb15-47bd-ae6c-1b0e53fc84f1-metrics-certs") pod "network-metrics-daemon-9vzl4" (UID: "fd4959ff-bb15-47bd-ae6c-1b0e53fc84f1") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 05 20:05:41 crc kubenswrapper[4754]: I0105 20:05:41.079432 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:41 crc kubenswrapper[4754]: I0105 20:05:41.079489 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:41 crc kubenswrapper[4754]: I0105 20:05:41.079500 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:41 crc kubenswrapper[4754]: I0105 20:05:41.079519 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:41 crc kubenswrapper[4754]: I0105 20:05:41.079532 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:41Z","lastTransitionTime":"2026-01-05T20:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:41 crc kubenswrapper[4754]: I0105 20:05:41.182959 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:41 crc kubenswrapper[4754]: I0105 20:05:41.183020 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:41 crc kubenswrapper[4754]: I0105 20:05:41.183031 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:41 crc kubenswrapper[4754]: I0105 20:05:41.183058 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:41 crc kubenswrapper[4754]: I0105 20:05:41.183079 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:41Z","lastTransitionTime":"2026-01-05T20:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:41 crc kubenswrapper[4754]: I0105 20:05:41.285956 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:41 crc kubenswrapper[4754]: I0105 20:05:41.286029 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:41 crc kubenswrapper[4754]: I0105 20:05:41.286093 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:41 crc kubenswrapper[4754]: I0105 20:05:41.286122 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:41 crc kubenswrapper[4754]: I0105 20:05:41.286189 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:41Z","lastTransitionTime":"2026-01-05T20:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:41 crc kubenswrapper[4754]: I0105 20:05:41.389415 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:41 crc kubenswrapper[4754]: I0105 20:05:41.389461 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:41 crc kubenswrapper[4754]: I0105 20:05:41.389471 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:41 crc kubenswrapper[4754]: I0105 20:05:41.389486 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:41 crc kubenswrapper[4754]: I0105 20:05:41.389499 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:41Z","lastTransitionTime":"2026-01-05T20:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:41 crc kubenswrapper[4754]: I0105 20:05:41.492535 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:41 crc kubenswrapper[4754]: I0105 20:05:41.492610 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:41 crc kubenswrapper[4754]: I0105 20:05:41.492631 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:41 crc kubenswrapper[4754]: I0105 20:05:41.492660 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:41 crc kubenswrapper[4754]: I0105 20:05:41.492679 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:41Z","lastTransitionTime":"2026-01-05T20:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 20:05:41 crc kubenswrapper[4754]: I0105 20:05:41.588091 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 20:05:41 crc kubenswrapper[4754]: I0105 20:05:41.588215 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 20:05:41 crc kubenswrapper[4754]: E0105 20:05:41.588431 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 20:05:41 crc kubenswrapper[4754]: I0105 20:05:41.589000 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 20:05:41 crc kubenswrapper[4754]: E0105 20:05:41.589113 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 20:05:41 crc kubenswrapper[4754]: I0105 20:05:41.589187 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9vzl4" Jan 05 20:05:41 crc kubenswrapper[4754]: E0105 20:05:41.589279 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9vzl4" podUID="fd4959ff-bb15-47bd-ae6c-1b0e53fc84f1" Jan 05 20:05:41 crc kubenswrapper[4754]: E0105 20:05:41.591479 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 20:05:41 crc kubenswrapper[4754]: I0105 20:05:41.596644 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:41 crc kubenswrapper[4754]: I0105 20:05:41.596686 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:41 crc kubenswrapper[4754]: I0105 20:05:41.596720 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:41 crc kubenswrapper[4754]: I0105 20:05:41.596741 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:41 crc kubenswrapper[4754]: I0105 20:05:41.596758 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:41Z","lastTransitionTime":"2026-01-05T20:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:41 crc kubenswrapper[4754]: I0105 20:05:41.699711 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:41 crc kubenswrapper[4754]: I0105 20:05:41.699760 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:41 crc kubenswrapper[4754]: I0105 20:05:41.699774 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:41 crc kubenswrapper[4754]: I0105 20:05:41.699793 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:41 crc kubenswrapper[4754]: I0105 20:05:41.699806 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:41Z","lastTransitionTime":"2026-01-05T20:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:41 crc kubenswrapper[4754]: I0105 20:05:41.803084 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:41 crc kubenswrapper[4754]: I0105 20:05:41.803124 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:41 crc kubenswrapper[4754]: I0105 20:05:41.803132 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:41 crc kubenswrapper[4754]: I0105 20:05:41.803148 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:41 crc kubenswrapper[4754]: I0105 20:05:41.803158 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:41Z","lastTransitionTime":"2026-01-05T20:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:41 crc kubenswrapper[4754]: I0105 20:05:41.846540 4754 generic.go:334] "Generic (PLEG): container finished" podID="a963e4eb-9f2e-4646-ad12-6a878c888f25" containerID="2c98ab77f062f3b07f4dca24bf4535271a6cbcbe8633d3e69b9f639bc21af91b" exitCode=0 Jan 05 20:05:41 crc kubenswrapper[4754]: I0105 20:05:41.846953 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-j88vx" event={"ID":"a963e4eb-9f2e-4646-ad12-6a878c888f25","Type":"ContainerDied","Data":"2c98ab77f062f3b07f4dca24bf4535271a6cbcbe8633d3e69b9f639bc21af91b"} Jan 05 20:05:41 crc kubenswrapper[4754]: I0105 20:05:41.907176 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:41 crc kubenswrapper[4754]: I0105 20:05:41.907210 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:41 crc kubenswrapper[4754]: I0105 20:05:41.907219 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:41 crc kubenswrapper[4754]: I0105 20:05:41.907231 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:41 crc kubenswrapper[4754]: I0105 20:05:41.907239 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:41Z","lastTransitionTime":"2026-01-05T20:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:42 crc kubenswrapper[4754]: I0105 20:05:42.010348 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:42 crc kubenswrapper[4754]: I0105 20:05:42.010379 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:42 crc kubenswrapper[4754]: I0105 20:05:42.010390 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:42 crc kubenswrapper[4754]: I0105 20:05:42.010407 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:42 crc kubenswrapper[4754]: I0105 20:05:42.010418 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:42Z","lastTransitionTime":"2026-01-05T20:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:42 crc kubenswrapper[4754]: I0105 20:05:42.114813 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:42 crc kubenswrapper[4754]: I0105 20:05:42.114887 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:42 crc kubenswrapper[4754]: I0105 20:05:42.114908 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:42 crc kubenswrapper[4754]: I0105 20:05:42.115013 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:42 crc kubenswrapper[4754]: I0105 20:05:42.115038 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:42Z","lastTransitionTime":"2026-01-05T20:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:42 crc kubenswrapper[4754]: I0105 20:05:42.218397 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:42 crc kubenswrapper[4754]: I0105 20:05:42.218435 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:42 crc kubenswrapper[4754]: I0105 20:05:42.218446 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:42 crc kubenswrapper[4754]: I0105 20:05:42.218462 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:42 crc kubenswrapper[4754]: I0105 20:05:42.218474 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:42Z","lastTransitionTime":"2026-01-05T20:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:42 crc kubenswrapper[4754]: I0105 20:05:42.321017 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:42 crc kubenswrapper[4754]: I0105 20:05:42.321051 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:42 crc kubenswrapper[4754]: I0105 20:05:42.321060 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:42 crc kubenswrapper[4754]: I0105 20:05:42.321073 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:42 crc kubenswrapper[4754]: I0105 20:05:42.321081 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:42Z","lastTransitionTime":"2026-01-05T20:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:42 crc kubenswrapper[4754]: I0105 20:05:42.424264 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:42 crc kubenswrapper[4754]: I0105 20:05:42.424417 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:42 crc kubenswrapper[4754]: I0105 20:05:42.424442 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:42 crc kubenswrapper[4754]: I0105 20:05:42.424474 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:42 crc kubenswrapper[4754]: I0105 20:05:42.424498 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:42Z","lastTransitionTime":"2026-01-05T20:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:42 crc kubenswrapper[4754]: I0105 20:05:42.526881 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:42 crc kubenswrapper[4754]: I0105 20:05:42.526950 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:42 crc kubenswrapper[4754]: I0105 20:05:42.526975 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:42 crc kubenswrapper[4754]: I0105 20:05:42.527004 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:42 crc kubenswrapper[4754]: I0105 20:05:42.527027 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:42Z","lastTransitionTime":"2026-01-05T20:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:42 crc kubenswrapper[4754]: I0105 20:05:42.630128 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:42 crc kubenswrapper[4754]: I0105 20:05:42.630788 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:42 crc kubenswrapper[4754]: I0105 20:05:42.631003 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:42 crc kubenswrapper[4754]: I0105 20:05:42.631203 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:42 crc kubenswrapper[4754]: I0105 20:05:42.631424 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:42Z","lastTransitionTime":"2026-01-05T20:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:42 crc kubenswrapper[4754]: I0105 20:05:42.736004 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:42 crc kubenswrapper[4754]: I0105 20:05:42.736060 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:42 crc kubenswrapper[4754]: I0105 20:05:42.736078 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:42 crc kubenswrapper[4754]: I0105 20:05:42.736103 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:42 crc kubenswrapper[4754]: I0105 20:05:42.736121 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:42Z","lastTransitionTime":"2026-01-05T20:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:42 crc kubenswrapper[4754]: I0105 20:05:42.838874 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:42 crc kubenswrapper[4754]: I0105 20:05:42.838939 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:42 crc kubenswrapper[4754]: I0105 20:05:42.838959 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:42 crc kubenswrapper[4754]: I0105 20:05:42.838985 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:42 crc kubenswrapper[4754]: I0105 20:05:42.839003 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:42Z","lastTransitionTime":"2026-01-05T20:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:42 crc kubenswrapper[4754]: I0105 20:05:42.942127 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:42 crc kubenswrapper[4754]: I0105 20:05:42.942183 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:42 crc kubenswrapper[4754]: I0105 20:05:42.942197 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:42 crc kubenswrapper[4754]: I0105 20:05:42.942216 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:42 crc kubenswrapper[4754]: I0105 20:05:42.942228 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:42Z","lastTransitionTime":"2026-01-05T20:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.045555 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.045618 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.045637 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.045661 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.045680 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:43Z","lastTransitionTime":"2026-01-05T20:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.149329 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.149386 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.149408 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.149432 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.149449 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:43Z","lastTransitionTime":"2026-01-05T20:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.253097 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.253170 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.253196 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.253229 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.253257 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:43Z","lastTransitionTime":"2026-01-05T20:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.356557 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.356636 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.356660 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.356688 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.356709 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:43Z","lastTransitionTime":"2026-01-05T20:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.442226 4754 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.469726 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.469792 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.469816 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.469849 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.469872 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:43Z","lastTransitionTime":"2026-01-05T20:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.573069 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.573121 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.573139 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.573163 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.573180 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:43Z","lastTransitionTime":"2026-01-05T20:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.587839 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 20:05:43 crc kubenswrapper[4754]: E0105 20:05:43.592139 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.592843 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9vzl4" Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.592915 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 20:05:43 crc kubenswrapper[4754]: E0105 20:05:43.593074 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9vzl4" podUID="fd4959ff-bb15-47bd-ae6c-1b0e53fc84f1" Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.593135 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 20:05:43 crc kubenswrapper[4754]: E0105 20:05:43.593256 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 20:05:43 crc kubenswrapper[4754]: E0105 20:05:43.593419 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.675666 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.675713 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.675732 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.675754 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.675770 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:43Z","lastTransitionTime":"2026-01-05T20:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.778246 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.778424 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.778441 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.778460 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.778471 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:43Z","lastTransitionTime":"2026-01-05T20:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.859068 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" event={"ID":"65d4d365-a206-444c-b906-46a645aeaaf7","Type":"ContainerStarted","Data":"f802d543d9afeaaa4701207a73c24c69aac64fd0ff6828db7268e91e7aa41a5f"} Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.859597 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.864107 4754 generic.go:334] "Generic (PLEG): container finished" podID="a963e4eb-9f2e-4646-ad12-6a878c888f25" containerID="56ff9cb6ffe9ad004d26c2d822eae7d7d4d139e7e1211a53eab5b117798ed0cc" exitCode=0 Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.864149 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-j88vx" event={"ID":"a963e4eb-9f2e-4646-ad12-6a878c888f25","Type":"ContainerDied","Data":"56ff9cb6ffe9ad004d26c2d822eae7d7d4d139e7e1211a53eab5b117798ed0cc"} Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.883144 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.883218 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.883245 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.883277 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.883344 4754 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:43Z","lastTransitionTime":"2026-01-05T20:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.893025 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.897020 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" podStartSLOduration=10.897004777 podStartE2EDuration="10.897004777s" podCreationTimestamp="2026-01-05 20:05:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:05:43.896679719 +0000 UTC m=+30.605863623" watchObservedRunningTime="2026-01-05 20:05:43.897004777 +0000 UTC m=+30.606188651" Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.940894 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.986878 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.986941 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.986960 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.986992 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 05 20:05:43 crc kubenswrapper[4754]: I0105 20:05:43.987013 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:43Z","lastTransitionTime":"2026-01-05T20:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 20:05:44 crc kubenswrapper[4754]: I0105 20:05:44.089510 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:44 crc kubenswrapper[4754]: I0105 20:05:44.089558 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:44 crc kubenswrapper[4754]: I0105 20:05:44.089569 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:44 crc kubenswrapper[4754]: I0105 20:05:44.089584 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:44 crc kubenswrapper[4754]: I0105 20:05:44.089596 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:44Z","lastTransitionTime":"2026-01-05T20:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:44 crc kubenswrapper[4754]: I0105 20:05:44.191639 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:44 crc kubenswrapper[4754]: I0105 20:05:44.191713 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:44 crc kubenswrapper[4754]: I0105 20:05:44.191727 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:44 crc kubenswrapper[4754]: I0105 20:05:44.191746 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:44 crc kubenswrapper[4754]: I0105 20:05:44.191758 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:44Z","lastTransitionTime":"2026-01-05T20:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:44 crc kubenswrapper[4754]: I0105 20:05:44.309658 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:44 crc kubenswrapper[4754]: I0105 20:05:44.309726 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:44 crc kubenswrapper[4754]: I0105 20:05:44.309746 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:44 crc kubenswrapper[4754]: I0105 20:05:44.309771 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:44 crc kubenswrapper[4754]: I0105 20:05:44.309788 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:44Z","lastTransitionTime":"2026-01-05T20:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:44 crc kubenswrapper[4754]: I0105 20:05:44.412929 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:44 crc kubenswrapper[4754]: I0105 20:05:44.412990 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:44 crc kubenswrapper[4754]: I0105 20:05:44.413013 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:44 crc kubenswrapper[4754]: I0105 20:05:44.413040 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:44 crc kubenswrapper[4754]: I0105 20:05:44.413062 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:44Z","lastTransitionTime":"2026-01-05T20:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:44 crc kubenswrapper[4754]: I0105 20:05:44.516651 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:44 crc kubenswrapper[4754]: I0105 20:05:44.516703 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:44 crc kubenswrapper[4754]: I0105 20:05:44.516721 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:44 crc kubenswrapper[4754]: I0105 20:05:44.516743 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:44 crc kubenswrapper[4754]: I0105 20:05:44.516761 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:44Z","lastTransitionTime":"2026-01-05T20:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:44 crc kubenswrapper[4754]: I0105 20:05:44.620422 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:44 crc kubenswrapper[4754]: I0105 20:05:44.620486 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:44 crc kubenswrapper[4754]: I0105 20:05:44.620504 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:44 crc kubenswrapper[4754]: I0105 20:05:44.620526 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:44 crc kubenswrapper[4754]: I0105 20:05:44.620543 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:44Z","lastTransitionTime":"2026-01-05T20:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:44 crc kubenswrapper[4754]: I0105 20:05:44.723952 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:44 crc kubenswrapper[4754]: I0105 20:05:44.724028 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:44 crc kubenswrapper[4754]: I0105 20:05:44.724050 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:44 crc kubenswrapper[4754]: I0105 20:05:44.724081 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:44 crc kubenswrapper[4754]: I0105 20:05:44.724104 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:44Z","lastTransitionTime":"2026-01-05T20:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:44 crc kubenswrapper[4754]: I0105 20:05:44.826995 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:44 crc kubenswrapper[4754]: I0105 20:05:44.827060 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:44 crc kubenswrapper[4754]: I0105 20:05:44.827078 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:44 crc kubenswrapper[4754]: I0105 20:05:44.827100 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:44 crc kubenswrapper[4754]: I0105 20:05:44.827119 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:44Z","lastTransitionTime":"2026-01-05T20:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:44 crc kubenswrapper[4754]: I0105 20:05:44.868537 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:44 crc kubenswrapper[4754]: I0105 20:05:44.887920 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:05:44 crc kubenswrapper[4754]: I0105 20:05:44.930565 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:44 crc kubenswrapper[4754]: I0105 20:05:44.930663 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:44 crc kubenswrapper[4754]: I0105 20:05:44.930692 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:44 crc kubenswrapper[4754]: I0105 20:05:44.930722 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:44 crc kubenswrapper[4754]: I0105 20:05:44.930743 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:44Z","lastTransitionTime":"2026-01-05T20:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.033504 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.033600 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.033626 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.033662 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.033688 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:45Z","lastTransitionTime":"2026-01-05T20:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.076205 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fd4959ff-bb15-47bd-ae6c-1b0e53fc84f1-metrics-certs\") pod \"network-metrics-daemon-9vzl4\" (UID: \"fd4959ff-bb15-47bd-ae6c-1b0e53fc84f1\") " pod="openshift-multus/network-metrics-daemon-9vzl4" Jan 05 20:05:45 crc kubenswrapper[4754]: E0105 20:05:45.076443 4754 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 05 20:05:45 crc kubenswrapper[4754]: E0105 20:05:45.076562 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fd4959ff-bb15-47bd-ae6c-1b0e53fc84f1-metrics-certs podName:fd4959ff-bb15-47bd-ae6c-1b0e53fc84f1 nodeName:}" failed. No retries permitted until 2026-01-05 20:05:53.076531396 +0000 UTC m=+39.785715310 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fd4959ff-bb15-47bd-ae6c-1b0e53fc84f1-metrics-certs") pod "network-metrics-daemon-9vzl4" (UID: "fd4959ff-bb15-47bd-ae6c-1b0e53fc84f1") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.136892 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.136974 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.136994 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.137023 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.137042 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:45Z","lastTransitionTime":"2026-01-05T20:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.239822 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.239866 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.239877 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.239895 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.239906 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:45Z","lastTransitionTime":"2026-01-05T20:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.342109 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.342139 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.342147 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.342160 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.342169 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:45Z","lastTransitionTime":"2026-01-05T20:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.444168 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.444231 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.444245 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.444261 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.444317 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:45Z","lastTransitionTime":"2026-01-05T20:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.547153 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.547206 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.547231 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.547256 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.547273 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:45Z","lastTransitionTime":"2026-01-05T20:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.588643 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9vzl4" Jan 05 20:05:45 crc kubenswrapper[4754]: E0105 20:05:45.588816 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9vzl4" podUID="fd4959ff-bb15-47bd-ae6c-1b0e53fc84f1" Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.588882 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.588980 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 20:05:45 crc kubenswrapper[4754]: E0105 20:05:45.589114 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 20:05:45 crc kubenswrapper[4754]: E0105 20:05:45.589195 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.589206 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 20:05:45 crc kubenswrapper[4754]: E0105 20:05:45.589280 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.650179 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.650215 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.650225 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.650239 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.650248 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:45Z","lastTransitionTime":"2026-01-05T20:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.753145 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.753212 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.753230 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.753258 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.753276 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:45Z","lastTransitionTime":"2026-01-05T20:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.800592 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-9vzl4"] Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.856170 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.856208 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.856219 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.856234 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.856246 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:45Z","lastTransitionTime":"2026-01-05T20:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.875743 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-j88vx" event={"ID":"a963e4eb-9f2e-4646-ad12-6a878c888f25","Type":"ContainerStarted","Data":"d2b46b7d9ed92a3afe68e6c2d0317472e19c158ae10a4967495301b4e566569c"} Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.876183 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-9vzl4" Jan 05 20:05:45 crc kubenswrapper[4754]: E0105 20:05:45.876856 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9vzl4" podUID="fd4959ff-bb15-47bd-ae6c-1b0e53fc84f1" Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.959805 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.959838 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.959848 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.959864 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:45 crc kubenswrapper[4754]: I0105 20:05:45.959875 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:45Z","lastTransitionTime":"2026-01-05T20:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.062007 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.062078 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.062097 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.062120 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.062137 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:46Z","lastTransitionTime":"2026-01-05T20:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.165336 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.165395 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.165411 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.165437 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.165454 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:46Z","lastTransitionTime":"2026-01-05T20:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.268760 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.268836 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.268856 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.268882 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.268900 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:46Z","lastTransitionTime":"2026-01-05T20:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.371022 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.371064 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.371074 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.371090 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.371103 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:46Z","lastTransitionTime":"2026-01-05T20:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.473124 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.473164 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.473176 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.473190 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.473198 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:46Z","lastTransitionTime":"2026-01-05T20:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.575990 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.576055 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.576072 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.576095 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.576115 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:46Z","lastTransitionTime":"2026-01-05T20:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.678208 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.678265 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.678282 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.678344 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.678369 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:46Z","lastTransitionTime":"2026-01-05T20:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.713497 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.713565 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.713584 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.713610 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.713628 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T20:05:46Z","lastTransitionTime":"2026-01-05T20:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.767945 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-j88vx" podStartSLOduration=13.767918018 podStartE2EDuration="13.767918018s" podCreationTimestamp="2026-01-05 20:05:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:05:45.901521544 +0000 UTC m=+32.610705418" watchObservedRunningTime="2026-01-05 20:05:46.767918018 +0000 UTC m=+33.477101902" Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.769109 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-8j8j2"] Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.769651 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8j8j2" Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.771157 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.772565 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.773579 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.775847 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.895988 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8324ea31-da71-4a3b-8085-8a771c9eda9c-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-8j8j2\" (UID: \"8324ea31-da71-4a3b-8085-8a771c9eda9c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8j8j2" Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.896086 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8324ea31-da71-4a3b-8085-8a771c9eda9c-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-8j8j2\" (UID: \"8324ea31-da71-4a3b-8085-8a771c9eda9c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8j8j2" Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.896258 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8324ea31-da71-4a3b-8085-8a771c9eda9c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-8j8j2\" (UID: \"8324ea31-da71-4a3b-8085-8a771c9eda9c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8j8j2" Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.896424 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8324ea31-da71-4a3b-8085-8a771c9eda9c-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-8j8j2\" (UID: \"8324ea31-da71-4a3b-8085-8a771c9eda9c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8j8j2" Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.896492 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8324ea31-da71-4a3b-8085-8a771c9eda9c-service-ca\") pod \"cluster-version-operator-5c965bbfc6-8j8j2\" (UID: \"8324ea31-da71-4a3b-8085-8a771c9eda9c\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8j8j2" Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.997462 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8324ea31-da71-4a3b-8085-8a771c9eda9c-service-ca\") pod \"cluster-version-operator-5c965bbfc6-8j8j2\" (UID: \"8324ea31-da71-4a3b-8085-8a771c9eda9c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8j8j2" Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.997552 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8324ea31-da71-4a3b-8085-8a771c9eda9c-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-8j8j2\" (UID: \"8324ea31-da71-4a3b-8085-8a771c9eda9c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8j8j2" Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.997620 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8324ea31-da71-4a3b-8085-8a771c9eda9c-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-8j8j2\" (UID: \"8324ea31-da71-4a3b-8085-8a771c9eda9c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8j8j2" Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.997691 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8324ea31-da71-4a3b-8085-8a771c9eda9c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-8j8j2\" (UID: \"8324ea31-da71-4a3b-8085-8a771c9eda9c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8j8j2" Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.997830 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/8324ea31-da71-4a3b-8085-8a771c9eda9c-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-8j8j2\" (UID: \"8324ea31-da71-4a3b-8085-8a771c9eda9c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8j8j2" Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.997916 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8324ea31-da71-4a3b-8085-8a771c9eda9c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-8j8j2\" (UID: \"8324ea31-da71-4a3b-8085-8a771c9eda9c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8j8j2" Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.998191 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8324ea31-da71-4a3b-8085-8a771c9eda9c-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-8j8j2\" (UID: \"8324ea31-da71-4a3b-8085-8a771c9eda9c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8j8j2" Jan 05 20:05:46 crc kubenswrapper[4754]: I0105 20:05:46.998670 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8324ea31-da71-4a3b-8085-8a771c9eda9c-service-ca\") pod \"cluster-version-operator-5c965bbfc6-8j8j2\" (UID: \"8324ea31-da71-4a3b-8085-8a771c9eda9c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8j8j2" Jan 05 20:05:47 crc kubenswrapper[4754]: I0105 20:05:47.011107 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8324ea31-da71-4a3b-8085-8a771c9eda9c-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-8j8j2\" (UID: \"8324ea31-da71-4a3b-8085-8a771c9eda9c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8j8j2" Jan 05 20:05:47 crc kubenswrapper[4754]: I0105 
20:05:47.020034 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8324ea31-da71-4a3b-8085-8a771c9eda9c-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-8j8j2\" (UID: \"8324ea31-da71-4a3b-8085-8a771c9eda9c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8j8j2" Jan 05 20:05:47 crc kubenswrapper[4754]: I0105 20:05:47.090703 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8j8j2" Jan 05 20:05:47 crc kubenswrapper[4754]: W0105 20:05:47.123917 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8324ea31_da71_4a3b_8085_8a771c9eda9c.slice/crio-f0cdcafb43cb499f0115402c4b0fb43c123cd4da63e14698aaee3f11fac858c3 WatchSource:0}: Error finding container f0cdcafb43cb499f0115402c4b0fb43c123cd4da63e14698aaee3f11fac858c3: Status 404 returned error can't find the container with id f0cdcafb43cb499f0115402c4b0fb43c123cd4da63e14698aaee3f11fac858c3 Jan 05 20:05:47 crc kubenswrapper[4754]: I0105 20:05:47.588068 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 20:05:47 crc kubenswrapper[4754]: I0105 20:05:47.588068 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 20:05:47 crc kubenswrapper[4754]: I0105 20:05:47.588210 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-9vzl4" Jan 05 20:05:47 crc kubenswrapper[4754]: E0105 20:05:47.588268 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 20:05:47 crc kubenswrapper[4754]: I0105 20:05:47.588387 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 20:05:47 crc kubenswrapper[4754]: E0105 20:05:47.588554 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 20:05:47 crc kubenswrapper[4754]: E0105 20:05:47.588680 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9vzl4" podUID="fd4959ff-bb15-47bd-ae6c-1b0e53fc84f1" Jan 05 20:05:47 crc kubenswrapper[4754]: E0105 20:05:47.588865 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 20:05:47 crc kubenswrapper[4754]: I0105 20:05:47.885947 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8j8j2" event={"ID":"8324ea31-da71-4a3b-8085-8a771c9eda9c","Type":"ContainerStarted","Data":"f0cdcafb43cb499f0115402c4b0fb43c123cd4da63e14698aaee3f11fac858c3"} Jan 05 20:05:48 crc kubenswrapper[4754]: I0105 20:05:48.619615 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:05:48 crc kubenswrapper[4754]: E0105 20:05:48.619834 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:06:04.619790283 +0000 UTC m=+51.328974177 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:48 crc kubenswrapper[4754]: I0105 20:05:48.619959 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 20:05:48 crc kubenswrapper[4754]: E0105 20:05:48.620154 4754 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 05 20:05:48 crc kubenswrapper[4754]: E0105 20:05:48.620221 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-05 20:06:04.620209263 +0000 UTC m=+51.329393147 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 05 20:05:48 crc kubenswrapper[4754]: I0105 20:05:48.720656 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 20:05:48 crc kubenswrapper[4754]: I0105 20:05:48.720728 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 20:05:48 crc kubenswrapper[4754]: I0105 20:05:48.720802 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 20:05:48 crc kubenswrapper[4754]: E0105 20:05:48.721021 4754 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 05 20:05:48 crc kubenswrapper[4754]: E0105 20:05:48.721236 4754 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 05 20:05:48 crc kubenswrapper[4754]: E0105 20:05:48.721255 4754 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 20:05:48 crc kubenswrapper[4754]: E0105 20:05:48.721348 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-05 20:06:04.721326323 +0000 UTC m=+51.430510227 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 20:05:48 crc kubenswrapper[4754]: E0105 20:05:48.721420 4754 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 05 20:05:48 crc kubenswrapper[4754]: E0105 20:05:48.721462 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-05 20:06:04.721446667 +0000 UTC m=+51.430630571 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 05 20:05:48 crc kubenswrapper[4754]: E0105 20:05:48.721776 4754 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 05 20:05:48 crc kubenswrapper[4754]: E0105 20:05:48.721843 4754 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 05 20:05:48 crc kubenswrapper[4754]: E0105 20:05:48.721870 4754 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 20:05:48 crc kubenswrapper[4754]: E0105 20:05:48.722004 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-05 20:06:04.7219643 +0000 UTC m=+51.431148214 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.588126 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.588160 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.588264 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 20:05:49 crc kubenswrapper[4754]: E0105 20:05:49.588283 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.588376 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-9vzl4" Jan 05 20:05:49 crc kubenswrapper[4754]: E0105 20:05:49.588469 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 20:05:49 crc kubenswrapper[4754]: E0105 20:05:49.588602 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9vzl4" podUID="fd4959ff-bb15-47bd-ae6c-1b0e53fc84f1" Jan 05 20:05:49 crc kubenswrapper[4754]: E0105 20:05:49.588823 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.592776 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.592966 4754 kubelet_node_status.go:538] "Fast updating node status as it just became ready" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.634570 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-99q7g"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.635119 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-99q7g" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.638262 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.638881 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.642984 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.643886 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.644396 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zl4mz"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.645359 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zl4mz" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.646180 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-hz59l"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.654103 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.654703 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-hz59l" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.655894 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.656077 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.656208 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.656271 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-srhjf"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.656565 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.656743 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-srhjf" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.663149 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.663236 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.663279 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.663493 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.663534 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.665663 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.665837 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.666221 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.666360 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.666462 4754 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"serving-cert" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.666568 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.667049 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.673795 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-pjzck"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.674533 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pjzck" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.674901 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8jjnh"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.675390 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8jjnh" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.676835 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.680524 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.677021 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-q6l9h"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.677137 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.677226 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.680388 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.680450 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.681087 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qnvhg"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.681396 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qnvhg" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.682543 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-q6l9h" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.687820 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.689351 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-pxqjg"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.688138 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.688265 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.688406 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.688511 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.688805 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.689531 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.689647 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.689739 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.689879 4754 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.689882 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.702018 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xnn4d"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.714476 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-pxqjg" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.723535 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-4xgft"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.724569 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-xnn4d" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.724595 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dzs5t"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.727676 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-2qc8v"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.727903 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-4xgft" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.756491 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dzs5t" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.757045 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.757135 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.757208 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.757414 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.757705 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.757748 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.757423 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.757861 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.757967 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.758414 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.757708 4754 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.759328 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.759893 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-nzhrk"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.760169 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-ql695"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.761008 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.762365 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.762615 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-bxhwf"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.762952 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.763054 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2qc8v" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.763092 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.763174 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.763344 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql695" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.763350 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.763450 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.763969 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.764080 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.764166 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.764504 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-bxhwf" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.764749 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-w7nk2"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.765205 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r827v"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.765314 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm5pm\" (UniqueName: \"kubernetes.io/projected/138601a3-19fd-44e5-b817-49b048fe3e88-kube-api-access-cm5pm\") pod \"downloads-7954f5f757-q6l9h\" (UID: \"138601a3-19fd-44e5-b817-49b048fe3e88\") " pod="openshift-console/downloads-7954f5f757-q6l9h" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.765330 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-w7nk2" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.765369 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fd7afa25-8cba-4be3-a6d7-1b30d7adf834-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-srhjf\" (UID: \"fd7afa25-8cba-4be3-a6d7-1b30d7adf834\") " pod="openshift-controller-manager/controller-manager-879f6c89f-srhjf" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.765569 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2c91790-e7dd-4391-a68b-f5a4a052ca72-serving-cert\") pod \"route-controller-manager-6576b87f9c-8jjnh\" (UID: \"b2c91790-e7dd-4391-a68b-f5a4a052ca72\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8jjnh" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.765619 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d43638d3-9f97-4bc9-a40a-57280a8ed643-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-zl4mz\" (UID: \"d43638d3-9f97-4bc9-a40a-57280a8ed643\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zl4mz" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.765667 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnbkj\" (UniqueName: \"kubernetes.io/projected/41bc6c99-ab80-4d88-841e-7472bf4ace8c-kube-api-access-dnbkj\") pod \"cluster-image-registry-operator-dc59b4c8b-qnvhg\" (UID: \"41bc6c99-ab80-4d88-841e-7472bf4ace8c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qnvhg" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.765698 4754 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d43638d3-9f97-4bc9-a40a-57280a8ed643-config\") pod \"openshift-apiserver-operator-796bbdcf4f-zl4mz\" (UID: \"d43638d3-9f97-4bc9-a40a-57280a8ed643\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zl4mz" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.765726 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eb6b845-fab0-4359-87bd-17a33f9e78ca-config\") pod \"authentication-operator-69f744f599-hz59l\" (UID: \"3eb6b845-fab0-4359-87bd-17a33f9e78ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hz59l" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.765747 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd7afa25-8cba-4be3-a6d7-1b30d7adf834-config\") pod \"controller-manager-879f6c89f-srhjf\" (UID: \"fd7afa25-8cba-4be3-a6d7-1b30d7adf834\") " pod="openshift-controller-manager/controller-manager-879f6c89f-srhjf" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.765810 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.765827 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/41bc6c99-ab80-4d88-841e-7472bf4ace8c-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-qnvhg\" (UID: \"41bc6c99-ab80-4d88-841e-7472bf4ace8c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qnvhg" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.765868 4754 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/3b6b3476-9191-4367-9437-ee9db002d523-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-99q7g\" (UID: \"3b6b3476-9191-4367-9437-ee9db002d523\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-99q7g" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.765963 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7wgc\" (UniqueName: \"kubernetes.io/projected/fd7afa25-8cba-4be3-a6d7-1b30d7adf834-kube-api-access-v7wgc\") pod \"controller-manager-879f6c89f-srhjf\" (UID: \"fd7afa25-8cba-4be3-a6d7-1b30d7adf834\") " pod="openshift-controller-manager/controller-manager-879f6c89f-srhjf" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.765995 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b2c91790-e7dd-4391-a68b-f5a4a052ca72-client-ca\") pod \"route-controller-manager-6576b87f9c-8jjnh\" (UID: \"b2c91790-e7dd-4391-a68b-f5a4a052ca72\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8jjnh" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.766144 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-4rsd7"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.766175 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f282a77-59b3-4b2c-8c62-a2526d2a77b5-serving-cert\") pod \"openshift-config-operator-7777fb866f-pjzck\" (UID: \"9f282a77-59b3-4b2c-8c62-a2526d2a77b5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pjzck" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.766348 4754 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s8tp\" (UniqueName: \"kubernetes.io/projected/9f282a77-59b3-4b2c-8c62-a2526d2a77b5-kube-api-access-5s8tp\") pod \"openshift-config-operator-7777fb866f-pjzck\" (UID: \"9f282a77-59b3-4b2c-8c62-a2526d2a77b5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pjzck" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.766441 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2c91790-e7dd-4391-a68b-f5a4a052ca72-config\") pod \"route-controller-manager-6576b87f9c-8jjnh\" (UID: \"b2c91790-e7dd-4391-a68b-f5a4a052ca72\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8jjnh" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.766522 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwpn4\" (UniqueName: \"kubernetes.io/projected/3b6b3476-9191-4367-9437-ee9db002d523-kube-api-access-nwpn4\") pod \"cluster-samples-operator-665b6dd947-99q7g\" (UID: \"3b6b3476-9191-4367-9437-ee9db002d523\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-99q7g" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.766591 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7lhn\" (UniqueName: \"kubernetes.io/projected/3eb6b845-fab0-4359-87bd-17a33f9e78ca-kube-api-access-q7lhn\") pod \"authentication-operator-69f744f599-hz59l\" (UID: \"3eb6b845-fab0-4359-87bd-17a33f9e78ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hz59l" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.766255 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r827v" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.766710 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3eb6b845-fab0-4359-87bd-17a33f9e78ca-serving-cert\") pod \"authentication-operator-69f744f599-hz59l\" (UID: \"3eb6b845-fab0-4359-87bd-17a33f9e78ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hz59l" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.766800 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/41bc6c99-ab80-4d88-841e-7472bf4ace8c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-qnvhg\" (UID: \"41bc6c99-ab80-4d88-841e-7472bf4ace8c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qnvhg" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.766877 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9f282a77-59b3-4b2c-8c62-a2526d2a77b5-available-featuregates\") pod \"openshift-config-operator-7777fb866f-pjzck\" (UID: \"9f282a77-59b3-4b2c-8c62-a2526d2a77b5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pjzck" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.766937 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-4rsd7" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.766890 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-nmhmx"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.766944 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxn7v\" (UniqueName: \"kubernetes.io/projected/d43638d3-9f97-4bc9-a40a-57280a8ed643-kube-api-access-sxn7v\") pod \"openshift-apiserver-operator-796bbdcf4f-zl4mz\" (UID: \"d43638d3-9f97-4bc9-a40a-57280a8ed643\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zl4mz" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.766999 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3eb6b845-fab0-4359-87bd-17a33f9e78ca-service-ca-bundle\") pod \"authentication-operator-69f744f599-hz59l\" (UID: \"3eb6b845-fab0-4359-87bd-17a33f9e78ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hz59l" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.767030 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh9j4\" (UniqueName: \"kubernetes.io/projected/b2c91790-e7dd-4391-a68b-f5a4a052ca72-kube-api-access-mh9j4\") pod \"route-controller-manager-6576b87f9c-8jjnh\" (UID: \"b2c91790-e7dd-4391-a68b-f5a4a052ca72\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8jjnh" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.767056 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3eb6b845-fab0-4359-87bd-17a33f9e78ca-trusted-ca-bundle\") pod 
\"authentication-operator-69f744f599-hz59l\" (UID: \"3eb6b845-fab0-4359-87bd-17a33f9e78ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hz59l" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.767077 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/41bc6c99-ab80-4d88-841e-7472bf4ace8c-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-qnvhg\" (UID: \"41bc6c99-ab80-4d88-841e-7472bf4ace8c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qnvhg" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.767100 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fd7afa25-8cba-4be3-a6d7-1b30d7adf834-client-ca\") pod \"controller-manager-879f6c89f-srhjf\" (UID: \"fd7afa25-8cba-4be3-a6d7-1b30d7adf834\") " pod="openshift-controller-manager/controller-manager-879f6c89f-srhjf" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.767122 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd7afa25-8cba-4be3-a6d7-1b30d7adf834-serving-cert\") pod \"controller-manager-879f6c89f-srhjf\" (UID: \"fd7afa25-8cba-4be3-a6d7-1b30d7adf834\") " pod="openshift-controller-manager/controller-manager-879f6c89f-srhjf" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.766484 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.766529 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.767559 4754 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"machine-api-operator-images" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.767824 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-q7tmr"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.768175 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-vnrxg"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.767835 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.768251 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.768488 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nmhmx" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.768688 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fsmbh"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.769011 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-dx842"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.769079 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-vnrxg" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.769230 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fsmbh" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.769553 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-s26xc"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.770008 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s26xc" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.770168 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-dx842" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.771342 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-v7wtk"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.771755 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-scx4b"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.772076 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-scx4b" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.772197 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-v7wtk" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.772306 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460720-k78dc"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.772840 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460720-k78dc" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.775341 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-mhcqd"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.776164 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mhcqd" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.777963 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.778197 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.778347 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5zkp6"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.802869 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lcnw9"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.803211 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.803266 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.803397 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.803637 4754 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.803712 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lcnw9" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.803948 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5zkp6" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.803996 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.804499 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mzj9s"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.804735 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.804869 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.805587 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.805904 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.805941 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.806061 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 05 20:05:49 
crc kubenswrapper[4754]: I0105 20:05:49.806280 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.806335 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.806389 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.807890 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.808431 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.817450 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.817709 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.817788 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.817853 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.817906 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.817909 4754 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.820136 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.820326 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.821997 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.822120 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.823175 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.823746 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.824378 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.824714 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.824794 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.826180 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sckmz"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 
20:05:49.827235 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-57nsw"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.827388 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.827498 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.827630 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-5gjwq"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.827651 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.828156 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-57nsw" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.828483 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mzj9s" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.828533 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sckmz" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.830411 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zl4mz"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.830534 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-pjzck"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.830911 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-hz59l"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.830927 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8jjnh"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.830992 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5gjwq" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.833262 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.834277 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-srhjf"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.835159 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.835406 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-csf7z"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.836286 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-csf7z" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.837392 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.837625 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.840702 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.845894 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-v7gsb"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.846904 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-v7gsb" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.848426 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-w7nk2"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.854276 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-7rbg4"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.876128 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-nmhmx"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.876189 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-rqlrs"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.876697 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-7rbg4" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.877950 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-rqlrs" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.878854 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d43638d3-9f97-4bc9-a40a-57280a8ed643-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-zl4mz\" (UID: \"d43638d3-9f97-4bc9-a40a-57280a8ed643\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zl4mz" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.878892 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d70b68b-24e2-45e2-91ca-8e1b88805b1a-serving-cert\") pod \"etcd-operator-b45778765-4rsd7\" (UID: \"5d70b68b-24e2-45e2-91ca-8e1b88805b1a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4rsd7" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.878924 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnbkj\" (UniqueName: \"kubernetes.io/projected/41bc6c99-ab80-4d88-841e-7472bf4ace8c-kube-api-access-dnbkj\") pod \"cluster-image-registry-operator-dc59b4c8b-qnvhg\" (UID: \"41bc6c99-ab80-4d88-841e-7472bf4ace8c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qnvhg" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.878950 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d43638d3-9f97-4bc9-a40a-57280a8ed643-config\") pod \"openshift-apiserver-operator-796bbdcf4f-zl4mz\" (UID: \"d43638d3-9f97-4bc9-a40a-57280a8ed643\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zl4mz" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.878971 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e06065f7-b8b3-4c3e-820c-f0051f3a6f6d-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xnn4d\" (UID: \"e06065f7-b8b3-4c3e-820c-f0051f3a6f6d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xnn4d" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.878995 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7bxg\" (UniqueName: \"kubernetes.io/projected/2dc37596-5879-4c76-be1b-5c95376cf1f2-kube-api-access-p7bxg\") pod \"dns-operator-744455d44c-w7nk2\" (UID: \"2dc37596-5879-4c76-be1b-5c95376cf1f2\") " pod="openshift-dns-operator/dns-operator-744455d44c-w7nk2" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.879020 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eb6b845-fab0-4359-87bd-17a33f9e78ca-config\") pod \"authentication-operator-69f744f599-hz59l\" (UID: \"3eb6b845-fab0-4359-87bd-17a33f9e78ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hz59l" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.879039 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2dc37596-5879-4c76-be1b-5c95376cf1f2-metrics-tls\") pod \"dns-operator-744455d44c-w7nk2\" (UID: \"2dc37596-5879-4c76-be1b-5c95376cf1f2\") " pod="openshift-dns-operator/dns-operator-744455d44c-w7nk2" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.879074 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/fd7afa25-8cba-4be3-a6d7-1b30d7adf834-config\") pod \"controller-manager-879f6c89f-srhjf\" (UID: \"fd7afa25-8cba-4be3-a6d7-1b30d7adf834\") " pod="openshift-controller-manager/controller-manager-879f6c89f-srhjf" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.879097 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51e67cd8-254d-4921-9140-64a80c3d3690-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-fsmbh\" (UID: \"51e67cd8-254d-4921-9140-64a80c3d3690\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fsmbh" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.879121 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51e67cd8-254d-4921-9140-64a80c3d3690-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-fsmbh\" (UID: \"51e67cd8-254d-4921-9140-64a80c3d3690\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fsmbh" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.879140 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swl46\" (UniqueName: \"kubernetes.io/projected/51e67cd8-254d-4921-9140-64a80c3d3690-kube-api-access-swl46\") pod \"kube-storage-version-migrator-operator-b67b599dd-fsmbh\" (UID: \"51e67cd8-254d-4921-9140-64a80c3d3690\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fsmbh" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.879164 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-nzhrk\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.879189 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/41bc6c99-ab80-4d88-841e-7472bf4ace8c-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-qnvhg\" (UID: \"41bc6c99-ab80-4d88-841e-7472bf4ace8c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qnvhg" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.879211 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e06065f7-b8b3-4c3e-820c-f0051f3a6f6d-config\") pod \"machine-api-operator-5694c8668f-xnn4d\" (UID: \"e06065f7-b8b3-4c3e-820c-f0051f3a6f6d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xnn4d" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.879233 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0881b9ae-bc95-42f6-8dd3-73a86569dde7-proxy-tls\") pod \"machine-config-controller-84d6567774-mhcqd\" (UID: \"0881b9ae-bc95-42f6-8dd3-73a86569dde7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mhcqd" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.879256 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsc6v\" (UniqueName: \"kubernetes.io/projected/0881b9ae-bc95-42f6-8dd3-73a86569dde7-kube-api-access-dsc6v\") pod \"machine-config-controller-84d6567774-mhcqd\" (UID: \"0881b9ae-bc95-42f6-8dd3-73a86569dde7\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mhcqd" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.879277 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/701a029a-d767-4681-8bf3-dffdc73e93f5-audit-dir\") pod \"oauth-openshift-558db77b4-nzhrk\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.879357 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-mhcqd"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.880318 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d43638d3-9f97-4bc9-a40a-57280a8ed643-config\") pod \"openshift-apiserver-operator-796bbdcf4f-zl4mz\" (UID: \"d43638d3-9f97-4bc9-a40a-57280a8ed643\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zl4mz" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.880493 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/3b6b3476-9191-4367-9437-ee9db002d523-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-99q7g\" (UID: \"3b6b3476-9191-4367-9437-ee9db002d523\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-99q7g" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.880542 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzjzh\" (UniqueName: \"kubernetes.io/projected/701a029a-d767-4681-8bf3-dffdc73e93f5-kube-api-access-jzjzh\") pod \"oauth-openshift-558db77b4-nzhrk\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.880587 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5d70b68b-24e2-45e2-91ca-8e1b88805b1a-etcd-client\") pod \"etcd-operator-b45778765-4rsd7\" (UID: \"5d70b68b-24e2-45e2-91ca-8e1b88805b1a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4rsd7" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.880616 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhw22\" (UniqueName: \"kubernetes.io/projected/24911cb5-f93b-4852-adbc-d06821f34d47-kube-api-access-bhw22\") pod \"service-ca-operator-777779d784-dx842\" (UID: \"24911cb5-f93b-4852-adbc-d06821f34d47\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-dx842" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.880641 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/743b4687-81ef-4cd4-9910-dc7ba348e457-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-v7wtk\" (UID: \"743b4687-81ef-4cd4-9910-dc7ba348e457\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-v7wtk" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.880664 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/296f71e4-2a83-467e-b40a-87b1e40330b9-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-ql695\" (UID: \"296f71e4-2a83-467e-b40a-87b1e40330b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql695" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.880689 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/296f71e4-2a83-467e-b40a-87b1e40330b9-audit-policies\") pod \"apiserver-7bbb656c7d-ql695\" (UID: \"296f71e4-2a83-467e-b40a-87b1e40330b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql695" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.880720 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7wgc\" (UniqueName: \"kubernetes.io/projected/fd7afa25-8cba-4be3-a6d7-1b30d7adf834-kube-api-access-v7wgc\") pod \"controller-manager-879f6c89f-srhjf\" (UID: \"fd7afa25-8cba-4be3-a6d7-1b30d7adf834\") " pod="openshift-controller-manager/controller-manager-879f6c89f-srhjf" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.880744 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b2c91790-e7dd-4391-a68b-f5a4a052ca72-client-ca\") pod \"route-controller-manager-6576b87f9c-8jjnh\" (UID: \"b2c91790-e7dd-4391-a68b-f5a4a052ca72\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8jjnh" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.880780 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-nzhrk\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.880813 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f282a77-59b3-4b2c-8c62-a2526d2a77b5-serving-cert\") pod \"openshift-config-operator-7777fb866f-pjzck\" (UID: \"9f282a77-59b3-4b2c-8c62-a2526d2a77b5\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-pjzck" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.880836 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-nzhrk\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.880860 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/296f71e4-2a83-467e-b40a-87b1e40330b9-etcd-client\") pod \"apiserver-7bbb656c7d-ql695\" (UID: \"296f71e4-2a83-467e-b40a-87b1e40330b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql695" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.880885 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qfrt\" (UniqueName: \"kubernetes.io/projected/e06065f7-b8b3-4c3e-820c-f0051f3a6f6d-kube-api-access-4qfrt\") pod \"machine-api-operator-5694c8668f-xnn4d\" (UID: \"e06065f7-b8b3-4c3e-820c-f0051f3a6f6d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xnn4d" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.880909 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-nzhrk\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.880933 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-5s8tp\" (UniqueName: \"kubernetes.io/projected/9f282a77-59b3-4b2c-8c62-a2526d2a77b5-kube-api-access-5s8tp\") pod \"openshift-config-operator-7777fb866f-pjzck\" (UID: \"9f282a77-59b3-4b2c-8c62-a2526d2a77b5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pjzck" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.880929 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-s26xc"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.880955 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d2a2026b-4828-435f-ba14-941e12d3ea36-machine-approver-tls\") pod \"machine-approver-56656f9798-2qc8v\" (UID: \"d2a2026b-4828-435f-ba14-941e12d3ea36\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2qc8v" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.881079 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd6ad6fb-b69a-465f-8cb6-797f1c097dcd-config\") pod \"kube-apiserver-operator-766d6c64bb-r827v\" (UID: \"cd6ad6fb-b69a-465f-8cb6-797f1c097dcd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r827v" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.881128 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2c91790-e7dd-4391-a68b-f5a4a052ca72-config\") pod \"route-controller-manager-6576b87f9c-8jjnh\" (UID: \"b2c91790-e7dd-4391-a68b-f5a4a052ca72\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8jjnh" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.881164 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e4b1718-5732-4498-b355-25832e158871-config-volume\") pod \"collect-profiles-29460720-k78dc\" (UID: \"7e4b1718-5732-4498-b355-25832e158871\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460720-k78dc" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.881197 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d2a2026b-4828-435f-ba14-941e12d3ea36-auth-proxy-config\") pod \"machine-approver-56656f9798-2qc8v\" (UID: \"d2a2026b-4828-435f-ba14-941e12d3ea36\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2qc8v" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.881349 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-nzhrk\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.881355 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd7afa25-8cba-4be3-a6d7-1b30d7adf834-config\") pod \"controller-manager-879f6c89f-srhjf\" (UID: \"fd7afa25-8cba-4be3-a6d7-1b30d7adf834\") " pod="openshift-controller-manager/controller-manager-879f6c89f-srhjf" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.881400 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9vc6\" (UniqueName: \"kubernetes.io/projected/5d70b68b-24e2-45e2-91ca-8e1b88805b1a-kube-api-access-g9vc6\") pod \"etcd-operator-b45778765-4rsd7\" (UID: \"5d70b68b-24e2-45e2-91ca-8e1b88805b1a\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-4rsd7" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.881522 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e06065f7-b8b3-4c3e-820c-f0051f3a6f6d-images\") pod \"machine-api-operator-5694c8668f-xnn4d\" (UID: \"e06065f7-b8b3-4c3e-820c-f0051f3a6f6d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xnn4d" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.881546 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e4b1718-5732-4498-b355-25832e158871-secret-volume\") pod \"collect-profiles-29460720-k78dc\" (UID: \"7e4b1718-5732-4498-b355-25832e158871\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460720-k78dc" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.881578 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwpn4\" (UniqueName: \"kubernetes.io/projected/3b6b3476-9191-4367-9437-ee9db002d523-kube-api-access-nwpn4\") pod \"cluster-samples-operator-665b6dd947-99q7g\" (UID: \"3b6b3476-9191-4367-9437-ee9db002d523\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-99q7g" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.881600 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/743b4687-81ef-4cd4-9910-dc7ba348e457-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-v7wtk\" (UID: \"743b4687-81ef-4cd4-9910-dc7ba348e457\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-v7wtk" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.881621 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-zmjq5\" (UniqueName: \"kubernetes.io/projected/d2a2026b-4828-435f-ba14-941e12d3ea36-kube-api-access-zmjq5\") pod \"machine-approver-56656f9798-2qc8v\" (UID: \"d2a2026b-4828-435f-ba14-941e12d3ea36\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2qc8v" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.881647 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/701a029a-d767-4681-8bf3-dffdc73e93f5-audit-policies\") pod \"oauth-openshift-558db77b4-nzhrk\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.881666 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40317dbe-a59b-4fc6-938d-91a19217745d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-dzs5t\" (UID: \"40317dbe-a59b-4fc6-938d-91a19217745d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dzs5t" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.881698 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7lhn\" (UniqueName: \"kubernetes.io/projected/3eb6b845-fab0-4359-87bd-17a33f9e78ca-kube-api-access-q7lhn\") pod \"authentication-operator-69f744f599-hz59l\" (UID: \"3eb6b845-fab0-4359-87bd-17a33f9e78ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hz59l" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.881722 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/296f71e4-2a83-467e-b40a-87b1e40330b9-encryption-config\") pod \"apiserver-7bbb656c7d-ql695\" (UID: 
\"296f71e4-2a83-467e-b40a-87b1e40330b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql695" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.881753 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24911cb5-f93b-4852-adbc-d06821f34d47-config\") pod \"service-ca-operator-777779d784-dx842\" (UID: \"24911cb5-f93b-4852-adbc-d06821f34d47\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-dx842" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.881778 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hlpn\" (UniqueName: \"kubernetes.io/projected/1968df11-e45d-47a0-a3bd-5dad31d14c8c-kube-api-access-8hlpn\") pod \"console-operator-58897d9998-bxhwf\" (UID: \"1968df11-e45d-47a0-a3bd-5dad31d14c8c\") " pod="openshift-console-operator/console-operator-58897d9998-bxhwf" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.881812 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3eb6b845-fab0-4359-87bd-17a33f9e78ca-serving-cert\") pod \"authentication-operator-69f744f599-hz59l\" (UID: \"3eb6b845-fab0-4359-87bd-17a33f9e78ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hz59l" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.881835 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/41bc6c99-ab80-4d88-841e-7472bf4ace8c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-qnvhg\" (UID: \"41bc6c99-ab80-4d88-841e-7472bf4ace8c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qnvhg" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.881863 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2a2026b-4828-435f-ba14-941e12d3ea36-config\") pod \"machine-approver-56656f9798-2qc8v\" (UID: \"d2a2026b-4828-435f-ba14-941e12d3ea36\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2qc8v" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.881887 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-nzhrk\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.881921 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9f282a77-59b3-4b2c-8c62-a2526d2a77b5-available-featuregates\") pod \"openshift-config-operator-7777fb866f-pjzck\" (UID: \"9f282a77-59b3-4b2c-8c62-a2526d2a77b5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pjzck" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.881948 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxn7v\" (UniqueName: \"kubernetes.io/projected/d43638d3-9f97-4bc9-a40a-57280a8ed643-kube-api-access-sxn7v\") pod \"openshift-apiserver-operator-796bbdcf4f-zl4mz\" (UID: \"d43638d3-9f97-4bc9-a40a-57280a8ed643\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zl4mz" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.881977 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-nzhrk\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.882001 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1968df11-e45d-47a0-a3bd-5dad31d14c8c-trusted-ca\") pod \"console-operator-58897d9998-bxhwf\" (UID: \"1968df11-e45d-47a0-a3bd-5dad31d14c8c\") " pod="openshift-console-operator/console-operator-58897d9998-bxhwf" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.882024 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3eb6b845-fab0-4359-87bd-17a33f9e78ca-service-ca-bundle\") pod \"authentication-operator-69f744f599-hz59l\" (UID: \"3eb6b845-fab0-4359-87bd-17a33f9e78ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hz59l" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.882047 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/296f71e4-2a83-467e-b40a-87b1e40330b9-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-ql695\" (UID: \"296f71e4-2a83-467e-b40a-87b1e40330b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql695" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.882074 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-nzhrk\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.882097 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/296f71e4-2a83-467e-b40a-87b1e40330b9-audit-dir\") pod \"apiserver-7bbb656c7d-ql695\" (UID: \"296f71e4-2a83-467e-b40a-87b1e40330b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql695" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.882122 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh9j4\" (UniqueName: \"kubernetes.io/projected/b2c91790-e7dd-4391-a68b-f5a4a052ca72-kube-api-access-mh9j4\") pod \"route-controller-manager-6576b87f9c-8jjnh\" (UID: \"b2c91790-e7dd-4391-a68b-f5a4a052ca72\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8jjnh" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.882144 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd6ad6fb-b69a-465f-8cb6-797f1c097dcd-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-r827v\" (UID: \"cd6ad6fb-b69a-465f-8cb6-797f1c097dcd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r827v" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.882168 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5d70b68b-24e2-45e2-91ca-8e1b88805b1a-etcd-service-ca\") pod \"etcd-operator-b45778765-4rsd7\" (UID: \"5d70b68b-24e2-45e2-91ca-8e1b88805b1a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4rsd7" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.882192 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcnr4\" 
(UniqueName: \"kubernetes.io/projected/7e4b1718-5732-4498-b355-25832e158871-kube-api-access-pcnr4\") pod \"collect-profiles-29460720-k78dc\" (UID: \"7e4b1718-5732-4498-b355-25832e158871\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460720-k78dc" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.882215 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-nzhrk\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.882237 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1968df11-e45d-47a0-a3bd-5dad31d14c8c-config\") pod \"console-operator-58897d9998-bxhwf\" (UID: \"1968df11-e45d-47a0-a3bd-5dad31d14c8c\") " pod="openshift-console-operator/console-operator-58897d9998-bxhwf" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.882331 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1968df11-e45d-47a0-a3bd-5dad31d14c8c-serving-cert\") pod \"console-operator-58897d9998-bxhwf\" (UID: \"1968df11-e45d-47a0-a3bd-5dad31d14c8c\") " pod="openshift-console-operator/console-operator-58897d9998-bxhwf" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.882357 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5d70b68b-24e2-45e2-91ca-8e1b88805b1a-etcd-ca\") pod \"etcd-operator-b45778765-4rsd7\" (UID: \"5d70b68b-24e2-45e2-91ca-8e1b88805b1a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4rsd7" Jan 05 20:05:49 
crc kubenswrapper[4754]: I0105 20:05:49.882387 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/296f71e4-2a83-467e-b40a-87b1e40330b9-serving-cert\") pod \"apiserver-7bbb656c7d-ql695\" (UID: \"296f71e4-2a83-467e-b40a-87b1e40330b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql695" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.882408 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3eb6b845-fab0-4359-87bd-17a33f9e78ca-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-hz59l\" (UID: \"3eb6b845-fab0-4359-87bd-17a33f9e78ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hz59l" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.882433 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/41bc6c99-ab80-4d88-841e-7472bf4ace8c-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-qnvhg\" (UID: \"41bc6c99-ab80-4d88-841e-7472bf4ace8c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qnvhg" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.882453 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-nzhrk\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.882476 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40317dbe-a59b-4fc6-938d-91a19217745d-config\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-dzs5t\" (UID: \"40317dbe-a59b-4fc6-938d-91a19217745d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dzs5t" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.882497 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkw6m\" (UniqueName: \"kubernetes.io/projected/296f71e4-2a83-467e-b40a-87b1e40330b9-kube-api-access-zkw6m\") pod \"apiserver-7bbb656c7d-ql695\" (UID: \"296f71e4-2a83-467e-b40a-87b1e40330b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql695" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.882514 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd6ad6fb-b69a-465f-8cb6-797f1c097dcd-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-r827v\" (UID: \"cd6ad6fb-b69a-465f-8cb6-797f1c097dcd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r827v" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.882543 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fd7afa25-8cba-4be3-a6d7-1b30d7adf834-client-ca\") pod \"controller-manager-879f6c89f-srhjf\" (UID: \"fd7afa25-8cba-4be3-a6d7-1b30d7adf834\") " pod="openshift-controller-manager/controller-manager-879f6c89f-srhjf" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.882572 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd7afa25-8cba-4be3-a6d7-1b30d7adf834-serving-cert\") pod \"controller-manager-879f6c89f-srhjf\" (UID: \"fd7afa25-8cba-4be3-a6d7-1b30d7adf834\") " pod="openshift-controller-manager/controller-manager-879f6c89f-srhjf" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.882604 4754 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0881b9ae-bc95-42f6-8dd3-73a86569dde7-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-mhcqd\" (UID: \"0881b9ae-bc95-42f6-8dd3-73a86569dde7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mhcqd" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.882624 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-nzhrk\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.882650 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm5pm\" (UniqueName: \"kubernetes.io/projected/138601a3-19fd-44e5-b817-49b048fe3e88-kube-api-access-cm5pm\") pod \"downloads-7954f5f757-q6l9h\" (UID: \"138601a3-19fd-44e5-b817-49b048fe3e88\") " pod="openshift-console/downloads-7954f5f757-q6l9h" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.882675 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fd7afa25-8cba-4be3-a6d7-1b30d7adf834-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-srhjf\" (UID: \"fd7afa25-8cba-4be3-a6d7-1b30d7adf834\") " pod="openshift-controller-manager/controller-manager-879f6c89f-srhjf" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.882698 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/743b4687-81ef-4cd4-9910-dc7ba348e457-serving-cert\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-v7wtk\" (UID: \"743b4687-81ef-4cd4-9910-dc7ba348e457\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-v7wtk" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.882720 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24911cb5-f93b-4852-adbc-d06821f34d47-serving-cert\") pod \"service-ca-operator-777779d784-dx842\" (UID: \"24911cb5-f93b-4852-adbc-d06821f34d47\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-dx842" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.882783 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2c91790-e7dd-4391-a68b-f5a4a052ca72-serving-cert\") pod \"route-controller-manager-6576b87f9c-8jjnh\" (UID: \"b2c91790-e7dd-4391-a68b-f5a4a052ca72\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8jjnh" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.882807 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5q48\" (UniqueName: \"kubernetes.io/projected/40317dbe-a59b-4fc6-938d-91a19217745d-kube-api-access-q5q48\") pod \"openshift-controller-manager-operator-756b6f6bc6-dzs5t\" (UID: \"40317dbe-a59b-4fc6-938d-91a19217745d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dzs5t" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.882828 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d70b68b-24e2-45e2-91ca-8e1b88805b1a-config\") pod \"etcd-operator-b45778765-4rsd7\" (UID: \"5d70b68b-24e2-45e2-91ca-8e1b88805b1a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4rsd7" Jan 05 20:05:49 crc 
kubenswrapper[4754]: I0105 20:05:49.884483 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dzs5t"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.886063 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.886067 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.887627 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9f282a77-59b3-4b2c-8c62-a2526d2a77b5-available-featuregates\") pod \"openshift-config-operator-7777fb866f-pjzck\" (UID: \"9f282a77-59b3-4b2c-8c62-a2526d2a77b5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pjzck" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.889394 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b2c91790-e7dd-4391-a68b-f5a4a052ca72-client-ca\") pod \"route-controller-manager-6576b87f9c-8jjnh\" (UID: \"b2c91790-e7dd-4391-a68b-f5a4a052ca72\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8jjnh" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.889118 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eb6b845-fab0-4359-87bd-17a33f9e78ca-config\") pod \"authentication-operator-69f744f599-hz59l\" (UID: \"3eb6b845-fab0-4359-87bd-17a33f9e78ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hz59l" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.889640 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/3eb6b845-fab0-4359-87bd-17a33f9e78ca-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-hz59l\" (UID: \"3eb6b845-fab0-4359-87bd-17a33f9e78ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hz59l" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.889983 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3eb6b845-fab0-4359-87bd-17a33f9e78ca-service-ca-bundle\") pod \"authentication-operator-69f744f599-hz59l\" (UID: \"3eb6b845-fab0-4359-87bd-17a33f9e78ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hz59l" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.890528 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2c91790-e7dd-4391-a68b-f5a4a052ca72-config\") pod \"route-controller-manager-6576b87f9c-8jjnh\" (UID: \"b2c91790-e7dd-4391-a68b-f5a4a052ca72\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8jjnh" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.891734 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fd7afa25-8cba-4be3-a6d7-1b30d7adf834-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-srhjf\" (UID: \"fd7afa25-8cba-4be3-a6d7-1b30d7adf834\") " pod="openshift-controller-manager/controller-manager-879f6c89f-srhjf" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.891779 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f282a77-59b3-4b2c-8c62-a2526d2a77b5-serving-cert\") pod \"openshift-config-operator-7777fb866f-pjzck\" (UID: \"9f282a77-59b3-4b2c-8c62-a2526d2a77b5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pjzck" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 
20:05:49.892517 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2c91790-e7dd-4391-a68b-f5a4a052ca72-serving-cert\") pod \"route-controller-manager-6576b87f9c-8jjnh\" (UID: \"b2c91790-e7dd-4391-a68b-f5a4a052ca72\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8jjnh" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.892896 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fd7afa25-8cba-4be3-a6d7-1b30d7adf834-client-ca\") pod \"controller-manager-879f6c89f-srhjf\" (UID: \"fd7afa25-8cba-4be3-a6d7-1b30d7adf834\") " pod="openshift-controller-manager/controller-manager-879f6c89f-srhjf" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.893027 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/41bc6c99-ab80-4d88-841e-7472bf4ace8c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-qnvhg\" (UID: \"41bc6c99-ab80-4d88-841e-7472bf4ace8c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qnvhg" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.893185 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d43638d3-9f97-4bc9-a40a-57280a8ed643-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-zl4mz\" (UID: \"d43638d3-9f97-4bc9-a40a-57280a8ed643\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zl4mz" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.893912 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/3b6b3476-9191-4367-9437-ee9db002d523-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-99q7g\" (UID: 
\"3b6b3476-9191-4367-9437-ee9db002d523\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-99q7g" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.894473 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/41bc6c99-ab80-4d88-841e-7472bf4ace8c-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-qnvhg\" (UID: \"41bc6c99-ab80-4d88-841e-7472bf4ace8c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qnvhg" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.895700 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3eb6b845-fab0-4359-87bd-17a33f9e78ca-serving-cert\") pod \"authentication-operator-69f744f599-hz59l\" (UID: \"3eb6b845-fab0-4359-87bd-17a33f9e78ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hz59l" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.896391 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd7afa25-8cba-4be3-a6d7-1b30d7adf834-serving-cert\") pod \"controller-manager-879f6c89f-srhjf\" (UID: \"fd7afa25-8cba-4be3-a6d7-1b30d7adf834\") " pod="openshift-controller-manager/controller-manager-879f6c89f-srhjf" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.897541 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.899392 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-nzhrk"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.901799 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-9xwbp"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.904007 4754 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-ql695"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.904035 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8j8j2" event={"ID":"8324ea31-da71-4a3b-8085-8a771c9eda9c","Type":"ContainerStarted","Data":"00bb0ad45a43779bde1d038e167d27fa0ddd619bad5627904934976630bd4a71"} Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.904148 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-9xwbp" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.904454 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-m7czd"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.905235 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-q7tmr"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.905281 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-m7czd" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.908105 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-v7gsb"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.908138 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-99q7g"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.908150 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-bxhwf"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.910221 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-q6l9h"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.910536 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-pxqjg"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.911540 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460720-k78dc"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.912480 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-dx842"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.913496 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-4xgft"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.914465 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-5vs9t"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.915453 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-5vs9t" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.915448 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-4rsd7"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.916396 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5zkp6"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.917168 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.917301 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-njtd4"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.918231 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r827v"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.918325 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-njtd4" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.919201 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-v7wtk"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.920139 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-9xwbp"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.921155 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-5gjwq"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.922087 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xnn4d"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.923049 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lcnw9"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.923962 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-scx4b"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.924925 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qnvhg"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.925845 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-7rbg4"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.926792 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fsmbh"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.927717 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sckmz"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.928674 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-57nsw"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.929622 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-m7czd"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.930666 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mzj9s"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.931671 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-njtd4"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.932614 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-csf7z"] Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.937931 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.957618 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.978242 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.983597 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/743b4687-81ef-4cd4-9910-dc7ba348e457-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-v7wtk\" (UID: \"743b4687-81ef-4cd4-9910-dc7ba348e457\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-v7wtk" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.983626 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/296f71e4-2a83-467e-b40a-87b1e40330b9-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-ql695\" (UID: \"296f71e4-2a83-467e-b40a-87b1e40330b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql695" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.983648 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/296f71e4-2a83-467e-b40a-87b1e40330b9-audit-policies\") pod \"apiserver-7bbb656c7d-ql695\" (UID: \"296f71e4-2a83-467e-b40a-87b1e40330b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql695" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.984179 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/296f71e4-2a83-467e-b40a-87b1e40330b9-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-ql695\" (UID: \"296f71e4-2a83-467e-b40a-87b1e40330b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql695" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.984284 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/296f71e4-2a83-467e-b40a-87b1e40330b9-audit-policies\") pod \"apiserver-7bbb656c7d-ql695\" (UID: \"296f71e4-2a83-467e-b40a-87b1e40330b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql695" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.984593 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-system-trusted-ca-bundle\") pod 
\"oauth-openshift-558db77b4-nzhrk\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.984685 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-nzhrk\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.984710 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/296f71e4-2a83-467e-b40a-87b1e40330b9-etcd-client\") pod \"apiserver-7bbb656c7d-ql695\" (UID: \"296f71e4-2a83-467e-b40a-87b1e40330b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql695" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.984742 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qfrt\" (UniqueName: \"kubernetes.io/projected/e06065f7-b8b3-4c3e-820c-f0051f3a6f6d-kube-api-access-4qfrt\") pod \"machine-api-operator-5694c8668f-xnn4d\" (UID: \"e06065f7-b8b3-4c3e-820c-f0051f3a6f6d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xnn4d" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.984762 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-nzhrk\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.984782 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd6ad6fb-b69a-465f-8cb6-797f1c097dcd-config\") pod \"kube-apiserver-operator-766d6c64bb-r827v\" (UID: \"cd6ad6fb-b69a-465f-8cb6-797f1c097dcd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r827v" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.984812 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d2a2026b-4828-435f-ba14-941e12d3ea36-machine-approver-tls\") pod \"machine-approver-56656f9798-2qc8v\" (UID: \"d2a2026b-4828-435f-ba14-941e12d3ea36\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2qc8v" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.984835 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e4b1718-5732-4498-b355-25832e158871-config-volume\") pod \"collect-profiles-29460720-k78dc\" (UID: \"7e4b1718-5732-4498-b355-25832e158871\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460720-k78dc" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.984854 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d2a2026b-4828-435f-ba14-941e12d3ea36-auth-proxy-config\") pod \"machine-approver-56656f9798-2qc8v\" (UID: \"d2a2026b-4828-435f-ba14-941e12d3ea36\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2qc8v" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.984873 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-nzhrk\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.984895 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9vc6\" (UniqueName: \"kubernetes.io/projected/5d70b68b-24e2-45e2-91ca-8e1b88805b1a-kube-api-access-g9vc6\") pod \"etcd-operator-b45778765-4rsd7\" (UID: \"5d70b68b-24e2-45e2-91ca-8e1b88805b1a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4rsd7" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.984921 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e06065f7-b8b3-4c3e-820c-f0051f3a6f6d-images\") pod \"machine-api-operator-5694c8668f-xnn4d\" (UID: \"e06065f7-b8b3-4c3e-820c-f0051f3a6f6d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xnn4d" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.984937 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e4b1718-5732-4498-b355-25832e158871-secret-volume\") pod \"collect-profiles-29460720-k78dc\" (UID: \"7e4b1718-5732-4498-b355-25832e158871\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460720-k78dc" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.984961 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmjq5\" (UniqueName: \"kubernetes.io/projected/d2a2026b-4828-435f-ba14-941e12d3ea36-kube-api-access-zmjq5\") pod \"machine-approver-56656f9798-2qc8v\" (UID: \"d2a2026b-4828-435f-ba14-941e12d3ea36\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2qc8v" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.984978 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/701a029a-d767-4681-8bf3-dffdc73e93f5-audit-policies\") 
pod \"oauth-openshift-558db77b4-nzhrk\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.985002 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40317dbe-a59b-4fc6-938d-91a19217745d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-dzs5t\" (UID: \"40317dbe-a59b-4fc6-938d-91a19217745d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dzs5t" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.985030 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/743b4687-81ef-4cd4-9910-dc7ba348e457-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-v7wtk\" (UID: \"743b4687-81ef-4cd4-9910-dc7ba348e457\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-v7wtk" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.985058 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/296f71e4-2a83-467e-b40a-87b1e40330b9-encryption-config\") pod \"apiserver-7bbb656c7d-ql695\" (UID: \"296f71e4-2a83-467e-b40a-87b1e40330b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql695" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.985083 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24911cb5-f93b-4852-adbc-d06821f34d47-config\") pod \"service-ca-operator-777779d784-dx842\" (UID: \"24911cb5-f93b-4852-adbc-d06821f34d47\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-dx842" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.985103 4754 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-8hlpn\" (UniqueName: \"kubernetes.io/projected/1968df11-e45d-47a0-a3bd-5dad31d14c8c-kube-api-access-8hlpn\") pod \"console-operator-58897d9998-bxhwf\" (UID: \"1968df11-e45d-47a0-a3bd-5dad31d14c8c\") " pod="openshift-console-operator/console-operator-58897d9998-bxhwf" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.985130 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2a2026b-4828-435f-ba14-941e12d3ea36-config\") pod \"machine-approver-56656f9798-2qc8v\" (UID: \"d2a2026b-4828-435f-ba14-941e12d3ea36\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2qc8v" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.985146 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-nzhrk\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.985174 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-nzhrk\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.985227 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1968df11-e45d-47a0-a3bd-5dad31d14c8c-trusted-ca\") pod \"console-operator-58897d9998-bxhwf\" (UID: \"1968df11-e45d-47a0-a3bd-5dad31d14c8c\") " 
pod="openshift-console-operator/console-operator-58897d9998-bxhwf" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.985249 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/296f71e4-2a83-467e-b40a-87b1e40330b9-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-ql695\" (UID: \"296f71e4-2a83-467e-b40a-87b1e40330b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql695" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.985264 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/296f71e4-2a83-467e-b40a-87b1e40330b9-audit-dir\") pod \"apiserver-7bbb656c7d-ql695\" (UID: \"296f71e4-2a83-467e-b40a-87b1e40330b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql695" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.985304 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-nzhrk\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.985340 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd6ad6fb-b69a-465f-8cb6-797f1c097dcd-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-r827v\" (UID: \"cd6ad6fb-b69a-465f-8cb6-797f1c097dcd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r827v" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.985359 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/5d70b68b-24e2-45e2-91ca-8e1b88805b1a-etcd-service-ca\") pod \"etcd-operator-b45778765-4rsd7\" (UID: \"5d70b68b-24e2-45e2-91ca-8e1b88805b1a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4rsd7" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.985386 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1968df11-e45d-47a0-a3bd-5dad31d14c8c-serving-cert\") pod \"console-operator-58897d9998-bxhwf\" (UID: \"1968df11-e45d-47a0-a3bd-5dad31d14c8c\") " pod="openshift-console-operator/console-operator-58897d9998-bxhwf" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.985403 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5d70b68b-24e2-45e2-91ca-8e1b88805b1a-etcd-ca\") pod \"etcd-operator-b45778765-4rsd7\" (UID: \"5d70b68b-24e2-45e2-91ca-8e1b88805b1a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4rsd7" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.985423 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcnr4\" (UniqueName: \"kubernetes.io/projected/7e4b1718-5732-4498-b355-25832e158871-kube-api-access-pcnr4\") pod \"collect-profiles-29460720-k78dc\" (UID: \"7e4b1718-5732-4498-b355-25832e158871\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460720-k78dc" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.985443 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-nzhrk\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.985462 4754 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1968df11-e45d-47a0-a3bd-5dad31d14c8c-config\") pod \"console-operator-58897d9998-bxhwf\" (UID: \"1968df11-e45d-47a0-a3bd-5dad31d14c8c\") " pod="openshift-console-operator/console-operator-58897d9998-bxhwf" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.985482 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/296f71e4-2a83-467e-b40a-87b1e40330b9-serving-cert\") pod \"apiserver-7bbb656c7d-ql695\" (UID: \"296f71e4-2a83-467e-b40a-87b1e40330b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql695" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.985503 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd6ad6fb-b69a-465f-8cb6-797f1c097dcd-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-r827v\" (UID: \"cd6ad6fb-b69a-465f-8cb6-797f1c097dcd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r827v" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.985525 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-nzhrk\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.985544 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40317dbe-a59b-4fc6-938d-91a19217745d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-dzs5t\" (UID: \"40317dbe-a59b-4fc6-938d-91a19217745d\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dzs5t" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.985561 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkw6m\" (UniqueName: \"kubernetes.io/projected/296f71e4-2a83-467e-b40a-87b1e40330b9-kube-api-access-zkw6m\") pod \"apiserver-7bbb656c7d-ql695\" (UID: \"296f71e4-2a83-467e-b40a-87b1e40330b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql695" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.985581 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0881b9ae-bc95-42f6-8dd3-73a86569dde7-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-mhcqd\" (UID: \"0881b9ae-bc95-42f6-8dd3-73a86569dde7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mhcqd" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.985603 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-nzhrk\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.985633 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24911cb5-f93b-4852-adbc-d06821f34d47-serving-cert\") pod \"service-ca-operator-777779d784-dx842\" (UID: \"24911cb5-f93b-4852-adbc-d06821f34d47\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-dx842" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.985662 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/743b4687-81ef-4cd4-9910-dc7ba348e457-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-v7wtk\" (UID: \"743b4687-81ef-4cd4-9910-dc7ba348e457\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-v7wtk" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.985684 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5q48\" (UniqueName: \"kubernetes.io/projected/40317dbe-a59b-4fc6-938d-91a19217745d-kube-api-access-q5q48\") pod \"openshift-controller-manager-operator-756b6f6bc6-dzs5t\" (UID: \"40317dbe-a59b-4fc6-938d-91a19217745d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dzs5t" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.985702 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d70b68b-24e2-45e2-91ca-8e1b88805b1a-config\") pod \"etcd-operator-b45778765-4rsd7\" (UID: \"5d70b68b-24e2-45e2-91ca-8e1b88805b1a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4rsd7" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.985766 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d70b68b-24e2-45e2-91ca-8e1b88805b1a-serving-cert\") pod \"etcd-operator-b45778765-4rsd7\" (UID: \"5d70b68b-24e2-45e2-91ca-8e1b88805b1a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4rsd7" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.985809 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e06065f7-b8b3-4c3e-820c-f0051f3a6f6d-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xnn4d\" (UID: \"e06065f7-b8b3-4c3e-820c-f0051f3a6f6d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xnn4d" Jan 
05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.985834 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7bxg\" (UniqueName: \"kubernetes.io/projected/2dc37596-5879-4c76-be1b-5c95376cf1f2-kube-api-access-p7bxg\") pod \"dns-operator-744455d44c-w7nk2\" (UID: \"2dc37596-5879-4c76-be1b-5c95376cf1f2\") " pod="openshift-dns-operator/dns-operator-744455d44c-w7nk2" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.985854 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2dc37596-5879-4c76-be1b-5c95376cf1f2-metrics-tls\") pod \"dns-operator-744455d44c-w7nk2\" (UID: \"2dc37596-5879-4c76-be1b-5c95376cf1f2\") " pod="openshift-dns-operator/dns-operator-744455d44c-w7nk2" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.985871 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51e67cd8-254d-4921-9140-64a80c3d3690-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-fsmbh\" (UID: \"51e67cd8-254d-4921-9140-64a80c3d3690\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fsmbh" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.985889 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51e67cd8-254d-4921-9140-64a80c3d3690-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-fsmbh\" (UID: \"51e67cd8-254d-4921-9140-64a80c3d3690\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fsmbh" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.985905 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swl46\" (UniqueName: 
\"kubernetes.io/projected/51e67cd8-254d-4921-9140-64a80c3d3690-kube-api-access-swl46\") pod \"kube-storage-version-migrator-operator-b67b599dd-fsmbh\" (UID: \"51e67cd8-254d-4921-9140-64a80c3d3690\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fsmbh" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.985936 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-nzhrk\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.985937 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-nzhrk\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.985962 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e06065f7-b8b3-4c3e-820c-f0051f3a6f6d-config\") pod \"machine-api-operator-5694c8668f-xnn4d\" (UID: \"e06065f7-b8b3-4c3e-820c-f0051f3a6f6d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xnn4d" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.985982 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0881b9ae-bc95-42f6-8dd3-73a86569dde7-proxy-tls\") pod \"machine-config-controller-84d6567774-mhcqd\" (UID: \"0881b9ae-bc95-42f6-8dd3-73a86569dde7\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mhcqd" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.986001 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsc6v\" (UniqueName: \"kubernetes.io/projected/0881b9ae-bc95-42f6-8dd3-73a86569dde7-kube-api-access-dsc6v\") pod \"machine-config-controller-84d6567774-mhcqd\" (UID: \"0881b9ae-bc95-42f6-8dd3-73a86569dde7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mhcqd" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.986021 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/701a029a-d767-4681-8bf3-dffdc73e93f5-audit-dir\") pod \"oauth-openshift-558db77b4-nzhrk\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.986048 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzjzh\" (UniqueName: \"kubernetes.io/projected/701a029a-d767-4681-8bf3-dffdc73e93f5-kube-api-access-jzjzh\") pod \"oauth-openshift-558db77b4-nzhrk\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.986074 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5d70b68b-24e2-45e2-91ca-8e1b88805b1a-etcd-client\") pod \"etcd-operator-b45778765-4rsd7\" (UID: \"5d70b68b-24e2-45e2-91ca-8e1b88805b1a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4rsd7" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.986091 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhw22\" (UniqueName: 
\"kubernetes.io/projected/24911cb5-f93b-4852-adbc-d06821f34d47-kube-api-access-bhw22\") pod \"service-ca-operator-777779d784-dx842\" (UID: \"24911cb5-f93b-4852-adbc-d06821f34d47\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-dx842" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.986635 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-nzhrk\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.987239 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1968df11-e45d-47a0-a3bd-5dad31d14c8c-config\") pod \"console-operator-58897d9998-bxhwf\" (UID: \"1968df11-e45d-47a0-a3bd-5dad31d14c8c\") " pod="openshift-console-operator/console-operator-58897d9998-bxhwf" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.987404 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd6ad6fb-b69a-465f-8cb6-797f1c097dcd-config\") pod \"kube-apiserver-operator-766d6c64bb-r827v\" (UID: \"cd6ad6fb-b69a-465f-8cb6-797f1c097dcd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r827v" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.987664 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2a2026b-4828-435f-ba14-941e12d3ea36-config\") pod \"machine-approver-56656f9798-2qc8v\" (UID: \"d2a2026b-4828-435f-ba14-941e12d3ea36\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2qc8v" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.987983 4754 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d2a2026b-4828-435f-ba14-941e12d3ea36-auth-proxy-config\") pod \"machine-approver-56656f9798-2qc8v\" (UID: \"d2a2026b-4828-435f-ba14-941e12d3ea36\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2qc8v" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.988242 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-nzhrk\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.988457 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40317dbe-a59b-4fc6-938d-91a19217745d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-dzs5t\" (UID: \"40317dbe-a59b-4fc6-938d-91a19217745d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dzs5t" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.988829 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-nzhrk\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.989165 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e06065f7-b8b3-4c3e-820c-f0051f3a6f6d-config\") pod \"machine-api-operator-5694c8668f-xnn4d\" (UID: \"e06065f7-b8b3-4c3e-820c-f0051f3a6f6d\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-xnn4d" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.989178 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e06065f7-b8b3-4c3e-820c-f0051f3a6f6d-images\") pod \"machine-api-operator-5694c8668f-xnn4d\" (UID: \"e06065f7-b8b3-4c3e-820c-f0051f3a6f6d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xnn4d" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.989434 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/701a029a-d767-4681-8bf3-dffdc73e93f5-audit-dir\") pod \"oauth-openshift-558db77b4-nzhrk\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.989510 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40317dbe-a59b-4fc6-938d-91a19217745d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-dzs5t\" (UID: \"40317dbe-a59b-4fc6-938d-91a19217745d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dzs5t" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.989633 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/296f71e4-2a83-467e-b40a-87b1e40330b9-audit-dir\") pod \"apiserver-7bbb656c7d-ql695\" (UID: \"296f71e4-2a83-467e-b40a-87b1e40330b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql695" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.990170 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/296f71e4-2a83-467e-b40a-87b1e40330b9-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-ql695\" (UID: 
\"296f71e4-2a83-467e-b40a-87b1e40330b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql695" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.990189 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/701a029a-d767-4681-8bf3-dffdc73e93f5-audit-policies\") pod \"oauth-openshift-558db77b4-nzhrk\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.990383 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0881b9ae-bc95-42f6-8dd3-73a86569dde7-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-mhcqd\" (UID: \"0881b9ae-bc95-42f6-8dd3-73a86569dde7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mhcqd" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.991011 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/296f71e4-2a83-467e-b40a-87b1e40330b9-serving-cert\") pod \"apiserver-7bbb656c7d-ql695\" (UID: \"296f71e4-2a83-467e-b40a-87b1e40330b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql695" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.991222 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd6ad6fb-b69a-465f-8cb6-797f1c097dcd-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-r827v\" (UID: \"cd6ad6fb-b69a-465f-8cb6-797f1c097dcd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r827v" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.991256 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/296f71e4-2a83-467e-b40a-87b1e40330b9-etcd-client\") 
pod \"apiserver-7bbb656c7d-ql695\" (UID: \"296f71e4-2a83-467e-b40a-87b1e40330b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql695" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.991235 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e06065f7-b8b3-4c3e-820c-f0051f3a6f6d-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xnn4d\" (UID: \"e06065f7-b8b3-4c3e-820c-f0051f3a6f6d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xnn4d" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.991384 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1968df11-e45d-47a0-a3bd-5dad31d14c8c-trusted-ca\") pod \"console-operator-58897d9998-bxhwf\" (UID: \"1968df11-e45d-47a0-a3bd-5dad31d14c8c\") " pod="openshift-console-operator/console-operator-58897d9998-bxhwf" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.991916 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-nzhrk\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.992090 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-nzhrk\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.992342 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-nzhrk\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.992629 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-nzhrk\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.992666 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-nzhrk\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.994094 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1968df11-e45d-47a0-a3bd-5dad31d14c8c-serving-cert\") pod \"console-operator-58897d9998-bxhwf\" (UID: \"1968df11-e45d-47a0-a3bd-5dad31d14c8c\") " pod="openshift-console-operator/console-operator-58897d9998-bxhwf" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.994368 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-nzhrk\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.994418 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/296f71e4-2a83-467e-b40a-87b1e40330b9-encryption-config\") pod \"apiserver-7bbb656c7d-ql695\" (UID: \"296f71e4-2a83-467e-b40a-87b1e40330b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql695" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.995673 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2dc37596-5879-4c76-be1b-5c95376cf1f2-metrics-tls\") pod \"dns-operator-744455d44c-w7nk2\" (UID: \"2dc37596-5879-4c76-be1b-5c95376cf1f2\") " pod="openshift-dns-operator/dns-operator-744455d44c-w7nk2" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.996177 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-nzhrk\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.996829 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d2a2026b-4828-435f-ba14-941e12d3ea36-machine-approver-tls\") pod \"machine-approver-56656f9798-2qc8v\" (UID: \"d2a2026b-4828-435f-ba14-941e12d3ea36\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2qc8v" Jan 05 20:05:49 crc kubenswrapper[4754]: I0105 20:05:49.997670 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.018603 4754 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.039552 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.057675 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.057943 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d70b68b-24e2-45e2-91ca-8e1b88805b1a-serving-cert\") pod \"etcd-operator-b45778765-4rsd7\" (UID: \"5d70b68b-24e2-45e2-91ca-8e1b88805b1a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4rsd7" Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.063699 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5d70b68b-24e2-45e2-91ca-8e1b88805b1a-etcd-client\") pod \"etcd-operator-b45778765-4rsd7\" (UID: \"5d70b68b-24e2-45e2-91ca-8e1b88805b1a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4rsd7" Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.079598 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.082246 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d70b68b-24e2-45e2-91ca-8e1b88805b1a-config\") pod \"etcd-operator-b45778765-4rsd7\" (UID: \"5d70b68b-24e2-45e2-91ca-8e1b88805b1a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4rsd7" Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.097924 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 05 
20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.100456 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5d70b68b-24e2-45e2-91ca-8e1b88805b1a-etcd-ca\") pod \"etcd-operator-b45778765-4rsd7\" (UID: \"5d70b68b-24e2-45e2-91ca-8e1b88805b1a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4rsd7"
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.117616 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.128041 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5d70b68b-24e2-45e2-91ca-8e1b88805b1a-etcd-service-ca\") pod \"etcd-operator-b45778765-4rsd7\" (UID: \"5d70b68b-24e2-45e2-91ca-8e1b88805b1a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4rsd7"
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.138271 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.158333 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.178111 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.197771 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.218097 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.237143 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.258614 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.276916 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.297660 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.317687 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.337999 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.358211 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.362611 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51e67cd8-254d-4921-9140-64a80c3d3690-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-fsmbh\" (UID: \"51e67cd8-254d-4921-9140-64a80c3d3690\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fsmbh"
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.377987 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.399887 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.410315 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51e67cd8-254d-4921-9140-64a80c3d3690-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-fsmbh\" (UID: \"51e67cd8-254d-4921-9140-64a80c3d3690\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fsmbh"
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.418600 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.438575 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.459038 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.479125 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.486818 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24911cb5-f93b-4852-adbc-d06821f34d47-config\") pod \"service-ca-operator-777779d784-dx842\" (UID: \"24911cb5-f93b-4852-adbc-d06821f34d47\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-dx842"
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.526184 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.526183 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.538566 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.558058 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.563616 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24911cb5-f93b-4852-adbc-d06821f34d47-serving-cert\") pod \"service-ca-operator-777779d784-dx842\" (UID: \"24911cb5-f93b-4852-adbc-d06821f34d47\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-dx842"
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.582532 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.597275 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.617424 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.640799 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.659986 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.678030 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.706316 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.718382 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.725079 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/743b4687-81ef-4cd4-9910-dc7ba348e457-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-v7wtk\" (UID: \"743b4687-81ef-4cd4-9910-dc7ba348e457\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-v7wtk"
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.737589 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.757594 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.776313 4754 request.go:700] Waited for 1.003759526s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.779508 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.798445 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.805158 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/743b4687-81ef-4cd4-9910-dc7ba348e457-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-v7wtk\" (UID: \"743b4687-81ef-4cd4-9910-dc7ba348e457\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-v7wtk"
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.819503 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.838829 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.858889 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.879422 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.900896 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.908344 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e4b1718-5732-4498-b355-25832e158871-secret-volume\") pod \"collect-profiles-29460720-k78dc\" (UID: \"7e4b1718-5732-4498-b355-25832e158871\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460720-k78dc"
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.917869 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.938515 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.949333 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e4b1718-5732-4498-b355-25832e158871-config-volume\") pod \"collect-profiles-29460720-k78dc\" (UID: \"7e4b1718-5732-4498-b355-25832e158871\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460720-k78dc"
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.958973 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.980439 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.993077 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0881b9ae-bc95-42f6-8dd3-73a86569dde7-proxy-tls\") pod \"machine-config-controller-84d6567774-mhcqd\" (UID: \"0881b9ae-bc95-42f6-8dd3-73a86569dde7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mhcqd"
Jan 05 20:05:50 crc kubenswrapper[4754]: I0105 20:05:50.997744 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.017495 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.060175 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.078193 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.117525 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.137933 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.158532 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.183391 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.198522 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.218151 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.238354 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.257632 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.277761 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.298517 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.317572 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.339854 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.358751 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.377429 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.398122 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.417180 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.436885 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.458224 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.478211 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.498607 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.539966 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/41bc6c99-ab80-4d88-841e-7472bf4ace8c-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-qnvhg\" (UID: \"41bc6c99-ab80-4d88-841e-7472bf4ace8c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qnvhg"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.576058 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh9j4\" (UniqueName: \"kubernetes.io/projected/b2c91790-e7dd-4391-a68b-f5a4a052ca72-kube-api-access-mh9j4\") pod \"route-controller-manager-6576b87f9c-8jjnh\" (UID: \"b2c91790-e7dd-4391-a68b-f5a4a052ca72\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8jjnh"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.583639 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnbkj\" (UniqueName: \"kubernetes.io/projected/41bc6c99-ab80-4d88-841e-7472bf4ace8c-kube-api-access-dnbkj\") pod \"cluster-image-registry-operator-dc59b4c8b-qnvhg\" (UID: \"41bc6c99-ab80-4d88-841e-7472bf4ace8c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qnvhg"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.587925 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9vzl4"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.587933 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.587966 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.587993 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.592196 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwpn4\" (UniqueName: \"kubernetes.io/projected/3b6b3476-9191-4367-9437-ee9db002d523-kube-api-access-nwpn4\") pod \"cluster-samples-operator-665b6dd947-99q7g\" (UID: \"3b6b3476-9191-4367-9437-ee9db002d523\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-99q7g"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.603265 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8jjnh"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.615570 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7lhn\" (UniqueName: \"kubernetes.io/projected/3eb6b845-fab0-4359-87bd-17a33f9e78ca-kube-api-access-q7lhn\") pod \"authentication-operator-69f744f599-hz59l\" (UID: \"3eb6b845-fab0-4359-87bd-17a33f9e78ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hz59l"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.622859 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qnvhg"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.634521 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxn7v\" (UniqueName: \"kubernetes.io/projected/d43638d3-9f97-4bc9-a40a-57280a8ed643-kube-api-access-sxn7v\") pod \"openshift-apiserver-operator-796bbdcf4f-zl4mz\" (UID: \"d43638d3-9f97-4bc9-a40a-57280a8ed643\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zl4mz"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.652531 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7wgc\" (UniqueName: \"kubernetes.io/projected/fd7afa25-8cba-4be3-a6d7-1b30d7adf834-kube-api-access-v7wgc\") pod \"controller-manager-879f6c89f-srhjf\" (UID: \"fd7afa25-8cba-4be3-a6d7-1b30d7adf834\") " pod="openshift-controller-manager/controller-manager-879f6c89f-srhjf"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.685364 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s8tp\" (UniqueName: \"kubernetes.io/projected/9f282a77-59b3-4b2c-8c62-a2526d2a77b5-kube-api-access-5s8tp\") pod \"openshift-config-operator-7777fb866f-pjzck\" (UID: \"9f282a77-59b3-4b2c-8c62-a2526d2a77b5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pjzck"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.696094 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm5pm\" (UniqueName: \"kubernetes.io/projected/138601a3-19fd-44e5-b817-49b048fe3e88-kube-api-access-cm5pm\") pod \"downloads-7954f5f757-q6l9h\" (UID: \"138601a3-19fd-44e5-b817-49b048fe3e88\") " pod="openshift-console/downloads-7954f5f757-q6l9h"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.697896 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.717730 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.738061 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.757748 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.769744 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-99q7g"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.777703 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.787927 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-srhjf"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.796074 4754 request.go:700] Waited for 1.89050701s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Dcanary-serving-cert&limit=500&resourceVersion=0
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.798012 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zl4mz"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.798161 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.818235 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.837377 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.837796 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-hz59l"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.857548 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.874792 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pjzck"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.885354 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qnvhg"]
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.885920 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 05 20:05:51 crc kubenswrapper[4754]: W0105 20:05:51.901412 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41bc6c99_ab80_4d88_841e_7472bf4ace8c.slice/crio-1d6e81863732a9bc4b2f7d497da387056738403cd0235f900f1a26768b78a13b WatchSource:0}: Error finding container 1d6e81863732a9bc4b2f7d497da387056738403cd0235f900f1a26768b78a13b: Status 404 returned error can't find the container with id 1d6e81863732a9bc4b2f7d497da387056738403cd0235f900f1a26768b78a13b
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.901654 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.911416 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8jjnh"]
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.913091 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qnvhg" event={"ID":"41bc6c99-ab80-4d88-841e-7472bf4ace8c","Type":"ContainerStarted","Data":"1d6e81863732a9bc4b2f7d497da387056738403cd0235f900f1a26768b78a13b"}
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.920502 4754 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.935584 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-q6l9h"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.938317 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.977044 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qfrt\" (UniqueName: \"kubernetes.io/projected/e06065f7-b8b3-4c3e-820c-f0051f3a6f6d-kube-api-access-4qfrt\") pod \"machine-api-operator-5694c8668f-xnn4d\" (UID: \"e06065f7-b8b3-4c3e-820c-f0051f3a6f6d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xnn4d"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.992198 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-xnn4d"
Jan 05 20:05:51 crc kubenswrapper[4754]: I0105 20:05:51.998506 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/743b4687-81ef-4cd4-9910-dc7ba348e457-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-v7wtk\" (UID: \"743b4687-81ef-4cd4-9910-dc7ba348e457\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-v7wtk"
Jan 05 20:05:52 crc kubenswrapper[4754]: I0105 20:05:52.021554 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hlpn\" (UniqueName: \"kubernetes.io/projected/1968df11-e45d-47a0-a3bd-5dad31d14c8c-kube-api-access-8hlpn\") pod \"console-operator-58897d9998-bxhwf\" (UID: \"1968df11-e45d-47a0-a3bd-5dad31d14c8c\") " pod="openshift-console-operator/console-operator-58897d9998-bxhwf"
Jan 05 20:05:52 crc kubenswrapper[4754]: I0105 20:05:52.034536 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhw22\" (UniqueName: \"kubernetes.io/projected/24911cb5-f93b-4852-adbc-d06821f34d47-kube-api-access-bhw22\") pod \"service-ca-operator-777779d784-dx842\" (UID: \"24911cb5-f93b-4852-adbc-d06821f34d47\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-dx842"
Jan 05 20:05:52 crc kubenswrapper[4754]: I0105 20:05:52.042408 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-99q7g"]
Jan 05 20:05:52 crc kubenswrapper[4754]: I0105 20:05:52.053334 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd6ad6fb-b69a-465f-8cb6-797f1c097dcd-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-r827v\" (UID: \"cd6ad6fb-b69a-465f-8cb6-797f1c097dcd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r827v"
Jan 05 20:05:52 crc kubenswrapper[4754]: I0105 20:05:52.057175 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-bxhwf"
Jan 05 20:05:52 crc kubenswrapper[4754]: I0105 20:05:52.069576 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r827v"
Jan 05 20:05:52 crc kubenswrapper[4754]: I0105 20:05:52.078676 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7bxg\" (UniqueName: \"kubernetes.io/projected/2dc37596-5879-4c76-be1b-5c95376cf1f2-kube-api-access-p7bxg\") pod \"dns-operator-744455d44c-w7nk2\" (UID: \"2dc37596-5879-4c76-be1b-5c95376cf1f2\") " pod="openshift-dns-operator/dns-operator-744455d44c-w7nk2"
Jan 05 20:05:52 crc kubenswrapper[4754]: I0105 20:05:52.085453 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zl4mz"]
Jan 05 20:05:52 crc kubenswrapper[4754]: I0105 20:05:52.088901 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-srhjf"]
Jan 05 20:05:52 crc kubenswrapper[4754]: I0105 20:05:52.092165 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9vc6\" (UniqueName: \"kubernetes.io/projected/5d70b68b-24e2-45e2-91ca-8e1b88805b1a-kube-api-access-g9vc6\") pod \"etcd-operator-b45778765-4rsd7\" (UID: \"5d70b68b-24e2-45e2-91ca-8e1b88805b1a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4rsd7"
Jan 05 20:05:52 crc kubenswrapper[4754]: I0105 20:05:52.125768 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swl46\" (UniqueName: \"kubernetes.io/projected/51e67cd8-254d-4921-9140-64a80c3d3690-kube-api-access-swl46\") pod \"kube-storage-version-migrator-operator-b67b599dd-fsmbh\" (UID: \"51e67cd8-254d-4921-9140-64a80c3d3690\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fsmbh"
Jan 05 20:05:52 crc kubenswrapper[4754]: I0105 20:05:52.129865 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fsmbh"
Jan 05 20:05:52 crc kubenswrapper[4754]: I0105 20:05:52.145860 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5q48\" (UniqueName: \"kubernetes.io/projected/40317dbe-a59b-4fc6-938d-91a19217745d-kube-api-access-q5q48\") pod \"openshift-controller-manager-operator-756b6f6bc6-dzs5t\" (UID: \"40317dbe-a59b-4fc6-938d-91a19217745d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dzs5t"
Jan 05 20:05:52 crc kubenswrapper[4754]: I0105 20:05:52.161515 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-dx842"
Jan 05 20:05:52 crc kubenswrapper[4754]: I0105 20:05:52.170860 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmjq5\" (UniqueName: \"kubernetes.io/projected/d2a2026b-4828-435f-ba14-941e12d3ea36-kube-api-access-zmjq5\") pod \"machine-approver-56656f9798-2qc8v\" (UID: \"d2a2026b-4828-435f-ba14-941e12d3ea36\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2qc8v"
Jan 05 20:05:52 crc kubenswrapper[4754]: I0105 20:05:52.183664 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-v7wtk"
Jan 05 20:05:52 crc kubenswrapper[4754]: I0105 20:05:52.190513 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkw6m\" (UniqueName: \"kubernetes.io/projected/296f71e4-2a83-467e-b40a-87b1e40330b9-kube-api-access-zkw6m\") pod \"apiserver-7bbb656c7d-ql695\" (UID: \"296f71e4-2a83-467e-b40a-87b1e40330b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql695"
Jan 05 20:05:52 crc kubenswrapper[4754]: I0105 20:05:52.198228 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-hz59l"]
Jan 05 20:05:52 crc kubenswrapper[4754]: I0105 20:05:52.204210 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcnr4\" (UniqueName: \"kubernetes.io/projected/7e4b1718-5732-4498-b355-25832e158871-kube-api-access-pcnr4\") pod \"collect-profiles-29460720-k78dc\" (UID: \"7e4b1718-5732-4498-b355-25832e158871\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460720-k78dc"
Jan 05 20:05:52 crc kubenswrapper[4754]: I0105 20:05:52.204731 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-pjzck"]
Jan 05 20:05:52 crc kubenswrapper[4754]: I0105 20:05:52.221606 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsc6v\" (UniqueName: \"kubernetes.io/projected/0881b9ae-bc95-42f6-8dd3-73a86569dde7-kube-api-access-dsc6v\") pod \"machine-config-controller-84d6567774-mhcqd\" (UID: \"0881b9ae-bc95-42f6-8dd3-73a86569dde7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mhcqd"
Jan 05 20:05:52 crc kubenswrapper[4754]: I0105 20:05:52.239446 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzjzh\" (UniqueName: \"kubernetes.io/projected/701a029a-d767-4681-8bf3-dffdc73e93f5-kube-api-access-jzjzh\") pod \"oauth-openshift-558db77b4-nzhrk\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk"
Jan 05 20:05:52 crc kubenswrapper[4754]: I0105 20:05:52.279263 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 05 20:05:52 crc kubenswrapper[4754]: I0105 20:05:52.281493 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-q6l9h"]
Jan 05 20:05:52 crc kubenswrapper[4754]: I0105 20:05:52.297083 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 05 20:05:52 crc kubenswrapper[4754]: I0105 20:05:52.305265 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xnn4d"]
Jan 05 20:05:52 crc kubenswrapper[4754]: I0105 20:05:52.318832 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 05 20:05:52 crc kubenswrapper[4754]: I0105 20:05:52.337911 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 05 20:05:52 crc kubenswrapper[4754]: I0105 20:05:52.339094 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r827v"]
Jan 05 20:05:52 crc kubenswrapper[4754]: I0105 20:05:52.358924 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 05 20:05:52 crc kubenswrapper[4754]: I0105 20:05:52.360681 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-bxhwf"]
Jan 05 20:05:52 crc kubenswrapper[4754]: I0105 20:05:52.377882 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.619389 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk"
Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.619551 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dzs5t"
Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.620729 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2qc8v"
Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.621542 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql695"
Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.621895 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-4rsd7"
Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.622828 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-w7nk2"
Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.622998 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460720-k78dc"
Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.623760 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mhcqd" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.633520 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cde779ab-81b0-4dc7-a6a4-82db63d46577-ca-trust-extracted\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.633652 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cde779ab-81b0-4dc7-a6a4-82db63d46577-registry-certificates\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.633695 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cde779ab-81b0-4dc7-a6a4-82db63d46577-registry-tls\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.633827 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fd4959ff-bb15-47bd-ae6c-1b0e53fc84f1-metrics-certs\") pod \"network-metrics-daemon-9vzl4\" (UID: \"fd4959ff-bb15-47bd-ae6c-1b0e53fc84f1\") " pod="openshift-multus/network-metrics-daemon-9vzl4" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.633885 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/cde779ab-81b0-4dc7-a6a4-82db63d46577-bound-sa-token\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.637115 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cde779ab-81b0-4dc7-a6a4-82db63d46577-trusted-ca\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.637212 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cde779ab-81b0-4dc7-a6a4-82db63d46577-installation-pull-secrets\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.637282 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brsmv\" (UniqueName: \"kubernetes.io/projected/cde779ab-81b0-4dc7-a6a4-82db63d46577-kube-api-access-brsmv\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.649403 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" 
Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.650782 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fd4959ff-bb15-47bd-ae6c-1b0e53fc84f1-metrics-certs\") pod \"network-metrics-daemon-9vzl4\" (UID: \"fd4959ff-bb15-47bd-ae6c-1b0e53fc84f1\") " pod="openshift-multus/network-metrics-daemon-9vzl4" Jan 05 20:05:53 crc kubenswrapper[4754]: E0105 20:05:53.650826 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:05:54.150797384 +0000 UTC m=+40.859981288 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.670001 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zl4mz" event={"ID":"d43638d3-9f97-4bc9-a40a-57280a8ed643","Type":"ContainerStarted","Data":"4764ebacafc911f220d3465881b393a639df77165d6da92961bbb55c4fe56d22"} Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.670041 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-srhjf" event={"ID":"fd7afa25-8cba-4be3-a6d7-1b30d7adf834","Type":"ContainerStarted","Data":"60d8b3d590f3e087162549fea5b822a6a97fd2e7735ea7da1b5c3dbb533e7534"} Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.670055 4754 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8jjnh" event={"ID":"b2c91790-e7dd-4391-a68b-f5a4a052ca72","Type":"ContainerStarted","Data":"ee33f65ea8883ec29f42ab26a27863a837bfff15ef579edd659d588d6958f1ee"} Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.709667 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9vzl4" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.751590 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.751754 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1c252969-10b1-47d5-afef-b58cb4895766-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lcnw9\" (UID: \"1c252969-10b1-47d5-afef-b58cb4895766\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lcnw9" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.751832 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6706045f-88d8-4afd-867a-d0560b8fb9e0-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-mzj9s\" (UID: \"6706045f-88d8-4afd-867a-d0560b8fb9e0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mzj9s" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.751871 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/f3ea7eb1-87d5-476b-bb30-2c94421afc41-console-oauth-config\") pod \"console-f9d7485db-4xgft\" (UID: \"f3ea7eb1-87d5-476b-bb30-2c94421afc41\") " pod="openshift-console/console-f9d7485db-4xgft" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.751903 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce973cd6-320d-45f5-a09a-037560783218-trusted-ca-bundle\") pod \"apiserver-76f77b778f-pxqjg\" (UID: \"ce973cd6-320d-45f5-a09a-037560783218\") " pod="openshift-apiserver/apiserver-76f77b778f-pxqjg" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.751923 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5e9d216-d5aa-409f-b657-259b931ceaf5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-57nsw\" (UID: \"c5e9d216-d5aa-409f-b657-259b931ceaf5\") " pod="openshift-marketplace/marketplace-operator-79b997595-57nsw" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.751943 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/03f29d3f-9221-484d-aa70-8889d57f7de1-registration-dir\") pod \"csi-hostpathplugin-njtd4\" (UID: \"03f29d3f-9221-484d-aa70-8889d57f7de1\") " pod="hostpath-provisioner/csi-hostpathplugin-njtd4" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.751968 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e833c756-585b-44de-8ab0-ce6e72970539-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-7rbg4\" (UID: \"e833c756-585b-44de-8ab0-ce6e72970539\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7rbg4" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 
20:05:53.752001 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f3ea7eb1-87d5-476b-bb30-2c94421afc41-console-serving-cert\") pod \"console-f9d7485db-4xgft\" (UID: \"f3ea7eb1-87d5-476b-bb30-2c94421afc41\") " pod="openshift-console/console-f9d7485db-4xgft" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.752062 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cde779ab-81b0-4dc7-a6a4-82db63d46577-registry-certificates\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.752121 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cde779ab-81b0-4dc7-a6a4-82db63d46577-registry-tls\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.752144 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f06b2b1f-1c15-47e8-a043-be84aa593218-proxy-tls\") pod \"machine-config-operator-74547568cd-s26xc\" (UID: \"f06b2b1f-1c15-47e8-a043-be84aa593218\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s26xc" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.752166 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d0c79b4d-1027-4486-9eeb-1dc6f77f2ee0-node-bootstrap-token\") pod \"machine-config-server-5vs9t\" (UID: 
\"d0c79b4d-1027-4486-9eeb-1dc6f77f2ee0\") " pod="openshift-machine-config-operator/machine-config-server-5vs9t" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.752212 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x9gk\" (UniqueName: \"kubernetes.io/projected/1c252969-10b1-47d5-afef-b58cb4895766-kube-api-access-2x9gk\") pod \"olm-operator-6b444d44fb-lcnw9\" (UID: \"1c252969-10b1-47d5-afef-b58cb4895766\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lcnw9" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.752232 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/03f29d3f-9221-484d-aa70-8889d57f7de1-plugins-dir\") pod \"csi-hostpathplugin-njtd4\" (UID: \"03f29d3f-9221-484d-aa70-8889d57f7de1\") " pod="hostpath-provisioner/csi-hostpathplugin-njtd4" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.752266 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6f9x\" (UniqueName: \"kubernetes.io/projected/6706045f-88d8-4afd-867a-d0560b8fb9e0-kube-api-access-g6f9x\") pod \"package-server-manager-789f6589d5-mzj9s\" (UID: \"6706045f-88d8-4afd-867a-d0560b8fb9e0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mzj9s" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.752307 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/459e0439-ea70-4646-9cf7-2029f79e64b2-ready\") pod \"cni-sysctl-allowlist-ds-rqlrs\" (UID: \"459e0439-ea70-4646-9cf7-2029f79e64b2\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rqlrs" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.752401 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2920de00-a02e-425a-b3ae-2a7056eff257-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-scx4b\" (UID: \"2920de00-a02e-425a-b3ae-2a7056eff257\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-scx4b" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.752449 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d2e8cb54-b9ec-4e60-897f-edf05d4cd9b9-signing-cabundle\") pod \"service-ca-9c57cc56f-csf7z\" (UID: \"d2e8cb54-b9ec-4e60-897f-edf05d4cd9b9\") " pod="openshift-service-ca/service-ca-9c57cc56f-csf7z" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.752502 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkzkv\" (UniqueName: \"kubernetes.io/projected/f39a32ad-c337-4f04-b2ac-9a55729d7d4c-kube-api-access-pkzkv\") pod \"packageserver-d55dfcdfc-sckmz\" (UID: \"f39a32ad-c337-4f04-b2ac-9a55729d7d4c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sckmz" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.752531 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/84640629-f797-4dea-bd98-e5331b7dca5f-trusted-ca\") pod \"ingress-operator-5b745b69d9-nmhmx\" (UID: \"84640629-f797-4dea-bd98-e5331b7dca5f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nmhmx" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.752568 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/84640629-f797-4dea-bd98-e5331b7dca5f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-nmhmx\" (UID: \"84640629-f797-4dea-bd98-e5331b7dca5f\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nmhmx" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.752588 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/632ecc3b-8601-4d41-a4f4-8940db9dedae-config-volume\") pod \"dns-default-9xwbp\" (UID: \"632ecc3b-8601-4d41-a4f4-8940db9dedae\") " pod="openshift-dns/dns-default-9xwbp" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.752622 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cde779ab-81b0-4dc7-a6a4-82db63d46577-bound-sa-token\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.752645 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2920de00-a02e-425a-b3ae-2a7056eff257-config\") pod \"kube-controller-manager-operator-78b949d7b-scx4b\" (UID: \"2920de00-a02e-425a-b3ae-2a7056eff257\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-scx4b" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.752665 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ce973cd6-320d-45f5-a09a-037560783218-etcd-serving-ca\") pod \"apiserver-76f77b778f-pxqjg\" (UID: \"ce973cd6-320d-45f5-a09a-037560783218\") " pod="openshift-apiserver/apiserver-76f77b778f-pxqjg" Jan 05 20:05:53 crc kubenswrapper[4754]: E0105 20:05:53.752699 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:05:54.252677974 +0000 UTC m=+40.961861858 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.752758 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/16642695-5006-4f7d-829c-becc9345dd6e-metrics-certs\") pod \"router-default-5444994796-vnrxg\" (UID: \"16642695-5006-4f7d-829c-becc9345dd6e\") " pod="openshift-ingress/router-default-5444994796-vnrxg" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.752781 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f39a32ad-c337-4f04-b2ac-9a55729d7d4c-tmpfs\") pod \"packageserver-d55dfcdfc-sckmz\" (UID: \"f39a32ad-c337-4f04-b2ac-9a55729d7d4c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sckmz" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.752819 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc7wt\" (UniqueName: \"kubernetes.io/projected/f3ea7eb1-87d5-476b-bb30-2c94421afc41-kube-api-access-vc7wt\") pod \"console-f9d7485db-4xgft\" (UID: \"f3ea7eb1-87d5-476b-bb30-2c94421afc41\") " pod="openshift-console/console-f9d7485db-4xgft" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.752842 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ce973cd6-320d-45f5-a09a-037560783218-node-pullsecrets\") pod \"apiserver-76f77b778f-pxqjg\" (UID: \"ce973cd6-320d-45f5-a09a-037560783218\") " pod="openshift-apiserver/apiserver-76f77b778f-pxqjg" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.752866 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv7sj\" (UniqueName: \"kubernetes.io/projected/ce973cd6-320d-45f5-a09a-037560783218-kube-api-access-kv7sj\") pod \"apiserver-76f77b778f-pxqjg\" (UID: \"ce973cd6-320d-45f5-a09a-037560783218\") " pod="openshift-apiserver/apiserver-76f77b778f-pxqjg" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.752918 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cde779ab-81b0-4dc7-a6a4-82db63d46577-trusted-ca\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.752939 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f39a32ad-c337-4f04-b2ac-9a55729d7d4c-webhook-cert\") pod \"packageserver-d55dfcdfc-sckmz\" (UID: \"f39a32ad-c337-4f04-b2ac-9a55729d7d4c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sckmz" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.752962 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqbtb\" (UniqueName: \"kubernetes.io/projected/d2e8cb54-b9ec-4e60-897f-edf05d4cd9b9-kube-api-access-gqbtb\") pod \"service-ca-9c57cc56f-csf7z\" (UID: \"d2e8cb54-b9ec-4e60-897f-edf05d4cd9b9\") " 
pod="openshift-service-ca/service-ca-9c57cc56f-csf7z" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.752983 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgz29\" (UniqueName: \"kubernetes.io/projected/986880f2-b964-4762-baa1-3536a9ff36e1-kube-api-access-hgz29\") pod \"ingress-canary-m7czd\" (UID: \"986880f2-b964-4762-baa1-3536a9ff36e1\") " pod="openshift-ingress-canary/ingress-canary-m7czd" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.753007 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f82gs\" (UniqueName: \"kubernetes.io/projected/e833c756-585b-44de-8ab0-ce6e72970539-kube-api-access-f82gs\") pod \"multus-admission-controller-857f4d67dd-7rbg4\" (UID: \"e833c756-585b-44de-8ab0-ce6e72970539\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7rbg4" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.753083 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.753105 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/632ecc3b-8601-4d41-a4f4-8940db9dedae-metrics-tls\") pod \"dns-default-9xwbp\" (UID: \"632ecc3b-8601-4d41-a4f4-8940db9dedae\") " pod="openshift-dns/dns-default-9xwbp" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.753140 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/ce973cd6-320d-45f5-a09a-037560783218-etcd-client\") pod \"apiserver-76f77b778f-pxqjg\" (UID: \"ce973cd6-320d-45f5-a09a-037560783218\") " pod="openshift-apiserver/apiserver-76f77b778f-pxqjg" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.753190 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn4gw\" (UniqueName: \"kubernetes.io/projected/632ecc3b-8601-4d41-a4f4-8940db9dedae-kube-api-access-dn4gw\") pod \"dns-default-9xwbp\" (UID: \"632ecc3b-8601-4d41-a4f4-8940db9dedae\") " pod="openshift-dns/dns-default-9xwbp" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.753227 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/03f29d3f-9221-484d-aa70-8889d57f7de1-socket-dir\") pod \"csi-hostpathplugin-njtd4\" (UID: \"03f29d3f-9221-484d-aa70-8889d57f7de1\") " pod="hostpath-provisioner/csi-hostpathplugin-njtd4" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.753252 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f06b2b1f-1c15-47e8-a043-be84aa593218-auth-proxy-config\") pod \"machine-config-operator-74547568cd-s26xc\" (UID: \"f06b2b1f-1c15-47e8-a043-be84aa593218\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s26xc" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.753275 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pt8w\" (UniqueName: \"kubernetes.io/projected/8cdc6a43-0c41-4fe2-939b-805550abecd2-kube-api-access-9pt8w\") pod \"migrator-59844c95c7-5gjwq\" (UID: \"8cdc6a43-0c41-4fe2-939b-805550abecd2\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5gjwq" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.753318 4754 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w6cc\" (UniqueName: \"kubernetes.io/projected/d0c79b4d-1027-4486-9eeb-1dc6f77f2ee0-kube-api-access-9w6cc\") pod \"machine-config-server-5vs9t\" (UID: \"d0c79b4d-1027-4486-9eeb-1dc6f77f2ee0\") " pod="openshift-machine-config-operator/machine-config-server-5vs9t" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.753342 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2hsz\" (UniqueName: \"kubernetes.io/projected/3e55c0bf-988b-4b2e-b44b-2343b48ff9f8-kube-api-access-b2hsz\") pod \"catalog-operator-68c6474976-5zkp6\" (UID: \"3e55c0bf-988b-4b2e-b44b-2343b48ff9f8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5zkp6" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.753449 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f39a32ad-c337-4f04-b2ac-9a55729d7d4c-apiservice-cert\") pod \"packageserver-d55dfcdfc-sckmz\" (UID: \"f39a32ad-c337-4f04-b2ac-9a55729d7d4c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sckmz" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.753498 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3e55c0bf-988b-4b2e-b44b-2343b48ff9f8-srv-cert\") pod \"catalog-operator-68c6474976-5zkp6\" (UID: \"3e55c0bf-988b-4b2e-b44b-2343b48ff9f8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5zkp6" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.753654 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlhn2\" (UniqueName: \"kubernetes.io/projected/16642695-5006-4f7d-829c-becc9345dd6e-kube-api-access-mlhn2\") pod 
\"router-default-5444994796-vnrxg\" (UID: \"16642695-5006-4f7d-829c-becc9345dd6e\") " pod="openshift-ingress/router-default-5444994796-vnrxg" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.753744 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c5e9d216-d5aa-409f-b657-259b931ceaf5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-57nsw\" (UID: \"c5e9d216-d5aa-409f-b657-259b931ceaf5\") " pod="openshift-marketplace/marketplace-operator-79b997595-57nsw" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.753803 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/459e0439-ea70-4646-9cf7-2029f79e64b2-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-rqlrs\" (UID: \"459e0439-ea70-4646-9cf7-2029f79e64b2\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rqlrs" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.753895 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce973cd6-320d-45f5-a09a-037560783218-config\") pod \"apiserver-76f77b778f-pxqjg\" (UID: \"ce973cd6-320d-45f5-a09a-037560783218\") " pod="openshift-apiserver/apiserver-76f77b778f-pxqjg" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.753927 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d0c79b4d-1027-4486-9eeb-1dc6f77f2ee0-certs\") pod \"machine-config-server-5vs9t\" (UID: \"d0c79b4d-1027-4486-9eeb-1dc6f77f2ee0\") " pod="openshift-machine-config-operator/machine-config-server-5vs9t" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.754023 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cde779ab-81b0-4dc7-a6a4-82db63d46577-ca-trust-extracted\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.754113 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ce973cd6-320d-45f5-a09a-037560783218-audit\") pod \"apiserver-76f77b778f-pxqjg\" (UID: \"ce973cd6-320d-45f5-a09a-037560783218\") " pod="openshift-apiserver/apiserver-76f77b778f-pxqjg" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.754150 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x49d6\" (UniqueName: \"kubernetes.io/projected/459e0439-ea70-4646-9cf7-2029f79e64b2-kube-api-access-x49d6\") pod \"cni-sysctl-allowlist-ds-rqlrs\" (UID: \"459e0439-ea70-4646-9cf7-2029f79e64b2\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rqlrs" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.754221 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69llf\" (UniqueName: \"kubernetes.io/projected/c5e9d216-d5aa-409f-b657-259b931ceaf5-kube-api-access-69llf\") pod \"marketplace-operator-79b997595-57nsw\" (UID: \"c5e9d216-d5aa-409f-b657-259b931ceaf5\") " pod="openshift-marketplace/marketplace-operator-79b997595-57nsw" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.754253 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsnzm\" (UniqueName: \"kubernetes.io/projected/3ff8a3aa-2a57-46f2-b0a9-bdc7d494e6e2-kube-api-access-hsnzm\") pod \"control-plane-machine-set-operator-78cbb6b69f-v7gsb\" (UID: \"3ff8a3aa-2a57-46f2-b0a9-bdc7d494e6e2\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-v7gsb" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.754275 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xndzn\" (UniqueName: \"kubernetes.io/projected/03f29d3f-9221-484d-aa70-8889d57f7de1-kube-api-access-xndzn\") pod \"csi-hostpathplugin-njtd4\" (UID: \"03f29d3f-9221-484d-aa70-8889d57f7de1\") " pod="hostpath-provisioner/csi-hostpathplugin-njtd4" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.754330 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f3ea7eb1-87d5-476b-bb30-2c94421afc41-service-ca\") pod \"console-f9d7485db-4xgft\" (UID: \"f3ea7eb1-87d5-476b-bb30-2c94421afc41\") " pod="openshift-console/console-f9d7485db-4xgft" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.754750 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cde779ab-81b0-4dc7-a6a4-82db63d46577-ca-trust-extracted\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:05:53 crc kubenswrapper[4754]: E0105 20:05:53.754908 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:05:54.254898071 +0000 UTC m=+40.964081935 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.754911 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2920de00-a02e-425a-b3ae-2a7056eff257-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-scx4b\" (UID: \"2920de00-a02e-425a-b3ae-2a7056eff257\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-scx4b" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.754962 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f06b2b1f-1c15-47e8-a043-be84aa593218-images\") pod \"machine-config-operator-74547568cd-s26xc\" (UID: \"f06b2b1f-1c15-47e8-a043-be84aa593218\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s26xc" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.754990 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/16642695-5006-4f7d-829c-becc9345dd6e-stats-auth\") pod \"router-default-5444994796-vnrxg\" (UID: \"16642695-5006-4f7d-829c-becc9345dd6e\") " pod="openshift-ingress/router-default-5444994796-vnrxg" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.755017 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/ce973cd6-320d-45f5-a09a-037560783218-image-import-ca\") pod \"apiserver-76f77b778f-pxqjg\" (UID: \"ce973cd6-320d-45f5-a09a-037560783218\") " pod="openshift-apiserver/apiserver-76f77b778f-pxqjg" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.755041 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d2e8cb54-b9ec-4e60-897f-edf05d4cd9b9-signing-key\") pod \"service-ca-9c57cc56f-csf7z\" (UID: \"d2e8cb54-b9ec-4e60-897f-edf05d4cd9b9\") " pod="openshift-service-ca/service-ca-9c57cc56f-csf7z" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.755072 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ce973cd6-320d-45f5-a09a-037560783218-audit-dir\") pod \"apiserver-76f77b778f-pxqjg\" (UID: \"ce973cd6-320d-45f5-a09a-037560783218\") " pod="openshift-apiserver/apiserver-76f77b778f-pxqjg" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.755097 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1c252969-10b1-47d5-afef-b58cb4895766-srv-cert\") pod \"olm-operator-6b444d44fb-lcnw9\" (UID: \"1c252969-10b1-47d5-afef-b58cb4895766\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lcnw9" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.755639 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cde779ab-81b0-4dc7-a6a4-82db63d46577-registry-certificates\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.760582 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/03f29d3f-9221-484d-aa70-8889d57f7de1-csi-data-dir\") pod \"csi-hostpathplugin-njtd4\" (UID: \"03f29d3f-9221-484d-aa70-8889d57f7de1\") " pod="hostpath-provisioner/csi-hostpathplugin-njtd4" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.760616 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3ff8a3aa-2a57-46f2-b0a9-bdc7d494e6e2-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-v7gsb\" (UID: \"3ff8a3aa-2a57-46f2-b0a9-bdc7d494e6e2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-v7gsb" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.760670 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16642695-5006-4f7d-829c-becc9345dd6e-service-ca-bundle\") pod \"router-default-5444994796-vnrxg\" (UID: \"16642695-5006-4f7d-829c-becc9345dd6e\") " pod="openshift-ingress/router-default-5444994796-vnrxg" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.760689 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ce973cd6-320d-45f5-a09a-037560783218-encryption-config\") pod \"apiserver-76f77b778f-pxqjg\" (UID: \"ce973cd6-320d-45f5-a09a-037560783218\") " pod="openshift-apiserver/apiserver-76f77b778f-pxqjg" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.760707 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/84640629-f797-4dea-bd98-e5331b7dca5f-metrics-tls\") pod \"ingress-operator-5b745b69d9-nmhmx\" (UID: 
\"84640629-f797-4dea-bd98-e5331b7dca5f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nmhmx" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.760732 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrc7j\" (UniqueName: \"kubernetes.io/projected/f06b2b1f-1c15-47e8-a043-be84aa593218-kube-api-access-wrc7j\") pod \"machine-config-operator-74547568cd-s26xc\" (UID: \"f06b2b1f-1c15-47e8-a043-be84aa593218\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s26xc" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.760777 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cde779ab-81b0-4dc7-a6a4-82db63d46577-installation-pull-secrets\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.760796 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brsmv\" (UniqueName: \"kubernetes.io/projected/cde779ab-81b0-4dc7-a6a4-82db63d46577-kube-api-access-brsmv\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.760816 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3e55c0bf-988b-4b2e-b44b-2343b48ff9f8-profile-collector-cert\") pod \"catalog-operator-68c6474976-5zkp6\" (UID: \"3e55c0bf-988b-4b2e-b44b-2343b48ff9f8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5zkp6" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.760949 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3ea7eb1-87d5-476b-bb30-2c94421afc41-trusted-ca-bundle\") pod \"console-f9d7485db-4xgft\" (UID: \"f3ea7eb1-87d5-476b-bb30-2c94421afc41\") " pod="openshift-console/console-f9d7485db-4xgft" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.760968 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f3ea7eb1-87d5-476b-bb30-2c94421afc41-oauth-serving-cert\") pod \"console-f9d7485db-4xgft\" (UID: \"f3ea7eb1-87d5-476b-bb30-2c94421afc41\") " pod="openshift-console/console-f9d7485db-4xgft" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.760988 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/16642695-5006-4f7d-829c-becc9345dd6e-default-certificate\") pod \"router-default-5444994796-vnrxg\" (UID: \"16642695-5006-4f7d-829c-becc9345dd6e\") " pod="openshift-ingress/router-default-5444994796-vnrxg" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.761006 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/459e0439-ea70-4646-9cf7-2029f79e64b2-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-rqlrs\" (UID: \"459e0439-ea70-4646-9cf7-2029f79e64b2\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rqlrs" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.761156 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/986880f2-b964-4762-baa1-3536a9ff36e1-cert\") pod \"ingress-canary-m7czd\" (UID: \"986880f2-b964-4762-baa1-3536a9ff36e1\") " pod="openshift-ingress-canary/ingress-canary-m7czd" Jan 05 20:05:53 crc 
kubenswrapper[4754]: I0105 20:05:53.761235 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcm4v\" (UniqueName: \"kubernetes.io/projected/84640629-f797-4dea-bd98-e5331b7dca5f-kube-api-access-fcm4v\") pod \"ingress-operator-5b745b69d9-nmhmx\" (UID: \"84640629-f797-4dea-bd98-e5331b7dca5f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nmhmx" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.762800 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f3ea7eb1-87d5-476b-bb30-2c94421afc41-console-config\") pod \"console-f9d7485db-4xgft\" (UID: \"f3ea7eb1-87d5-476b-bb30-2c94421afc41\") " pod="openshift-console/console-f9d7485db-4xgft" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.762845 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce973cd6-320d-45f5-a09a-037560783218-serving-cert\") pod \"apiserver-76f77b778f-pxqjg\" (UID: \"ce973cd6-320d-45f5-a09a-037560783218\") " pod="openshift-apiserver/apiserver-76f77b778f-pxqjg" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.762935 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/03f29d3f-9221-484d-aa70-8889d57f7de1-mountpoint-dir\") pod \"csi-hostpathplugin-njtd4\" (UID: \"03f29d3f-9221-484d-aa70-8889d57f7de1\") " pod="hostpath-provisioner/csi-hostpathplugin-njtd4" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.765830 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cde779ab-81b0-4dc7-a6a4-82db63d46577-registry-tls\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.767922 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cde779ab-81b0-4dc7-a6a4-82db63d46577-installation-pull-secrets\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.775084 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cde779ab-81b0-4dc7-a6a4-82db63d46577-bound-sa-token\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.784158 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cde779ab-81b0-4dc7-a6a4-82db63d46577-trusted-ca\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.787340 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brsmv\" (UniqueName: \"kubernetes.io/projected/cde779ab-81b0-4dc7-a6a4-82db63d46577-kube-api-access-brsmv\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.864414 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") 
pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.865812 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69llf\" (UniqueName: \"kubernetes.io/projected/c5e9d216-d5aa-409f-b657-259b931ceaf5-kube-api-access-69llf\") pod \"marketplace-operator-79b997595-57nsw\" (UID: \"c5e9d216-d5aa-409f-b657-259b931ceaf5\") " pod="openshift-marketplace/marketplace-operator-79b997595-57nsw" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.865844 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsnzm\" (UniqueName: \"kubernetes.io/projected/3ff8a3aa-2a57-46f2-b0a9-bdc7d494e6e2-kube-api-access-hsnzm\") pod \"control-plane-machine-set-operator-78cbb6b69f-v7gsb\" (UID: \"3ff8a3aa-2a57-46f2-b0a9-bdc7d494e6e2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-v7gsb" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.865863 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xndzn\" (UniqueName: \"kubernetes.io/projected/03f29d3f-9221-484d-aa70-8889d57f7de1-kube-api-access-xndzn\") pod \"csi-hostpathplugin-njtd4\" (UID: \"03f29d3f-9221-484d-aa70-8889d57f7de1\") " pod="hostpath-provisioner/csi-hostpathplugin-njtd4" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.865887 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f3ea7eb1-87d5-476b-bb30-2c94421afc41-service-ca\") pod \"console-f9d7485db-4xgft\" (UID: \"f3ea7eb1-87d5-476b-bb30-2c94421afc41\") " pod="openshift-console/console-f9d7485db-4xgft" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.865904 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/2920de00-a02e-425a-b3ae-2a7056eff257-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-scx4b\" (UID: \"2920de00-a02e-425a-b3ae-2a7056eff257\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-scx4b" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.865923 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f06b2b1f-1c15-47e8-a043-be84aa593218-images\") pod \"machine-config-operator-74547568cd-s26xc\" (UID: \"f06b2b1f-1c15-47e8-a043-be84aa593218\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s26xc" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.865953 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/16642695-5006-4f7d-829c-becc9345dd6e-stats-auth\") pod \"router-default-5444994796-vnrxg\" (UID: \"16642695-5006-4f7d-829c-becc9345dd6e\") " pod="openshift-ingress/router-default-5444994796-vnrxg" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.865969 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ce973cd6-320d-45f5-a09a-037560783218-image-import-ca\") pod \"apiserver-76f77b778f-pxqjg\" (UID: \"ce973cd6-320d-45f5-a09a-037560783218\") " pod="openshift-apiserver/apiserver-76f77b778f-pxqjg" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.865983 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d2e8cb54-b9ec-4e60-897f-edf05d4cd9b9-signing-key\") pod \"service-ca-9c57cc56f-csf7z\" (UID: \"d2e8cb54-b9ec-4e60-897f-edf05d4cd9b9\") " pod="openshift-service-ca/service-ca-9c57cc56f-csf7z" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.866124 4754 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ce973cd6-320d-45f5-a09a-037560783218-audit-dir\") pod \"apiserver-76f77b778f-pxqjg\" (UID: \"ce973cd6-320d-45f5-a09a-037560783218\") " pod="openshift-apiserver/apiserver-76f77b778f-pxqjg" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.866145 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1c252969-10b1-47d5-afef-b58cb4895766-srv-cert\") pod \"olm-operator-6b444d44fb-lcnw9\" (UID: \"1c252969-10b1-47d5-afef-b58cb4895766\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lcnw9" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.866163 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/03f29d3f-9221-484d-aa70-8889d57f7de1-csi-data-dir\") pod \"csi-hostpathplugin-njtd4\" (UID: \"03f29d3f-9221-484d-aa70-8889d57f7de1\") " pod="hostpath-provisioner/csi-hostpathplugin-njtd4" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.866179 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3ff8a3aa-2a57-46f2-b0a9-bdc7d494e6e2-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-v7gsb\" (UID: \"3ff8a3aa-2a57-46f2-b0a9-bdc7d494e6e2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-v7gsb" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.866202 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16642695-5006-4f7d-829c-becc9345dd6e-service-ca-bundle\") pod \"router-default-5444994796-vnrxg\" (UID: \"16642695-5006-4f7d-829c-becc9345dd6e\") " pod="openshift-ingress/router-default-5444994796-vnrxg" Jan 05 20:05:53 
crc kubenswrapper[4754]: I0105 20:05:53.866219 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ce973cd6-320d-45f5-a09a-037560783218-encryption-config\") pod \"apiserver-76f77b778f-pxqjg\" (UID: \"ce973cd6-320d-45f5-a09a-037560783218\") " pod="openshift-apiserver/apiserver-76f77b778f-pxqjg" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.866236 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/84640629-f797-4dea-bd98-e5331b7dca5f-metrics-tls\") pod \"ingress-operator-5b745b69d9-nmhmx\" (UID: \"84640629-f797-4dea-bd98-e5331b7dca5f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nmhmx" Jan 05 20:05:53 crc kubenswrapper[4754]: E0105 20:05:53.868006 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:05:54.367989843 +0000 UTC m=+41.077173717 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.867272 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrc7j\" (UniqueName: \"kubernetes.io/projected/f06b2b1f-1c15-47e8-a043-be84aa593218-kube-api-access-wrc7j\") pod \"machine-config-operator-74547568cd-s26xc\" (UID: \"f06b2b1f-1c15-47e8-a043-be84aa593218\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s26xc" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.870182 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3e55c0bf-988b-4b2e-b44b-2343b48ff9f8-profile-collector-cert\") pod \"catalog-operator-68c6474976-5zkp6\" (UID: \"3e55c0bf-988b-4b2e-b44b-2343b48ff9f8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5zkp6" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.870261 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/16642695-5006-4f7d-829c-becc9345dd6e-default-certificate\") pod \"router-default-5444994796-vnrxg\" (UID: \"16642695-5006-4f7d-829c-becc9345dd6e\") " pod="openshift-ingress/router-default-5444994796-vnrxg" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.870316 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/459e0439-ea70-4646-9cf7-2029f79e64b2-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-rqlrs\" (UID: \"459e0439-ea70-4646-9cf7-2029f79e64b2\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rqlrs" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.870341 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3ea7eb1-87d5-476b-bb30-2c94421afc41-trusted-ca-bundle\") pod \"console-f9d7485db-4xgft\" (UID: \"f3ea7eb1-87d5-476b-bb30-2c94421afc41\") " pod="openshift-console/console-f9d7485db-4xgft" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.870473 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f3ea7eb1-87d5-476b-bb30-2c94421afc41-oauth-serving-cert\") pod \"console-f9d7485db-4xgft\" (UID: \"f3ea7eb1-87d5-476b-bb30-2c94421afc41\") " pod="openshift-console/console-f9d7485db-4xgft" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.870502 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/986880f2-b964-4762-baa1-3536a9ff36e1-cert\") pod \"ingress-canary-m7czd\" (UID: \"986880f2-b964-4762-baa1-3536a9ff36e1\") " pod="openshift-ingress-canary/ingress-canary-m7czd" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.870641 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcm4v\" (UniqueName: \"kubernetes.io/projected/84640629-f797-4dea-bd98-e5331b7dca5f-kube-api-access-fcm4v\") pod \"ingress-operator-5b745b69d9-nmhmx\" (UID: \"84640629-f797-4dea-bd98-e5331b7dca5f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nmhmx" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.870679 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/f3ea7eb1-87d5-476b-bb30-2c94421afc41-console-config\") pod \"console-f9d7485db-4xgft\" (UID: \"f3ea7eb1-87d5-476b-bb30-2c94421afc41\") " pod="openshift-console/console-f9d7485db-4xgft" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.871022 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d2e8cb54-b9ec-4e60-897f-edf05d4cd9b9-signing-key\") pod \"service-ca-9c57cc56f-csf7z\" (UID: \"d2e8cb54-b9ec-4e60-897f-edf05d4cd9b9\") " pod="openshift-service-ca/service-ca-9c57cc56f-csf7z" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.871114 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2920de00-a02e-425a-b3ae-2a7056eff257-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-scx4b\" (UID: \"2920de00-a02e-425a-b3ae-2a7056eff257\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-scx4b" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.870697 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce973cd6-320d-45f5-a09a-037560783218-serving-cert\") pod \"apiserver-76f77b778f-pxqjg\" (UID: \"ce973cd6-320d-45f5-a09a-037560783218\") " pod="openshift-apiserver/apiserver-76f77b778f-pxqjg" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.871226 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/03f29d3f-9221-484d-aa70-8889d57f7de1-mountpoint-dir\") pod \"csi-hostpathplugin-njtd4\" (UID: \"03f29d3f-9221-484d-aa70-8889d57f7de1\") " pod="hostpath-provisioner/csi-hostpathplugin-njtd4" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.871255 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" 
(UniqueName: \"kubernetes.io/secret/1c252969-10b1-47d5-afef-b58cb4895766-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lcnw9\" (UID: \"1c252969-10b1-47d5-afef-b58cb4895766\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lcnw9" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.871299 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6706045f-88d8-4afd-867a-d0560b8fb9e0-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-mzj9s\" (UID: \"6706045f-88d8-4afd-867a-d0560b8fb9e0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mzj9s" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.871317 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce973cd6-320d-45f5-a09a-037560783218-trusted-ca-bundle\") pod \"apiserver-76f77b778f-pxqjg\" (UID: \"ce973cd6-320d-45f5-a09a-037560783218\") " pod="openshift-apiserver/apiserver-76f77b778f-pxqjg" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.871333 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5e9d216-d5aa-409f-b657-259b931ceaf5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-57nsw\" (UID: \"c5e9d216-d5aa-409f-b657-259b931ceaf5\") " pod="openshift-marketplace/marketplace-operator-79b997595-57nsw" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.871352 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/03f29d3f-9221-484d-aa70-8889d57f7de1-registration-dir\") pod \"csi-hostpathplugin-njtd4\" (UID: \"03f29d3f-9221-484d-aa70-8889d57f7de1\") " pod="hostpath-provisioner/csi-hostpathplugin-njtd4" Jan 05 20:05:53 crc 
kubenswrapper[4754]: I0105 20:05:53.871754 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f3ea7eb1-87d5-476b-bb30-2c94421afc41-console-oauth-config\") pod \"console-f9d7485db-4xgft\" (UID: \"f3ea7eb1-87d5-476b-bb30-2c94421afc41\") " pod="openshift-console/console-f9d7485db-4xgft" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.871794 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e833c756-585b-44de-8ab0-ce6e72970539-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-7rbg4\" (UID: \"e833c756-585b-44de-8ab0-ce6e72970539\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7rbg4" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.871817 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f3ea7eb1-87d5-476b-bb30-2c94421afc41-console-serving-cert\") pod \"console-f9d7485db-4xgft\" (UID: \"f3ea7eb1-87d5-476b-bb30-2c94421afc41\") " pod="openshift-console/console-f9d7485db-4xgft" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.871866 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f06b2b1f-1c15-47e8-a043-be84aa593218-proxy-tls\") pod \"machine-config-operator-74547568cd-s26xc\" (UID: \"f06b2b1f-1c15-47e8-a043-be84aa593218\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s26xc" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.871891 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d0c79b4d-1027-4486-9eeb-1dc6f77f2ee0-node-bootstrap-token\") pod \"machine-config-server-5vs9t\" (UID: \"d0c79b4d-1027-4486-9eeb-1dc6f77f2ee0\") " 
pod="openshift-machine-config-operator/machine-config-server-5vs9t" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.871919 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2x9gk\" (UniqueName: \"kubernetes.io/projected/1c252969-10b1-47d5-afef-b58cb4895766-kube-api-access-2x9gk\") pod \"olm-operator-6b444d44fb-lcnw9\" (UID: \"1c252969-10b1-47d5-afef-b58cb4895766\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lcnw9" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.871947 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/03f29d3f-9221-484d-aa70-8889d57f7de1-plugins-dir\") pod \"csi-hostpathplugin-njtd4\" (UID: \"03f29d3f-9221-484d-aa70-8889d57f7de1\") " pod="hostpath-provisioner/csi-hostpathplugin-njtd4" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.871976 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6f9x\" (UniqueName: \"kubernetes.io/projected/6706045f-88d8-4afd-867a-d0560b8fb9e0-kube-api-access-g6f9x\") pod \"package-server-manager-789f6589d5-mzj9s\" (UID: \"6706045f-88d8-4afd-867a-d0560b8fb9e0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mzj9s" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.871999 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/459e0439-ea70-4646-9cf7-2029f79e64b2-ready\") pod \"cni-sysctl-allowlist-ds-rqlrs\" (UID: \"459e0439-ea70-4646-9cf7-2029f79e64b2\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rqlrs" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.872019 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2920de00-a02e-425a-b3ae-2a7056eff257-kube-api-access\") pod 
\"kube-controller-manager-operator-78b949d7b-scx4b\" (UID: \"2920de00-a02e-425a-b3ae-2a7056eff257\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-scx4b" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.872037 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d2e8cb54-b9ec-4e60-897f-edf05d4cd9b9-signing-cabundle\") pod \"service-ca-9c57cc56f-csf7z\" (UID: \"d2e8cb54-b9ec-4e60-897f-edf05d4cd9b9\") " pod="openshift-service-ca/service-ca-9c57cc56f-csf7z" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.872055 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkzkv\" (UniqueName: \"kubernetes.io/projected/f39a32ad-c337-4f04-b2ac-9a55729d7d4c-kube-api-access-pkzkv\") pod \"packageserver-d55dfcdfc-sckmz\" (UID: \"f39a32ad-c337-4f04-b2ac-9a55729d7d4c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sckmz" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.872080 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/84640629-f797-4dea-bd98-e5331b7dca5f-trusted-ca\") pod \"ingress-operator-5b745b69d9-nmhmx\" (UID: \"84640629-f797-4dea-bd98-e5331b7dca5f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nmhmx" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.872080 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ce973cd6-320d-45f5-a09a-037560783218-image-import-ca\") pod \"apiserver-76f77b778f-pxqjg\" (UID: \"ce973cd6-320d-45f5-a09a-037560783218\") " pod="openshift-apiserver/apiserver-76f77b778f-pxqjg" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.872098 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/632ecc3b-8601-4d41-a4f4-8940db9dedae-config-volume\") pod \"dns-default-9xwbp\" (UID: \"632ecc3b-8601-4d41-a4f4-8940db9dedae\") " pod="openshift-dns/dns-default-9xwbp" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.872172 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/84640629-f797-4dea-bd98-e5331b7dca5f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-nmhmx\" (UID: \"84640629-f797-4dea-bd98-e5331b7dca5f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nmhmx" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.872204 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2920de00-a02e-425a-b3ae-2a7056eff257-config\") pod \"kube-controller-manager-operator-78b949d7b-scx4b\" (UID: \"2920de00-a02e-425a-b3ae-2a7056eff257\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-scx4b" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.872228 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ce973cd6-320d-45f5-a09a-037560783218-etcd-serving-ca\") pod \"apiserver-76f77b778f-pxqjg\" (UID: \"ce973cd6-320d-45f5-a09a-037560783218\") " pod="openshift-apiserver/apiserver-76f77b778f-pxqjg" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.872267 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/16642695-5006-4f7d-829c-becc9345dd6e-metrics-certs\") pod \"router-default-5444994796-vnrxg\" (UID: \"16642695-5006-4f7d-829c-becc9345dd6e\") " pod="openshift-ingress/router-default-5444994796-vnrxg" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.872394 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/03f29d3f-9221-484d-aa70-8889d57f7de1-csi-data-dir\") pod \"csi-hostpathplugin-njtd4\" (UID: \"03f29d3f-9221-484d-aa70-8889d57f7de1\") " pod="hostpath-provisioner/csi-hostpathplugin-njtd4" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.872657 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ce973cd6-320d-45f5-a09a-037560783218-audit-dir\") pod \"apiserver-76f77b778f-pxqjg\" (UID: \"ce973cd6-320d-45f5-a09a-037560783218\") " pod="openshift-apiserver/apiserver-76f77b778f-pxqjg" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.872741 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/632ecc3b-8601-4d41-a4f4-8940db9dedae-config-volume\") pod \"dns-default-9xwbp\" (UID: \"632ecc3b-8601-4d41-a4f4-8940db9dedae\") " pod="openshift-dns/dns-default-9xwbp" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.872827 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f3ea7eb1-87d5-476b-bb30-2c94421afc41-service-ca\") pod \"console-f9d7485db-4xgft\" (UID: \"f3ea7eb1-87d5-476b-bb30-2c94421afc41\") " pod="openshift-console/console-f9d7485db-4xgft" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.873012 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f06b2b1f-1c15-47e8-a043-be84aa593218-images\") pod \"machine-config-operator-74547568cd-s26xc\" (UID: \"f06b2b1f-1c15-47e8-a043-be84aa593218\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s26xc" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.873274 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/f3ea7eb1-87d5-476b-bb30-2c94421afc41-trusted-ca-bundle\") pod \"console-f9d7485db-4xgft\" (UID: \"f3ea7eb1-87d5-476b-bb30-2c94421afc41\") " pod="openshift-console/console-f9d7485db-4xgft" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.873377 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2920de00-a02e-425a-b3ae-2a7056eff257-config\") pod \"kube-controller-manager-operator-78b949d7b-scx4b\" (UID: \"2920de00-a02e-425a-b3ae-2a7056eff257\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-scx4b" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.873443 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f3ea7eb1-87d5-476b-bb30-2c94421afc41-oauth-serving-cert\") pod \"console-f9d7485db-4xgft\" (UID: \"f3ea7eb1-87d5-476b-bb30-2c94421afc41\") " pod="openshift-console/console-f9d7485db-4xgft" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.874393 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f39a32ad-c337-4f04-b2ac-9a55729d7d4c-tmpfs\") pod \"packageserver-d55dfcdfc-sckmz\" (UID: \"f39a32ad-c337-4f04-b2ac-9a55729d7d4c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sckmz" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.874455 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv7sj\" (UniqueName: \"kubernetes.io/projected/ce973cd6-320d-45f5-a09a-037560783218-kube-api-access-kv7sj\") pod \"apiserver-76f77b778f-pxqjg\" (UID: \"ce973cd6-320d-45f5-a09a-037560783218\") " pod="openshift-apiserver/apiserver-76f77b778f-pxqjg" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.874489 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-vc7wt\" (UniqueName: \"kubernetes.io/projected/f3ea7eb1-87d5-476b-bb30-2c94421afc41-kube-api-access-vc7wt\") pod \"console-f9d7485db-4xgft\" (UID: \"f3ea7eb1-87d5-476b-bb30-2c94421afc41\") " pod="openshift-console/console-f9d7485db-4xgft" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.874513 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ce973cd6-320d-45f5-a09a-037560783218-node-pullsecrets\") pod \"apiserver-76f77b778f-pxqjg\" (UID: \"ce973cd6-320d-45f5-a09a-037560783218\") " pod="openshift-apiserver/apiserver-76f77b778f-pxqjg" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.875323 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16642695-5006-4f7d-829c-becc9345dd6e-service-ca-bundle\") pod \"router-default-5444994796-vnrxg\" (UID: \"16642695-5006-4f7d-829c-becc9345dd6e\") " pod="openshift-ingress/router-default-5444994796-vnrxg" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.875324 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1c252969-10b1-47d5-afef-b58cb4895766-srv-cert\") pod \"olm-operator-6b444d44fb-lcnw9\" (UID: \"1c252969-10b1-47d5-afef-b58cb4895766\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lcnw9" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.874536 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqbtb\" (UniqueName: \"kubernetes.io/projected/d2e8cb54-b9ec-4e60-897f-edf05d4cd9b9-kube-api-access-gqbtb\") pod \"service-ca-9c57cc56f-csf7z\" (UID: \"d2e8cb54-b9ec-4e60-897f-edf05d4cd9b9\") " pod="openshift-service-ca/service-ca-9c57cc56f-csf7z" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.875734 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-hgz29\" (UniqueName: \"kubernetes.io/projected/986880f2-b964-4762-baa1-3536a9ff36e1-kube-api-access-hgz29\") pod \"ingress-canary-m7czd\" (UID: \"986880f2-b964-4762-baa1-3536a9ff36e1\") " pod="openshift-ingress-canary/ingress-canary-m7czd" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.875765 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f39a32ad-c337-4f04-b2ac-9a55729d7d4c-webhook-cert\") pod \"packageserver-d55dfcdfc-sckmz\" (UID: \"f39a32ad-c337-4f04-b2ac-9a55729d7d4c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sckmz" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.875830 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f82gs\" (UniqueName: \"kubernetes.io/projected/e833c756-585b-44de-8ab0-ce6e72970539-kube-api-access-f82gs\") pod \"multus-admission-controller-857f4d67dd-7rbg4\" (UID: \"e833c756-585b-44de-8ab0-ce6e72970539\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7rbg4" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.875880 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.875905 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/632ecc3b-8601-4d41-a4f4-8940db9dedae-metrics-tls\") pod \"dns-default-9xwbp\" (UID: \"632ecc3b-8601-4d41-a4f4-8940db9dedae\") " pod="openshift-dns/dns-default-9xwbp" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.875931 4754 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ce973cd6-320d-45f5-a09a-037560783218-etcd-client\") pod \"apiserver-76f77b778f-pxqjg\" (UID: \"ce973cd6-320d-45f5-a09a-037560783218\") " pod="openshift-apiserver/apiserver-76f77b778f-pxqjg" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.875949 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3ff8a3aa-2a57-46f2-b0a9-bdc7d494e6e2-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-v7gsb\" (UID: \"3ff8a3aa-2a57-46f2-b0a9-bdc7d494e6e2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-v7gsb" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.875956 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dn4gw\" (UniqueName: \"kubernetes.io/projected/632ecc3b-8601-4d41-a4f4-8940db9dedae-kube-api-access-dn4gw\") pod \"dns-default-9xwbp\" (UID: \"632ecc3b-8601-4d41-a4f4-8940db9dedae\") " pod="openshift-dns/dns-default-9xwbp" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.887913 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/03f29d3f-9221-484d-aa70-8889d57f7de1-socket-dir\") pod \"csi-hostpathplugin-njtd4\" (UID: \"03f29d3f-9221-484d-aa70-8889d57f7de1\") " pod="hostpath-provisioner/csi-hostpathplugin-njtd4" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.887969 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9w6cc\" (UniqueName: \"kubernetes.io/projected/d0c79b4d-1027-4486-9eeb-1dc6f77f2ee0-kube-api-access-9w6cc\") pod \"machine-config-server-5vs9t\" (UID: \"d0c79b4d-1027-4486-9eeb-1dc6f77f2ee0\") " pod="openshift-machine-config-operator/machine-config-server-5vs9t" Jan 
05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.888008 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2hsz\" (UniqueName: \"kubernetes.io/projected/3e55c0bf-988b-4b2e-b44b-2343b48ff9f8-kube-api-access-b2hsz\") pod \"catalog-operator-68c6474976-5zkp6\" (UID: \"3e55c0bf-988b-4b2e-b44b-2343b48ff9f8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5zkp6" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.888035 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f06b2b1f-1c15-47e8-a043-be84aa593218-auth-proxy-config\") pod \"machine-config-operator-74547568cd-s26xc\" (UID: \"f06b2b1f-1c15-47e8-a043-be84aa593218\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s26xc" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.888061 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pt8w\" (UniqueName: \"kubernetes.io/projected/8cdc6a43-0c41-4fe2-939b-805550abecd2-kube-api-access-9pt8w\") pod \"migrator-59844c95c7-5gjwq\" (UID: \"8cdc6a43-0c41-4fe2-939b-805550abecd2\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5gjwq" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.888091 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f39a32ad-c337-4f04-b2ac-9a55729d7d4c-apiservice-cert\") pod \"packageserver-d55dfcdfc-sckmz\" (UID: \"f39a32ad-c337-4f04-b2ac-9a55729d7d4c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sckmz" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.888125 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3e55c0bf-988b-4b2e-b44b-2343b48ff9f8-srv-cert\") pod 
\"catalog-operator-68c6474976-5zkp6\" (UID: \"3e55c0bf-988b-4b2e-b44b-2343b48ff9f8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5zkp6" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.888151 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlhn2\" (UniqueName: \"kubernetes.io/projected/16642695-5006-4f7d-829c-becc9345dd6e-kube-api-access-mlhn2\") pod \"router-default-5444994796-vnrxg\" (UID: \"16642695-5006-4f7d-829c-becc9345dd6e\") " pod="openshift-ingress/router-default-5444994796-vnrxg" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.888180 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c5e9d216-d5aa-409f-b657-259b931ceaf5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-57nsw\" (UID: \"c5e9d216-d5aa-409f-b657-259b931ceaf5\") " pod="openshift-marketplace/marketplace-operator-79b997595-57nsw" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.888204 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/459e0439-ea70-4646-9cf7-2029f79e64b2-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-rqlrs\" (UID: \"459e0439-ea70-4646-9cf7-2029f79e64b2\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rqlrs" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.888228 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d0c79b4d-1027-4486-9eeb-1dc6f77f2ee0-certs\") pod \"machine-config-server-5vs9t\" (UID: \"d0c79b4d-1027-4486-9eeb-1dc6f77f2ee0\") " pod="openshift-machine-config-operator/machine-config-server-5vs9t" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.888272 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ce973cd6-320d-45f5-a09a-037560783218-config\") pod \"apiserver-76f77b778f-pxqjg\" (UID: \"ce973cd6-320d-45f5-a09a-037560783218\") " pod="openshift-apiserver/apiserver-76f77b778f-pxqjg" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.888338 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ce973cd6-320d-45f5-a09a-037560783218-audit\") pod \"apiserver-76f77b778f-pxqjg\" (UID: \"ce973cd6-320d-45f5-a09a-037560783218\") " pod="openshift-apiserver/apiserver-76f77b778f-pxqjg" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.888358 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x49d6\" (UniqueName: \"kubernetes.io/projected/459e0439-ea70-4646-9cf7-2029f79e64b2-kube-api-access-x49d6\") pod \"cni-sysctl-allowlist-ds-rqlrs\" (UID: \"459e0439-ea70-4646-9cf7-2029f79e64b2\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rqlrs" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.888521 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e833c756-585b-44de-8ab0-ce6e72970539-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-7rbg4\" (UID: \"e833c756-585b-44de-8ab0-ce6e72970539\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7rbg4" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.891634 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d2e8cb54-b9ec-4e60-897f-edf05d4cd9b9-signing-cabundle\") pod \"service-ca-9c57cc56f-csf7z\" (UID: \"d2e8cb54-b9ec-4e60-897f-edf05d4cd9b9\") " pod="openshift-service-ca/service-ca-9c57cc56f-csf7z" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.891865 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/ce973cd6-320d-45f5-a09a-037560783218-node-pullsecrets\") pod \"apiserver-76f77b778f-pxqjg\" (UID: \"ce973cd6-320d-45f5-a09a-037560783218\") " pod="openshift-apiserver/apiserver-76f77b778f-pxqjg" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.891908 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f39a32ad-c337-4f04-b2ac-9a55729d7d4c-tmpfs\") pod \"packageserver-d55dfcdfc-sckmz\" (UID: \"f39a32ad-c337-4f04-b2ac-9a55729d7d4c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sckmz" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.892157 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/459e0439-ea70-4646-9cf7-2029f79e64b2-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-rqlrs\" (UID: \"459e0439-ea70-4646-9cf7-2029f79e64b2\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rqlrs" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.888968 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/03f29d3f-9221-484d-aa70-8889d57f7de1-socket-dir\") pod \"csi-hostpathplugin-njtd4\" (UID: \"03f29d3f-9221-484d-aa70-8889d57f7de1\") " pod="hostpath-provisioner/csi-hostpathplugin-njtd4" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.892337 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/459e0439-ea70-4646-9cf7-2029f79e64b2-ready\") pod \"cni-sysctl-allowlist-ds-rqlrs\" (UID: \"459e0439-ea70-4646-9cf7-2029f79e64b2\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rqlrs" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.894843 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ce973cd6-320d-45f5-a09a-037560783218-audit\") pod 
\"apiserver-76f77b778f-pxqjg\" (UID: \"ce973cd6-320d-45f5-a09a-037560783218\") " pod="openshift-apiserver/apiserver-76f77b778f-pxqjg" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.896257 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c5e9d216-d5aa-409f-b657-259b931ceaf5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-57nsw\" (UID: \"c5e9d216-d5aa-409f-b657-259b931ceaf5\") " pod="openshift-marketplace/marketplace-operator-79b997595-57nsw" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.898939 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f39a32ad-c337-4f04-b2ac-9a55729d7d4c-apiservice-cert\") pod \"packageserver-d55dfcdfc-sckmz\" (UID: \"f39a32ad-c337-4f04-b2ac-9a55729d7d4c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sckmz" Jan 05 20:05:53 crc kubenswrapper[4754]: E0105 20:05:53.899368 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:05:54.399341278 +0000 UTC m=+41.108525152 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.900338 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d0c79b4d-1027-4486-9eeb-1dc6f77f2ee0-node-bootstrap-token\") pod \"machine-config-server-5vs9t\" (UID: \"d0c79b4d-1027-4486-9eeb-1dc6f77f2ee0\") " pod="openshift-machine-config-operator/machine-config-server-5vs9t" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.901046 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/459e0439-ea70-4646-9cf7-2029f79e64b2-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-rqlrs\" (UID: \"459e0439-ea70-4646-9cf7-2029f79e64b2\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rqlrs" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.901547 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3e55c0bf-988b-4b2e-b44b-2343b48ff9f8-srv-cert\") pod \"catalog-operator-68c6474976-5zkp6\" (UID: \"3e55c0bf-988b-4b2e-b44b-2343b48ff9f8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5zkp6" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.903875 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ce973cd6-320d-45f5-a09a-037560783218-etcd-client\") pod \"apiserver-76f77b778f-pxqjg\" (UID: \"ce973cd6-320d-45f5-a09a-037560783218\") " 
pod="openshift-apiserver/apiserver-76f77b778f-pxqjg" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.903999 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d0c79b4d-1027-4486-9eeb-1dc6f77f2ee0-certs\") pod \"machine-config-server-5vs9t\" (UID: \"d0c79b4d-1027-4486-9eeb-1dc6f77f2ee0\") " pod="openshift-machine-config-operator/machine-config-server-5vs9t" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.904434 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/16642695-5006-4f7d-829c-becc9345dd6e-stats-auth\") pod \"router-default-5444994796-vnrxg\" (UID: \"16642695-5006-4f7d-829c-becc9345dd6e\") " pod="openshift-ingress/router-default-5444994796-vnrxg" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.904849 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce973cd6-320d-45f5-a09a-037560783218-config\") pod \"apiserver-76f77b778f-pxqjg\" (UID: \"ce973cd6-320d-45f5-a09a-037560783218\") " pod="openshift-apiserver/apiserver-76f77b778f-pxqjg" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.905178 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ce973cd6-320d-45f5-a09a-037560783218-encryption-config\") pod \"apiserver-76f77b778f-pxqjg\" (UID: \"ce973cd6-320d-45f5-a09a-037560783218\") " pod="openshift-apiserver/apiserver-76f77b778f-pxqjg" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.907092 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f3ea7eb1-87d5-476b-bb30-2c94421afc41-console-serving-cert\") pod \"console-f9d7485db-4xgft\" (UID: \"f3ea7eb1-87d5-476b-bb30-2c94421afc41\") " pod="openshift-console/console-f9d7485db-4xgft" Jan 05 20:05:53 crc 
kubenswrapper[4754]: I0105 20:05:53.907129 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3e55c0bf-988b-4b2e-b44b-2343b48ff9f8-profile-collector-cert\") pod \"catalog-operator-68c6474976-5zkp6\" (UID: \"3e55c0bf-988b-4b2e-b44b-2343b48ff9f8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5zkp6" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.907474 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f3ea7eb1-87d5-476b-bb30-2c94421afc41-console-oauth-config\") pod \"console-f9d7485db-4xgft\" (UID: \"f3ea7eb1-87d5-476b-bb30-2c94421afc41\") " pod="openshift-console/console-f9d7485db-4xgft" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.908072 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/03f29d3f-9221-484d-aa70-8889d57f7de1-registration-dir\") pod \"csi-hostpathplugin-njtd4\" (UID: \"03f29d3f-9221-484d-aa70-8889d57f7de1\") " pod="hostpath-provisioner/csi-hostpathplugin-njtd4" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.908280 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce973cd6-320d-45f5-a09a-037560783218-trusted-ca-bundle\") pod \"apiserver-76f77b778f-pxqjg\" (UID: \"ce973cd6-320d-45f5-a09a-037560783218\") " pod="openshift-apiserver/apiserver-76f77b778f-pxqjg" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.908909 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f3ea7eb1-87d5-476b-bb30-2c94421afc41-console-config\") pod \"console-f9d7485db-4xgft\" (UID: \"f3ea7eb1-87d5-476b-bb30-2c94421afc41\") " pod="openshift-console/console-f9d7485db-4xgft" Jan 05 20:05:53 crc kubenswrapper[4754]: 
I0105 20:05:53.908940 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/03f29d3f-9221-484d-aa70-8889d57f7de1-plugins-dir\") pod \"csi-hostpathplugin-njtd4\" (UID: \"03f29d3f-9221-484d-aa70-8889d57f7de1\") " pod="hostpath-provisioner/csi-hostpathplugin-njtd4" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.911797 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/84640629-f797-4dea-bd98-e5331b7dca5f-trusted-ca\") pod \"ingress-operator-5b745b69d9-nmhmx\" (UID: \"84640629-f797-4dea-bd98-e5331b7dca5f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nmhmx" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.912226 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f06b2b1f-1c15-47e8-a043-be84aa593218-auth-proxy-config\") pod \"machine-config-operator-74547568cd-s26xc\" (UID: \"f06b2b1f-1c15-47e8-a043-be84aa593218\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s26xc" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.888994 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ce973cd6-320d-45f5-a09a-037560783218-etcd-serving-ca\") pod \"apiserver-76f77b778f-pxqjg\" (UID: \"ce973cd6-320d-45f5-a09a-037560783218\") " pod="openshift-apiserver/apiserver-76f77b778f-pxqjg" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.914596 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/03f29d3f-9221-484d-aa70-8889d57f7de1-mountpoint-dir\") pod \"csi-hostpathplugin-njtd4\" (UID: \"03f29d3f-9221-484d-aa70-8889d57f7de1\") " pod="hostpath-provisioner/csi-hostpathplugin-njtd4" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 
20:05:53.916216 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/986880f2-b964-4762-baa1-3536a9ff36e1-cert\") pod \"ingress-canary-m7czd\" (UID: \"986880f2-b964-4762-baa1-3536a9ff36e1\") " pod="openshift-ingress-canary/ingress-canary-m7czd" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.916902 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5e9d216-d5aa-409f-b657-259b931ceaf5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-57nsw\" (UID: \"c5e9d216-d5aa-409f-b657-259b931ceaf5\") " pod="openshift-marketplace/marketplace-operator-79b997595-57nsw" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.922745 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/84640629-f797-4dea-bd98-e5331b7dca5f-metrics-tls\") pod \"ingress-operator-5b745b69d9-nmhmx\" (UID: \"84640629-f797-4dea-bd98-e5331b7dca5f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nmhmx" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.923208 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f39a32ad-c337-4f04-b2ac-9a55729d7d4c-webhook-cert\") pod \"packageserver-d55dfcdfc-sckmz\" (UID: \"f39a32ad-c337-4f04-b2ac-9a55729d7d4c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sckmz" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.924111 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/632ecc3b-8601-4d41-a4f4-8940db9dedae-metrics-tls\") pod \"dns-default-9xwbp\" (UID: \"632ecc3b-8601-4d41-a4f4-8940db9dedae\") " pod="openshift-dns/dns-default-9xwbp" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.931032 4754 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fsmbh"] Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.938984 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1c252969-10b1-47d5-afef-b58cb4895766-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lcnw9\" (UID: \"1c252969-10b1-47d5-afef-b58cb4895766\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lcnw9" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.939687 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqbtb\" (UniqueName: \"kubernetes.io/projected/d2e8cb54-b9ec-4e60-897f-edf05d4cd9b9-kube-api-access-gqbtb\") pod \"service-ca-9c57cc56f-csf7z\" (UID: \"d2e8cb54-b9ec-4e60-897f-edf05d4cd9b9\") " pod="openshift-service-ca/service-ca-9c57cc56f-csf7z" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.940516 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69llf\" (UniqueName: \"kubernetes.io/projected/c5e9d216-d5aa-409f-b657-259b931ceaf5-kube-api-access-69llf\") pod \"marketplace-operator-79b997595-57nsw\" (UID: \"c5e9d216-d5aa-409f-b657-259b931ceaf5\") " pod="openshift-marketplace/marketplace-operator-79b997595-57nsw" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.940881 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce973cd6-320d-45f5-a09a-037560783218-serving-cert\") pod \"apiserver-76f77b778f-pxqjg\" (UID: \"ce973cd6-320d-45f5-a09a-037560783218\") " pod="openshift-apiserver/apiserver-76f77b778f-pxqjg" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.941605 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsnzm\" (UniqueName: 
\"kubernetes.io/projected/3ff8a3aa-2a57-46f2-b0a9-bdc7d494e6e2-kube-api-access-hsnzm\") pod \"control-plane-machine-set-operator-78cbb6b69f-v7gsb\" (UID: \"3ff8a3aa-2a57-46f2-b0a9-bdc7d494e6e2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-v7gsb" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.948787 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/16642695-5006-4f7d-829c-becc9345dd6e-default-certificate\") pod \"router-default-5444994796-vnrxg\" (UID: \"16642695-5006-4f7d-829c-becc9345dd6e\") " pod="openshift-ingress/router-default-5444994796-vnrxg" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.949511 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f06b2b1f-1c15-47e8-a043-be84aa593218-proxy-tls\") pod \"machine-config-operator-74547568cd-s26xc\" (UID: \"f06b2b1f-1c15-47e8-a043-be84aa593218\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s26xc" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.949772 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6706045f-88d8-4afd-867a-d0560b8fb9e0-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-mzj9s\" (UID: \"6706045f-88d8-4afd-867a-d0560b8fb9e0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mzj9s" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.950637 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f82gs\" (UniqueName: \"kubernetes.io/projected/e833c756-585b-44de-8ab0-ce6e72970539-kube-api-access-f82gs\") pod \"multus-admission-controller-857f4d67dd-7rbg4\" (UID: \"e833c756-585b-44de-8ab0-ce6e72970539\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7rbg4" 
Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.950904 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x49d6\" (UniqueName: \"kubernetes.io/projected/459e0439-ea70-4646-9cf7-2029f79e64b2-kube-api-access-x49d6\") pod \"cni-sysctl-allowlist-ds-rqlrs\" (UID: \"459e0439-ea70-4646-9cf7-2029f79e64b2\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rqlrs" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.951658 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6f9x\" (UniqueName: \"kubernetes.io/projected/6706045f-88d8-4afd-867a-d0560b8fb9e0-kube-api-access-g6f9x\") pod \"package-server-manager-789f6589d5-mzj9s\" (UID: \"6706045f-88d8-4afd-867a-d0560b8fb9e0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mzj9s" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.956269 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2x9gk\" (UniqueName: \"kubernetes.io/projected/1c252969-10b1-47d5-afef-b58cb4895766-kube-api-access-2x9gk\") pod \"olm-operator-6b444d44fb-lcnw9\" (UID: \"1c252969-10b1-47d5-afef-b58cb4895766\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lcnw9" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.957079 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrc7j\" (UniqueName: \"kubernetes.io/projected/f06b2b1f-1c15-47e8-a043-be84aa593218-kube-api-access-wrc7j\") pod \"machine-config-operator-74547568cd-s26xc\" (UID: \"f06b2b1f-1c15-47e8-a043-be84aa593218\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s26xc" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.961564 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv7sj\" (UniqueName: \"kubernetes.io/projected/ce973cd6-320d-45f5-a09a-037560783218-kube-api-access-kv7sj\") pod 
\"apiserver-76f77b778f-pxqjg\" (UID: \"ce973cd6-320d-45f5-a09a-037560783218\") " pod="openshift-apiserver/apiserver-76f77b778f-pxqjg" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.969152 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2hsz\" (UniqueName: \"kubernetes.io/projected/3e55c0bf-988b-4b2e-b44b-2343b48ff9f8-kube-api-access-b2hsz\") pod \"catalog-operator-68c6474976-5zkp6\" (UID: \"3e55c0bf-988b-4b2e-b44b-2343b48ff9f8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5zkp6" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.972324 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/16642695-5006-4f7d-829c-becc9345dd6e-metrics-certs\") pod \"router-default-5444994796-vnrxg\" (UID: \"16642695-5006-4f7d-829c-becc9345dd6e\") " pod="openshift-ingress/router-default-5444994796-vnrxg" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.972774 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/84640629-f797-4dea-bd98-e5331b7dca5f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-nmhmx\" (UID: \"84640629-f797-4dea-bd98-e5331b7dca5f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nmhmx" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.972345 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xndzn\" (UniqueName: \"kubernetes.io/projected/03f29d3f-9221-484d-aa70-8889d57f7de1-kube-api-access-xndzn\") pod \"csi-hostpathplugin-njtd4\" (UID: \"03f29d3f-9221-484d-aa70-8889d57f7de1\") " pod="hostpath-provisioner/csi-hostpathplugin-njtd4" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.973363 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vc7wt\" (UniqueName: 
\"kubernetes.io/projected/f3ea7eb1-87d5-476b-bb30-2c94421afc41-kube-api-access-vc7wt\") pod \"console-f9d7485db-4xgft\" (UID: \"f3ea7eb1-87d5-476b-bb30-2c94421afc41\") " pod="openshift-console/console-f9d7485db-4xgft" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.973442 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w6cc\" (UniqueName: \"kubernetes.io/projected/d0c79b4d-1027-4486-9eeb-1dc6f77f2ee0-kube-api-access-9w6cc\") pod \"machine-config-server-5vs9t\" (UID: \"d0c79b4d-1027-4486-9eeb-1dc6f77f2ee0\") " pod="openshift-machine-config-operator/machine-config-server-5vs9t" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.973699 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dn4gw\" (UniqueName: \"kubernetes.io/projected/632ecc3b-8601-4d41-a4f4-8940db9dedae-kube-api-access-dn4gw\") pod \"dns-default-9xwbp\" (UID: \"632ecc3b-8601-4d41-a4f4-8940db9dedae\") " pod="openshift-dns/dns-default-9xwbp" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.973899 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgz29\" (UniqueName: \"kubernetes.io/projected/986880f2-b964-4762-baa1-3536a9ff36e1-kube-api-access-hgz29\") pod \"ingress-canary-m7czd\" (UID: \"986880f2-b964-4762-baa1-3536a9ff36e1\") " pod="openshift-ingress-canary/ingress-canary-m7czd" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.977399 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkzkv\" (UniqueName: \"kubernetes.io/projected/f39a32ad-c337-4f04-b2ac-9a55729d7d4c-kube-api-access-pkzkv\") pod \"packageserver-d55dfcdfc-sckmz\" (UID: \"f39a32ad-c337-4f04-b2ac-9a55729d7d4c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sckmz" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.979567 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pt8w\" 
(UniqueName: \"kubernetes.io/projected/8cdc6a43-0c41-4fe2-939b-805550abecd2-kube-api-access-9pt8w\") pod \"migrator-59844c95c7-5gjwq\" (UID: \"8cdc6a43-0c41-4fe2-939b-805550abecd2\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5gjwq" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.980577 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlhn2\" (UniqueName: \"kubernetes.io/projected/16642695-5006-4f7d-829c-becc9345dd6e-kube-api-access-mlhn2\") pod \"router-default-5444994796-vnrxg\" (UID: \"16642695-5006-4f7d-829c-becc9345dd6e\") " pod="openshift-ingress/router-default-5444994796-vnrxg" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.988278 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2920de00-a02e-425a-b3ae-2a7056eff257-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-scx4b\" (UID: \"2920de00-a02e-425a-b3ae-2a7056eff257\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-scx4b" Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.989598 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:05:53 crc kubenswrapper[4754]: E0105 20:05:53.990010 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:05:54.489979596 +0000 UTC m=+41.199163470 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:53 crc kubenswrapper[4754]: I0105 20:05:53.990412 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:05:53 crc kubenswrapper[4754]: E0105 20:05:53.990914 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:05:54.490906399 +0000 UTC m=+41.200090273 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.011055 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lcnw9" Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.013659 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcm4v\" (UniqueName: \"kubernetes.io/projected/84640629-f797-4dea-bd98-e5331b7dca5f-kube-api-access-fcm4v\") pod \"ingress-operator-5b745b69d9-nmhmx\" (UID: \"84640629-f797-4dea-bd98-e5331b7dca5f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nmhmx" Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.019164 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5zkp6" Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.023630 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8j8j2" podStartSLOduration=21.023610189 podStartE2EDuration="21.023610189s" podCreationTimestamp="2026-01-05 20:05:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:05:54.016807392 +0000 UTC m=+40.725991266" watchObservedRunningTime="2026-01-05 20:05:54.023610189 +0000 UTC m=+40.732794063" Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.027102 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-57nsw" Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.039859 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sckmz" Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.047905 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5gjwq" Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.050084 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-4rsd7"] Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.055617 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mzj9s" Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.061097 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-csf7z" Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.068827 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-v7gsb" Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.076536 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-7rbg4" Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.082042 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-pxqjg" Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.088426 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-rqlrs" Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.091156 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:05:54 crc kubenswrapper[4754]: E0105 20:05:54.091423 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:05:54.591372552 +0000 UTC m=+41.300556426 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.091459 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:05:54 crc kubenswrapper[4754]: E0105 20:05:54.092073 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-01-05 20:05:54.59206629 +0000 UTC m=+41.301250154 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.097622 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-9xwbp" Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.100606 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-m7czd" Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.109320 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-dx842"] Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.109705 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-5vs9t" Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.113301 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-w7nk2"] Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.119900 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-4xgft" Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.130382 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-njtd4" Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.193040 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:05:54 crc kubenswrapper[4754]: E0105 20:05:54.193500 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:05:54.693473347 +0000 UTC m=+41.402657221 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.193748 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:05:54 crc kubenswrapper[4754]: E0105 20:05:54.194386 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-01-05 20:05:54.694369361 +0000 UTC m=+41.403553235 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.224075 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-vnrxg" Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.242375 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s26xc" Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.274029 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-scx4b" Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.294950 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:05:54 crc kubenswrapper[4754]: E0105 20:05:54.295176 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-05 20:05:54.795128971 +0000 UTC m=+41.504312845 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.295335 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:05:54 crc kubenswrapper[4754]: E0105 20:05:54.295581 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:05:54.795570313 +0000 UTC m=+41.504754187 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.304010 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nmhmx" Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.369853 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-v7wtk"] Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.371678 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-9vzl4"] Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.396475 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:05:54 crc kubenswrapper[4754]: E0105 20:05:54.398110 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:05:54.897623577 +0000 UTC m=+41.606807581 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.416490 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-mhcqd"] Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.431360 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-nzhrk"] Jan 05 20:05:54 crc kubenswrapper[4754]: W0105 20:05:54.453243 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d70b68b_24e2_45e2_91ca_8e1b88805b1a.slice/crio-cbeb69b382181e40742b9d0dd3735d9a4bb323c7d1149a409055daf7a793a70c WatchSource:0}: Error finding container cbeb69b382181e40742b9d0dd3735d9a4bb323c7d1149a409055daf7a793a70c: Status 404 returned error can't find the container with id cbeb69b382181e40742b9d0dd3735d9a4bb323c7d1149a409055daf7a793a70c Jan 05 20:05:54 crc kubenswrapper[4754]: W0105 20:05:54.458115 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2dc37596_5879_4c76_be1b_5c95376cf1f2.slice/crio-94b5ff23d8fff550dd6eb6b15936f27e485ec20158ba7b8bbfb273fcaea49ce5 WatchSource:0}: Error finding container 94b5ff23d8fff550dd6eb6b15936f27e485ec20158ba7b8bbfb273fcaea49ce5: Status 404 returned error can't find the container with id 94b5ff23d8fff550dd6eb6b15936f27e485ec20158ba7b8bbfb273fcaea49ce5 Jan 05 20:05:54 crc kubenswrapper[4754]: W0105 20:05:54.461424 4754 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd4959ff_bb15_47bd_ae6c_1b0e53fc84f1.slice/crio-d915d9fd06b8012833d0de4baecfa6c855bd31b28f9deb1309679cdb71d70485 WatchSource:0}: Error finding container d915d9fd06b8012833d0de4baecfa6c855bd31b28f9deb1309679cdb71d70485: Status 404 returned error can't find the container with id d915d9fd06b8012833d0de4baecfa6c855bd31b28f9deb1309679cdb71d70485 Jan 05 20:05:54 crc kubenswrapper[4754]: W0105 20:05:54.462717 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod743b4687_81ef_4cd4_9910_dc7ba348e457.slice/crio-b78467dbeabe7c86b682f85a2a1dc8b67c3853a42628219b51828d1ce0e183a7 WatchSource:0}: Error finding container b78467dbeabe7c86b682f85a2a1dc8b67c3853a42628219b51828d1ce0e183a7: Status 404 returned error can't find the container with id b78467dbeabe7c86b682f85a2a1dc8b67c3853a42628219b51828d1ce0e183a7 Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.498864 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:05:54 crc kubenswrapper[4754]: E0105 20:05:54.499601 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:05:54.999589269 +0000 UTC m=+41.708773143 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.525141 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dzs5t"] Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.527613 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-ql695"] Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.529358 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460720-k78dc"] Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.599964 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:05:54 crc kubenswrapper[4754]: E0105 20:05:54.600142 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:05:55.100118184 +0000 UTC m=+41.809302078 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.600307 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:05:54 crc kubenswrapper[4754]: E0105 20:05:54.600701 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:05:55.100691489 +0000 UTC m=+41.809875373 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:54 crc kubenswrapper[4754]: W0105 20:05:54.618818 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod296f71e4_2a83_467e_b40a_87b1e40330b9.slice/crio-3f91f0cfecd4b7412f707a3024fe51bfbd15dbda09636feeb6687e258abb612d WatchSource:0}: Error finding container 3f91f0cfecd4b7412f707a3024fe51bfbd15dbda09636feeb6687e258abb612d: Status 404 returned error can't find the container with id 3f91f0cfecd4b7412f707a3024fe51bfbd15dbda09636feeb6687e258abb612d Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.648969 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk" event={"ID":"701a029a-d767-4681-8bf3-dffdc73e93f5","Type":"ContainerStarted","Data":"065fb1a70dd2387066189414d60c36579d974d98ca4a9682e8c82decea3ee09e"} Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.650502 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fsmbh" event={"ID":"51e67cd8-254d-4921-9140-64a80c3d3690","Type":"ContainerStarted","Data":"bbe92dc0b1f97a58567015b3b55edaacee47971a48779282575b2780792c50d0"} Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.657645 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mhcqd" 
event={"ID":"0881b9ae-bc95-42f6-8dd3-73a86569dde7","Type":"ContainerStarted","Data":"94d0394311d5be2da129ffd03404fb22facba538b4e008df3b61694aa4fe7365"} Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.658790 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql695" event={"ID":"296f71e4-2a83-467e-b40a-87b1e40330b9","Type":"ContainerStarted","Data":"3f91f0cfecd4b7412f707a3024fe51bfbd15dbda09636feeb6687e258abb612d"} Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.663065 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-dx842" event={"ID":"24911cb5-f93b-4852-adbc-d06821f34d47","Type":"ContainerStarted","Data":"6321d1a8ad678a3572eea7200cfd02f26dedb99d339faa5a200cc224c150b848"} Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.670134 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2qc8v" event={"ID":"d2a2026b-4828-435f-ba14-941e12d3ea36","Type":"ContainerStarted","Data":"30ca013faf18f9cf44fe724cf3edd3a4cca5992ba2a6b3b2c36bb096bdab12a9"} Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.673518 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-4rsd7" event={"ID":"5d70b68b-24e2-45e2-91ca-8e1b88805b1a","Type":"ContainerStarted","Data":"cbeb69b382181e40742b9d0dd3735d9a4bb323c7d1149a409055daf7a793a70c"} Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.674769 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r827v" event={"ID":"cd6ad6fb-b69a-465f-8cb6-797f1c097dcd","Type":"ContainerStarted","Data":"d7cb0f6009d0c9fa7fbbcc8a98b0e1afb4db048cb6fd4ae12ee35decf2ed25a2"} Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.675496 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/network-metrics-daemon-9vzl4" event={"ID":"fd4959ff-bb15-47bd-ae6c-1b0e53fc84f1","Type":"ContainerStarted","Data":"d915d9fd06b8012833d0de4baecfa6c855bd31b28f9deb1309679cdb71d70485"} Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.676279 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-w7nk2" event={"ID":"2dc37596-5879-4c76-be1b-5c95376cf1f2","Type":"ContainerStarted","Data":"94b5ff23d8fff550dd6eb6b15936f27e485ec20158ba7b8bbfb273fcaea49ce5"} Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.676912 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-v7wtk" event={"ID":"743b4687-81ef-4cd4-9910-dc7ba348e457","Type":"ContainerStarted","Data":"b78467dbeabe7c86b682f85a2a1dc8b67c3853a42628219b51828d1ce0e183a7"} Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.678218 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8jjnh" event={"ID":"b2c91790-e7dd-4391-a68b-f5a4a052ca72","Type":"ContainerStarted","Data":"8ac47bf136ee8278e1c380b678b84ccf158469060a858e76c06a9145c2a54ac7"} Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.678849 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dzs5t" event={"ID":"40317dbe-a59b-4fc6-938d-91a19217745d","Type":"ContainerStarted","Data":"8fad9fe99c112493cafeb2782b31ccd6de5847fc5a96a1ca919b02c5e5416d26"} Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.679440 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pjzck" event={"ID":"9f282a77-59b3-4b2c-8c62-a2526d2a77b5","Type":"ContainerStarted","Data":"ae025327b3462915a7e7b7e8eee3eac4cade465ba2df76842ba5d67a8d0391af"} Jan 05 20:05:54 crc 
kubenswrapper[4754]: I0105 20:05:54.680013 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460720-k78dc" event={"ID":"7e4b1718-5732-4498-b355-25832e158871","Type":"ContainerStarted","Data":"20a3b710e97f682be10a8ac0025099c5fc8d4cf5b5c3993873a19e7bee90f763"} Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.680905 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-q6l9h" event={"ID":"138601a3-19fd-44e5-b817-49b048fe3e88","Type":"ContainerStarted","Data":"882322618e9906225d488754e571a882da93c2f2e6f10bbe5ebcf3c387734240"} Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.682608 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-hz59l" event={"ID":"3eb6b845-fab0-4359-87bd-17a33f9e78ca","Type":"ContainerStarted","Data":"fcaba24fb44d1ecba65a2db11c94dde06fe6054c4265e788467c87ec15e3ba15"} Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.685136 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xnn4d" event={"ID":"e06065f7-b8b3-4c3e-820c-f0051f3a6f6d","Type":"ContainerStarted","Data":"888038409f8c54fd3a2766503f993f355cd1e7daab42769c3aad9e1479f717ff"} Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.687199 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-bxhwf" event={"ID":"1968df11-e45d-47a0-a3bd-5dad31d14c8c","Type":"ContainerStarted","Data":"8a02d1c65629721249055d97bbcd370c02cc94f5cf69930b7297cbdfe3d500ea"} Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.701859 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:05:54 crc kubenswrapper[4754]: E0105 20:05:54.702565 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:05:55.202550228 +0000 UTC m=+41.911734102 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.757009 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5zkp6"] Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.806318 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:05:54 crc kubenswrapper[4754]: E0105 20:05:54.806917 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:05:55.306895922 +0000 UTC m=+42.016079876 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.858904 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-v7gsb"] Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.912137 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:05:54 crc kubenswrapper[4754]: E0105 20:05:54.912384 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:05:55.412337875 +0000 UTC m=+42.121521749 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.912859 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:05:54 crc kubenswrapper[4754]: E0105 20:05:54.913264 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:05:55.413247958 +0000 UTC m=+42.122431892 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:54 crc kubenswrapper[4754]: I0105 20:05:54.998769 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-7rbg4"] Jan 05 20:05:55 crc kubenswrapper[4754]: I0105 20:05:55.014702 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:05:55 crc kubenswrapper[4754]: E0105 20:05:55.014834 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:05:55.51481587 +0000 UTC m=+42.223999744 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:55 crc kubenswrapper[4754]: I0105 20:05:55.015007 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:05:55 crc kubenswrapper[4754]: E0105 20:05:55.015401 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:05:55.515389605 +0000 UTC m=+42.224573479 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:55 crc kubenswrapper[4754]: W0105 20:05:55.107738 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16642695_5006_4f7d_829c_becc9345dd6e.slice/crio-ce3042d6f074631803111a320198af821ccdfbb6b16a6745b15dbc88d4e40a8b WatchSource:0}: Error finding container ce3042d6f074631803111a320198af821ccdfbb6b16a6745b15dbc88d4e40a8b: Status 404 returned error can't find the container with id ce3042d6f074631803111a320198af821ccdfbb6b16a6745b15dbc88d4e40a8b Jan 05 20:05:55 crc kubenswrapper[4754]: I0105 20:05:55.118032 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:05:55 crc kubenswrapper[4754]: E0105 20:05:55.120494 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:05:55.620473628 +0000 UTC m=+42.329657502 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:55 crc kubenswrapper[4754]: I0105 20:05:55.120784 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:05:55 crc kubenswrapper[4754]: E0105 20:05:55.121092 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:05:55.621085334 +0000 UTC m=+42.330269208 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:55 crc kubenswrapper[4754]: W0105 20:05:55.154126 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode833c756_585b_44de_8ab0_ce6e72970539.slice/crio-34449b3f929a7bd30e0dae3297b5ff7eca11fba29d6089616eabd425c04a205c WatchSource:0}: Error finding container 34449b3f929a7bd30e0dae3297b5ff7eca11fba29d6089616eabd425c04a205c: Status 404 returned error can't find the container with id 34449b3f929a7bd30e0dae3297b5ff7eca11fba29d6089616eabd425c04a205c Jan 05 20:05:55 crc kubenswrapper[4754]: I0105 20:05:55.192343 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-njtd4"] Jan 05 20:05:55 crc kubenswrapper[4754]: I0105 20:05:55.222104 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:05:55 crc kubenswrapper[4754]: E0105 20:05:55.222554 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:05:55.722540263 +0000 UTC m=+42.431724137 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:55 crc kubenswrapper[4754]: I0105 20:05:55.272971 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-nmhmx"] Jan 05 20:05:55 crc kubenswrapper[4754]: I0105 20:05:55.292173 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lcnw9"] Jan 05 20:05:55 crc kubenswrapper[4754]: I0105 20:05:55.324872 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-csf7z"] Jan 05 20:05:55 crc kubenswrapper[4754]: I0105 20:05:55.326628 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:05:55 crc kubenswrapper[4754]: E0105 20:05:55.326929 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:05:55.826915967 +0000 UTC m=+42.536099841 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:55 crc kubenswrapper[4754]: I0105 20:05:55.376779 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sckmz"] Jan 05 20:05:55 crc kubenswrapper[4754]: I0105 20:05:55.394249 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-57nsw"] Jan 05 20:05:55 crc kubenswrapper[4754]: I0105 20:05:55.419992 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-pxqjg"] Jan 05 20:05:55 crc kubenswrapper[4754]: I0105 20:05:55.436434 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:05:55 crc kubenswrapper[4754]: E0105 20:05:55.436786 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:05:55.936768184 +0000 UTC m=+42.645952058 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:55 crc kubenswrapper[4754]: I0105 20:05:55.547188 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:05:55 crc kubenswrapper[4754]: E0105 20:05:55.547490 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:05:56.047476584 +0000 UTC m=+42.756660458 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 05 20:05:55 crc kubenswrapper[4754]: I0105 20:05:55.649366 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 05 20:05:55 crc kubenswrapper[4754]: E0105 20:05:55.649522 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:05:56.149497007 +0000 UTC m=+42.858680881 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 05 20:05:55 crc kubenswrapper[4754]: I0105 20:05:55.649646 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr"
Jan 05 20:05:55 crc kubenswrapper[4754]: E0105 20:05:55.649942 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:05:56.149930229 +0000 UTC m=+42.859114103 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 05 20:05:55 crc kubenswrapper[4754]: I0105 20:05:55.705366 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-s26xc"]
Jan 05 20:05:55 crc kubenswrapper[4754]: I0105 20:05:55.745215 4754 generic.go:334] "Generic (PLEG): container finished" podID="9f282a77-59b3-4b2c-8c62-a2526d2a77b5" containerID="b83209faf8a3b83ebfee1c6f41e4cee1abca79ab178c3e672c1e74f67fcdfdd0" exitCode=0
Jan 05 20:05:55 crc kubenswrapper[4754]: I0105 20:05:55.745351 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pjzck" event={"ID":"9f282a77-59b3-4b2c-8c62-a2526d2a77b5","Type":"ContainerDied","Data":"b83209faf8a3b83ebfee1c6f41e4cee1abca79ab178c3e672c1e74f67fcdfdd0"}
Jan 05 20:05:55 crc kubenswrapper[4754]: I0105 20:05:55.750821 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 05 20:05:55 crc kubenswrapper[4754]: E0105 20:05:55.751250 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:05:56.251233584 +0000 UTC m=+42.960417458 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 05 20:05:55 crc kubenswrapper[4754]: I0105 20:05:55.776499 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-5vs9t" event={"ID":"d0c79b4d-1027-4486-9eeb-1dc6f77f2ee0","Type":"ContainerStarted","Data":"df52ad8b436c92e9d3e36d6e5a5e951bf3a46add47b8b2b65ba61e47a9132c0f"}
Jan 05 20:05:55 crc kubenswrapper[4754]: I0105 20:05:55.854354 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr"
Jan 05 20:05:55 crc kubenswrapper[4754]: I0105 20:05:55.857652 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qnvhg" event={"ID":"41bc6c99-ab80-4d88-841e-7472bf4ace8c","Type":"ContainerStarted","Data":"32a7b21edb8627458e78337af46e4cdfdcb5eefd717502a8a44c694d30d5dd81"}
Jan 05 20:05:55 crc kubenswrapper[4754]: E0105 20:05:55.859017 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:05:56.359003686 +0000 UTC m=+43.068187550 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 05 20:05:55 crc kubenswrapper[4754]: I0105 20:05:55.883641 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qnvhg" podStartSLOduration=21.883622676999998 podStartE2EDuration="21.883622677s" podCreationTimestamp="2026-01-05 20:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:05:55.881606344 +0000 UTC m=+42.590790228" watchObservedRunningTime="2026-01-05 20:05:55.883622677 +0000 UTC m=+42.592806551"
Jan 05 20:05:55 crc kubenswrapper[4754]: I0105 20:05:55.898382 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-srhjf" event={"ID":"fd7afa25-8cba-4be3-a6d7-1b30d7adf834","Type":"ContainerStarted","Data":"ddc76f810a461fb7bb1f305823c70324616de7ba87cb9fe6aaaa3063ef785c32"}
Jan 05 20:05:55 crc kubenswrapper[4754]: I0105 20:05:55.899665 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-srhjf"
Jan 05 20:05:55 crc kubenswrapper[4754]: I0105 20:05:55.942265 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-bxhwf" event={"ID":"1968df11-e45d-47a0-a3bd-5dad31d14c8c","Type":"ContainerStarted","Data":"557ef87cfb0a07fc6ba24382673ffc2aaf295574634e7b60c6ffb4fab6022fad"}
Jan 05 20:05:55 crc kubenswrapper[4754]: I0105 20:05:55.955169 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-bxhwf"
Jan 05 20:05:55 crc kubenswrapper[4754]: E0105 20:05:55.955120 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:05:56.455106256 +0000 UTC m=+43.164290130 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 05 20:05:55 crc kubenswrapper[4754]: I0105 20:05:55.955062 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 05 20:05:55 crc kubenswrapper[4754]: I0105 20:05:55.955565 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr"
Jan 05 20:05:55 crc kubenswrapper[4754]: E0105 20:05:55.955841 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:05:56.455833355 +0000 UTC m=+43.165017229 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 05 20:05:55 crc kubenswrapper[4754]: I0105 20:05:55.961952 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-srhjf" podStartSLOduration=22.961929203 podStartE2EDuration="22.961929203s" podCreationTimestamp="2026-01-05 20:05:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:05:55.930011773 +0000 UTC m=+42.639195647" watchObservedRunningTime="2026-01-05 20:05:55.961929203 +0000 UTC m=+42.671113077"
Jan 05 20:05:56 crc kubenswrapper[4754]: I0105 20:05:55.998985 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-bxhwf" podStartSLOduration=22.998971807 podStartE2EDuration="22.998971807s" podCreationTimestamp="2026-01-05 20:05:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:05:55.997742815 +0000 UTC m=+42.706926689" watchObservedRunningTime="2026-01-05 20:05:55.998971807 +0000 UTC m=+42.708155681"
Jan 05 20:05:56 crc kubenswrapper[4754]: I0105 20:05:55.999892 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-srhjf"
Jan 05 20:05:56 crc kubenswrapper[4754]: I0105 20:05:56.037852 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-bxhwf"
Jan 05 20:05:56 crc kubenswrapper[4754]: I0105 20:05:56.056777 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 05 20:05:56 crc kubenswrapper[4754]: E0105 20:05:56.058129 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:05:56.558113635 +0000 UTC m=+43.267297509 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 05 20:05:56 crc kubenswrapper[4754]: I0105 20:05:56.079822 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-5gjwq"]
Jan 05 20:05:56 crc kubenswrapper[4754]: I0105 20:05:56.101892 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-vnrxg" event={"ID":"16642695-5006-4f7d-829c-becc9345dd6e","Type":"ContainerStarted","Data":"ce3042d6f074631803111a320198af821ccdfbb6b16a6745b15dbc88d4e40a8b"}
Jan 05 20:05:56 crc kubenswrapper[4754]: I0105 20:05:56.151211 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-scx4b"]
Jan 05 20:05:56 crc kubenswrapper[4754]: I0105 20:05:56.158404 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr"
Jan 05 20:05:56 crc kubenswrapper[4754]: E0105 20:05:56.161067 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:05:56.661054622 +0000 UTC m=+43.370238486 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 05 20:05:56 crc kubenswrapper[4754]: I0105 20:05:56.225499 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mzj9s"]
Jan 05 20:05:56 crc kubenswrapper[4754]: I0105 20:05:56.236844 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r827v" event={"ID":"cd6ad6fb-b69a-465f-8cb6-797f1c097dcd","Type":"ContainerStarted","Data":"a15013436699d2bc9481fca813760c0acf737f8e3b4afca5a395bc3cf6a680cb"}
Jan 05 20:05:56 crc kubenswrapper[4754]: I0105 20:05:56.262614 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 05 20:05:56 crc kubenswrapper[4754]: E0105 20:05:56.262951 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:05:56.762936202 +0000 UTC m=+43.472120076 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 05 20:05:56 crc kubenswrapper[4754]: I0105 20:05:56.307639 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-q6l9h" event={"ID":"138601a3-19fd-44e5-b817-49b048fe3e88","Type":"ContainerStarted","Data":"979f4e77f939c238a31bf9f17d73edee84046368ce7e5341800827bf90cdcb7d"}
Jan 05 20:05:56 crc kubenswrapper[4754]: I0105 20:05:56.308665 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-q6l9h"
Jan 05 20:05:56 crc kubenswrapper[4754]: I0105 20:05:56.312618 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-rqlrs" event={"ID":"459e0439-ea70-4646-9cf7-2029f79e64b2","Type":"ContainerStarted","Data":"c6861fa91756fce93404ff803eb215eceb4bd1341d3ebcde0164bfad3f17e43f"}
Jan 05 20:05:56 crc kubenswrapper[4754]: I0105 20:05:56.313121 4754 patch_prober.go:28] interesting pod/downloads-7954f5f757-q6l9h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body=
Jan 05 20:05:56 crc kubenswrapper[4754]: I0105 20:05:56.313174 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-q6l9h" podUID="138601a3-19fd-44e5-b817-49b048fe3e88" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused"
Jan 05 20:05:56 crc kubenswrapper[4754]: I0105 20:05:56.332306 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-7rbg4" event={"ID":"e833c756-585b-44de-8ab0-ce6e72970539","Type":"ContainerStarted","Data":"34449b3f929a7bd30e0dae3297b5ff7eca11fba29d6089616eabd425c04a205c"}
Jan 05 20:05:56 crc kubenswrapper[4754]: I0105 20:05:56.347598 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r827v" podStartSLOduration=22.347580754 podStartE2EDuration="22.347580754s" podCreationTimestamp="2026-01-05 20:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:05:56.276416243 +0000 UTC m=+42.985600117" watchObservedRunningTime="2026-01-05 20:05:56.347580754 +0000 UTC m=+43.056764628"
Jan 05 20:05:56 crc kubenswrapper[4754]: I0105 20:05:56.348157 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-q6l9h" podStartSLOduration=23.348152139 podStartE2EDuration="23.348152139s" podCreationTimestamp="2026-01-05 20:05:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:05:56.346761153 +0000 UTC m=+43.055945037" watchObservedRunningTime="2026-01-05 20:05:56.348152139 +0000 UTC m=+43.057336013"
Jan 05 20:05:56 crc kubenswrapper[4754]: I0105 20:05:56.357237 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-hz59l" event={"ID":"3eb6b845-fab0-4359-87bd-17a33f9e78ca","Type":"ContainerStarted","Data":"9075e74006a9856b11c9ef602353cd53610df1abf129542e5c8faa69eec22d3e"}
Jan 05 20:05:56 crc kubenswrapper[4754]: I0105 20:05:56.362167 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-njtd4" event={"ID":"03f29d3f-9221-484d-aa70-8889d57f7de1","Type":"ContainerStarted","Data":"9eb3a1ea63628980ef8719ec77f68599f5f68451b59c6bc82514a240a328aeeb"}
Jan 05 20:05:56 crc kubenswrapper[4754]: I0105 20:05:56.363019 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nmhmx" event={"ID":"84640629-f797-4dea-bd98-e5331b7dca5f","Type":"ContainerStarted","Data":"7ac3aa5fad562148ca714a62c6f9efd0a1e38eb1cc2dcb4419c7a4233a922f2e"}
Jan 05 20:05:56 crc kubenswrapper[4754]: I0105 20:05:56.363864 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr"
Jan 05 20:05:56 crc kubenswrapper[4754]: E0105 20:05:56.364562 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:05:56.864544165 +0000 UTC m=+43.573728039 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 05 20:05:56 crc kubenswrapper[4754]: W0105 20:05:56.367491 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6706045f_88d8_4afd_867a_d0560b8fb9e0.slice/crio-da2800a0ee2867aa8b42a752bcfc3306163b26212ec59dd9a9d7d3146b2a62be WatchSource:0}: Error finding container da2800a0ee2867aa8b42a752bcfc3306163b26212ec59dd9a9d7d3146b2a62be: Status 404 returned error can't find the container with id da2800a0ee2867aa8b42a752bcfc3306163b26212ec59dd9a9d7d3146b2a62be
Jan 05 20:05:56 crc kubenswrapper[4754]: I0105 20:05:56.372659 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xnn4d" event={"ID":"e06065f7-b8b3-4c3e-820c-f0051f3a6f6d","Type":"ContainerStarted","Data":"3d30cd788fb0a0da484e2a7b45473add92018d725350f50b758269558ad68ed6"}
Jan 05 20:05:56 crc kubenswrapper[4754]: I0105 20:05:56.376846 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-hz59l" podStartSLOduration=24.376826235 podStartE2EDuration="24.376826235s" podCreationTimestamp="2026-01-05 20:05:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:05:56.375027998 +0000 UTC m=+43.084211872" watchObservedRunningTime="2026-01-05 20:05:56.376826235 +0000 UTC m=+43.086010109"
Jan 05 20:05:56 crc kubenswrapper[4754]: I0105 20:05:56.378885 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-m7czd"]
Jan 05 20:05:56 crc kubenswrapper[4754]: I0105 20:05:56.380338 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-4xgft"]
Jan 05 20:05:56 crc kubenswrapper[4754]: I0105 20:05:56.380503 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-v7gsb" event={"ID":"3ff8a3aa-2a57-46f2-b0a9-bdc7d494e6e2","Type":"ContainerStarted","Data":"59ab5cb3295cc068b8ba895e53aad6f271841fa628018cbb2803fd12b484c9bd"}
Jan 05 20:05:56 crc kubenswrapper[4754]: I0105 20:05:56.385828 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5zkp6" event={"ID":"3e55c0bf-988b-4b2e-b44b-2343b48ff9f8","Type":"ContainerStarted","Data":"3a7a3be5c1d97ca5f24a3c6ccf4f0d21d51bd236b507461cbcf31139413a4958"}
Jan 05 20:05:56 crc kubenswrapper[4754]: I0105 20:05:56.388183 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-9xwbp"]
Jan 05 20:05:56 crc kubenswrapper[4754]: I0105 20:05:56.394569 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-99q7g" event={"ID":"3b6b3476-9191-4367-9437-ee9db002d523","Type":"ContainerStarted","Data":"d71128d672f434041b2e84d65ba8de6eb3a3e2ae8706dc29fdf3f661cfccd254"}
Jan 05 20:05:56 crc kubenswrapper[4754]: I0105 20:05:56.396807 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zl4mz" event={"ID":"d43638d3-9f97-4bc9-a40a-57280a8ed643","Type":"ContainerStarted","Data":"60446956c7ecf77aa2f233c11877628fd84b5b0ab7785ddb8e08405f494df584"}
Jan 05 20:05:56 crc kubenswrapper[4754]: I0105 20:05:56.397154 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8jjnh"
Jan 05 20:05:56 crc kubenswrapper[4754]: I0105 20:05:56.404111 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8jjnh"
Jan 05 20:05:56 crc kubenswrapper[4754]: I0105 20:05:56.414826 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8jjnh" podStartSLOduration=22.414808492 podStartE2EDuration="22.414808492s" podCreationTimestamp="2026-01-05 20:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:05:56.410798268 +0000 UTC m=+43.119982142" watchObservedRunningTime="2026-01-05 20:05:56.414808492 +0000 UTC m=+43.123992366"
Jan 05 20:05:56 crc kubenswrapper[4754]: I0105 20:05:56.462198 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zl4mz" podStartSLOduration=24.462177334 podStartE2EDuration="24.462177334s" podCreationTimestamp="2026-01-05 20:05:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:05:56.432976535 +0000 UTC m=+43.142160419" watchObservedRunningTime="2026-01-05 20:05:56.462177334 +0000 UTC m=+43.171361218"
Jan 05 20:05:56 crc kubenswrapper[4754]: I0105 20:05:56.464645 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 05 20:05:56 crc kubenswrapper[4754]: E0105 20:05:56.465719 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:05:56.965702306 +0000 UTC m=+43.674886180 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 05 20:05:56 crc kubenswrapper[4754]: I0105 20:05:56.566451 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr"
Jan 05 20:05:56 crc kubenswrapper[4754]: E0105 20:05:56.566804 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:05:57.066793075 +0000 UTC m=+43.775976949 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 05 20:05:56 crc kubenswrapper[4754]: W0105 20:05:56.660770 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod986880f2_b964_4762_baa1_3536a9ff36e1.slice/crio-532d523449054593c1ac87b5d782245a9cd3dd3268db437d45cc8bdfda9e3f63 WatchSource:0}: Error finding container 532d523449054593c1ac87b5d782245a9cd3dd3268db437d45cc8bdfda9e3f63: Status 404 returned error can't find the container with id 532d523449054593c1ac87b5d782245a9cd3dd3268db437d45cc8bdfda9e3f63
Jan 05 20:05:56 crc kubenswrapper[4754]: I0105 20:05:56.677405 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 05 20:05:56 crc kubenswrapper[4754]: E0105 20:05:56.678642 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:05:57.178622264 +0000 UTC m=+43.887806138 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 05 20:05:56 crc kubenswrapper[4754]: I0105 20:05:56.779450 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr"
Jan 05 20:05:56 crc kubenswrapper[4754]: E0105 20:05:56.780100 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:05:57.280071323 +0000 UTC m=+43.989255407 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 05 20:05:56 crc kubenswrapper[4754]: I0105 20:05:56.879947 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 05 20:05:56 crc kubenswrapper[4754]: E0105 20:05:56.880391 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:05:57.380376751 +0000 UTC m=+44.089560625 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 05 20:05:56 crc kubenswrapper[4754]: I0105 20:05:56.983426 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr"
Jan 05 20:05:56 crc kubenswrapper[4754]: E0105 20:05:56.984379 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:05:57.484360296 +0000 UTC m=+44.193544170 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.085482 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 05 20:05:57 crc kubenswrapper[4754]: E0105 20:05:57.085837 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:05:57.585822585 +0000 UTC m=+44.295006459 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.186568 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr"
Jan 05 20:05:57 crc kubenswrapper[4754]: E0105 20:05:57.187277 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:05:57.687265703 +0000 UTC m=+44.396449577 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.289537 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:05:57 crc kubenswrapper[4754]: E0105 20:05:57.289950 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:05:57.789936284 +0000 UTC m=+44.499120158 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.391616 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:05:57 crc kubenswrapper[4754]: E0105 20:05:57.392104 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:05:57.892085541 +0000 UTC m=+44.601269405 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.422263 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-m7czd" event={"ID":"986880f2-b964-4762-baa1-3536a9ff36e1","Type":"ContainerStarted","Data":"532d523449054593c1ac87b5d782245a9cd3dd3268db437d45cc8bdfda9e3f63"} Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.450415 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mzj9s" event={"ID":"6706045f-88d8-4afd-867a-d0560b8fb9e0","Type":"ContainerStarted","Data":"da2800a0ee2867aa8b42a752bcfc3306163b26212ec59dd9a9d7d3146b2a62be"} Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.452375 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-vnrxg" event={"ID":"16642695-5006-4f7d-829c-becc9345dd6e","Type":"ContainerStarted","Data":"c723504ce3a7658c9f077b85ef91d375b3d35fb43c71018acac0683199742a93"} Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.484856 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fsmbh" event={"ID":"51e67cd8-254d-4921-9140-64a80c3d3690","Type":"ContainerStarted","Data":"d90ac7c075e89066cde6b792c13cc3960c6629784cb0d8232edd0a351ae6885a"} Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.486856 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-99q7g" event={"ID":"3b6b3476-9191-4367-9437-ee9db002d523","Type":"ContainerStarted","Data":"8f8ea289e746512f889c465b82628a2442196e7c2480f762f4b1cfdc8f7d093a"} Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.488806 4754 generic.go:334] "Generic (PLEG): container finished" podID="296f71e4-2a83-467e-b40a-87b1e40330b9" containerID="0fe782c41accf0d85cc605c6d3ff485341bbfe32361a65e0b6d45ca40e6bd447" exitCode=0 Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.490914 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql695" event={"ID":"296f71e4-2a83-467e-b40a-87b1e40330b9","Type":"ContainerDied","Data":"0fe782c41accf0d85cc605c6d3ff485341bbfe32361a65e0b6d45ca40e6bd447"} Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.492551 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:05:57 crc kubenswrapper[4754]: E0105 20:05:57.492845 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:05:57.992831631 +0000 UTC m=+44.702015505 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.499480 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-vnrxg" podStartSLOduration=23.499464743 podStartE2EDuration="23.499464743s" podCreationTimestamp="2026-01-05 20:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:05:57.49893582 +0000 UTC m=+44.208119694" watchObservedRunningTime="2026-01-05 20:05:57.499464743 +0000 UTC m=+44.208648617" Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.524749 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nmhmx" event={"ID":"84640629-f797-4dea-bd98-e5331b7dca5f","Type":"ContainerStarted","Data":"c22fcd4f1e96d1d644a74c5fa37c4518a9f992812a878e8d2896ce9308704a2f"} Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.549531 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-9vzl4" event={"ID":"fd4959ff-bb15-47bd-ae6c-1b0e53fc84f1","Type":"ContainerStarted","Data":"81d14d0123aa868aaf112c1fe92d9f2ae6de56a54c4c42130ce8ea0c02374c63"} Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.564709 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s26xc" 
event={"ID":"f06b2b1f-1c15-47e8-a043-be84aa593218","Type":"ContainerStarted","Data":"0c23cd64848a187f2b3b5dc626bf12b9dc1995160413e6b00ab0a975507c5d10"} Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.584805 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460720-k78dc" event={"ID":"7e4b1718-5732-4498-b355-25832e158871","Type":"ContainerStarted","Data":"76e29dd1ef0898e4411164913182422e55947e5cfa37f54bec2aa41091764da8"} Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.593590 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:05:57 crc kubenswrapper[4754]: E0105 20:05:57.599021 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:05:58.099004851 +0000 UTC m=+44.808188725 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.611273 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fsmbh" podStartSLOduration=23.61125572 podStartE2EDuration="23.61125572s" podCreationTimestamp="2026-01-05 20:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:05:57.565528162 +0000 UTC m=+44.274712036" watchObservedRunningTime="2026-01-05 20:05:57.61125572 +0000 UTC m=+44.320439594" Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.611457 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29460720-k78dc" podStartSLOduration=24.611451764999998 podStartE2EDuration="24.611451765s" podCreationTimestamp="2026-01-05 20:05:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:05:57.606020974 +0000 UTC m=+44.315204848" watchObservedRunningTime="2026-01-05 20:05:57.611451765 +0000 UTC m=+44.320635639" Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.628273 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4xgft" 
event={"ID":"f3ea7eb1-87d5-476b-bb30-2c94421afc41","Type":"ContainerStarted","Data":"e3bf420bd99de0e76d634ed9d8b91f9f1c4bd0fbf4094740ba1132c6f1e1adf8"} Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.628338 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5zkp6" event={"ID":"3e55c0bf-988b-4b2e-b44b-2343b48ff9f8","Type":"ContainerStarted","Data":"3cbcabd88cdf9a1750e9412440b51da6a2f6796fe56cd73b18601a1189e094a3"} Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.628355 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5zkp6" Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.630426 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-scx4b" event={"ID":"2920de00-a02e-425a-b3ae-2a7056eff257","Type":"ContainerStarted","Data":"e5acac6eec118ecbab68356a4e9c4053ccf5e90e33eff91cbb256faaf393d34c"} Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.641008 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5zkp6" podStartSLOduration=23.640991493 podStartE2EDuration="23.640991493s" podCreationTimestamp="2026-01-05 20:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:05:57.640257744 +0000 UTC m=+44.349441618" watchObservedRunningTime="2026-01-05 20:05:57.640991493 +0000 UTC m=+44.350175367" Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.645200 4754 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-5zkp6 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: 
connect: connection refused" start-of-body= Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.645255 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5zkp6" podUID="3e55c0bf-988b-4b2e-b44b-2343b48ff9f8" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.645278 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5gjwq" event={"ID":"8cdc6a43-0c41-4fe2-939b-805550abecd2","Type":"ContainerStarted","Data":"f9e8586001c5bd878b36a24a9127bddda22432317228bf8e61359c1bd5227cf0"} Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.651433 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-57nsw" event={"ID":"c5e9d216-d5aa-409f-b657-259b931ceaf5","Type":"ContainerStarted","Data":"120d40c5b11106c3c14a6efb4fb0f565ff94324b3339fed8c291974b4fbb7bbf"} Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.655159 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-9xwbp" event={"ID":"632ecc3b-8601-4d41-a4f4-8940db9dedae","Type":"ContainerStarted","Data":"ab1942092fbde3678326ee04014cabe06373cdb16d216ae0f4b6d89e84023f64"} Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.657902 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-csf7z" event={"ID":"d2e8cb54-b9ec-4e60-897f-edf05d4cd9b9","Type":"ContainerStarted","Data":"eb69e6f6aa5c5811b8b71af6c97257948d2e082371cce5eaa4d4ba733894ce2c"} Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.657938 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-csf7z" 
event={"ID":"d2e8cb54-b9ec-4e60-897f-edf05d4cd9b9","Type":"ContainerStarted","Data":"b516cb1ae777bfda1aaeb8d7967d549aca28ac701fffba8cc59647d268820e23"} Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.667665 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-w7nk2" event={"ID":"2dc37596-5879-4c76-be1b-5c95376cf1f2","Type":"ContainerStarted","Data":"4742c1db77c21814bb3cce15843ee2a2122baf969abb726b85c06a5a39a89679"} Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.673704 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-csf7z" podStartSLOduration=23.673689284 podStartE2EDuration="23.673689284s" podCreationTimestamp="2026-01-05 20:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:05:57.673404716 +0000 UTC m=+44.382588590" watchObservedRunningTime="2026-01-05 20:05:57.673689284 +0000 UTC m=+44.382873158" Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.674252 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-pxqjg" event={"ID":"ce973cd6-320d-45f5-a09a-037560783218","Type":"ContainerStarted","Data":"84257aea265d2f4587c6d6b442f36d1886839a844979825246dfb55a163e0719"} Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.675475 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lcnw9" event={"ID":"1c252969-10b1-47d5-afef-b58cb4895766","Type":"ContainerStarted","Data":"1489efe82e7b9cc73f4d8e33d3bb71a93ad2b27d9f8fac5bcb6ded1775329eba"} Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.675504 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lcnw9" 
event={"ID":"1c252969-10b1-47d5-afef-b58cb4895766","Type":"ContainerStarted","Data":"539662a38caf5aeda09057e236696ae359302ed3024bf8e7b32e9c64396b0f41"} Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.676986 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lcnw9" Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.678196 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sckmz" event={"ID":"f39a32ad-c337-4f04-b2ac-9a55729d7d4c","Type":"ContainerStarted","Data":"c70b3ca40c2ebe23eaf9948719eff3710817ad95f5b91e747438ad17270b35ce"} Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.682103 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk" event={"ID":"701a029a-d767-4681-8bf3-dffdc73e93f5","Type":"ContainerStarted","Data":"3b91eced025facbb9b14800e2ec1b7e25568befb589d774f9ed17d199371e6fc"} Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.682955 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk" Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.685688 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-v7wtk" event={"ID":"743b4687-81ef-4cd4-9910-dc7ba348e457","Type":"ContainerStarted","Data":"36ecc3139876807b4c24815a6ec63e5d778529a51530c4fcb5aa019007f375db"} Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.687369 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-4rsd7" event={"ID":"5d70b68b-24e2-45e2-91ca-8e1b88805b1a","Type":"ContainerStarted","Data":"231d9fb59ff961eaf491b48850bfbdf37bfa2fd82cb67930c0ff27a9c584ec20"} Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.689630 4754 
patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lcnw9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.689672 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lcnw9" podUID="1c252969-10b1-47d5-afef-b58cb4895766" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.690235 4754 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-nzhrk container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" start-of-body= Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.690263 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk" podUID="701a029a-d767-4681-8bf3-dffdc73e93f5" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.691341 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-dx842" event={"ID":"24911cb5-f93b-4852-adbc-d06821f34d47","Type":"ContainerStarted","Data":"3c3c9291ad41bebc3a4904687920eb2fc5ce81859fd2c6ae6ffce3f657103106"} Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.694343 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:05:57 crc kubenswrapper[4754]: E0105 20:05:57.695468 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:05:58.19545285 +0000 UTC m=+44.904636724 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.696866 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lcnw9" podStartSLOduration=23.696847716 podStartE2EDuration="23.696847716s" podCreationTimestamp="2026-01-05 20:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:05:57.694135536 +0000 UTC m=+44.403319400" watchObservedRunningTime="2026-01-05 20:05:57.696847716 +0000 UTC m=+44.406031590" Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.696935 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dzs5t" 
event={"ID":"40317dbe-a59b-4fc6-938d-91a19217745d","Type":"ContainerStarted","Data":"f12e95bf5b4af14859546e9ac621ab57245f65f1c06f76ccce907bc82bf250d4"} Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.703655 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mhcqd" event={"ID":"0881b9ae-bc95-42f6-8dd3-73a86569dde7","Type":"ContainerStarted","Data":"d52f92de04dad84513038e93ced7d7a99c7f4082a3d9605691e812d270379429"} Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.708805 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xnn4d" event={"ID":"e06065f7-b8b3-4c3e-820c-f0051f3a6f6d","Type":"ContainerStarted","Data":"e2e3e0e194e8b68bf8033d85ef57fb1bda4e59b7119e5a00ef74ce818b4a442f"} Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.711044 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2qc8v" event={"ID":"d2a2026b-4828-435f-ba14-941e12d3ea36","Type":"ContainerStarted","Data":"b51c360d1c8144c747dd1f00ae9f19cbe8aa2b9417577151801789e386d7c911"} Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.713513 4754 patch_prober.go:28] interesting pod/downloads-7954f5f757-q6l9h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.713557 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-q6l9h" podUID="138601a3-19fd-44e5-b817-49b048fe3e88" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.716580 4754 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk" podStartSLOduration=25.716568239 podStartE2EDuration="25.716568239s" podCreationTimestamp="2026-01-05 20:05:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:05:57.715866561 +0000 UTC m=+44.425050435" watchObservedRunningTime="2026-01-05 20:05:57.716568239 +0000 UTC m=+44.425752113" Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.738514 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-dx842" podStartSLOduration=23.738498249 podStartE2EDuration="23.738498249s" podCreationTimestamp="2026-01-05 20:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:05:57.737604916 +0000 UTC m=+44.446788790" watchObservedRunningTime="2026-01-05 20:05:57.738498249 +0000 UTC m=+44.447682123" Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.755072 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-v7wtk" podStartSLOduration=23.75505337 podStartE2EDuration="23.75505337s" podCreationTimestamp="2026-01-05 20:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:05:57.754141776 +0000 UTC m=+44.463325650" watchObservedRunningTime="2026-01-05 20:05:57.75505337 +0000 UTC m=+44.464237244" Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.772552 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-4rsd7" podStartSLOduration=23.772535805 podStartE2EDuration="23.772535805s" podCreationTimestamp="2026-01-05 20:05:34 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:05:57.770906622 +0000 UTC m=+44.480090496" watchObservedRunningTime="2026-01-05 20:05:57.772535805 +0000 UTC m=+44.481719679" Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.796553 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:05:57 crc kubenswrapper[4754]: E0105 20:05:57.800749 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:05:58.300734618 +0000 UTC m=+45.009918492 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.808791 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dzs5t" podStartSLOduration=24.808772367 podStartE2EDuration="24.808772367s" podCreationTimestamp="2026-01-05 20:05:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:05:57.80812557 +0000 UTC m=+44.517309444" watchObservedRunningTime="2026-01-05 20:05:57.808772367 +0000 UTC m=+44.517956231" Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.810800 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-xnn4d" podStartSLOduration=23.81078871 podStartE2EDuration="23.81078871s" podCreationTimestamp="2026-01-05 20:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:05:57.791337124 +0000 UTC m=+44.500520988" watchObservedRunningTime="2026-01-05 20:05:57.81078871 +0000 UTC m=+44.519972584" Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.898263 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:05:57 crc kubenswrapper[4754]: E0105 20:05:57.898384 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:05:58.398364707 +0000 UTC m=+45.107548581 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:57 crc kubenswrapper[4754]: I0105 20:05:57.898874 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:05:57 crc kubenswrapper[4754]: E0105 20:05:57.899190 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:05:58.399183509 +0000 UTC m=+45.108367383 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:58 crc kubenswrapper[4754]: I0105 20:05:57.999974 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:05:58 crc kubenswrapper[4754]: E0105 20:05:58.000403 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:05:58.500388681 +0000 UTC m=+45.209572565 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:58 crc kubenswrapper[4754]: I0105 20:05:58.102790 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:05:58 crc kubenswrapper[4754]: E0105 20:05:58.103312 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:05:58.603279647 +0000 UTC m=+45.312463521 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:58 crc kubenswrapper[4754]: I0105 20:05:58.206669 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:05:58 crc kubenswrapper[4754]: E0105 20:05:58.207405 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:05:58.707368164 +0000 UTC m=+45.416552038 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:58 crc kubenswrapper[4754]: I0105 20:05:58.224833 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-vnrxg" Jan 05 20:05:58 crc kubenswrapper[4754]: I0105 20:05:58.229968 4754 patch_prober.go:28] interesting pod/router-default-5444994796-vnrxg container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 05 20:05:58 crc kubenswrapper[4754]: I0105 20:05:58.230026 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vnrxg" podUID="16642695-5006-4f7d-829c-becc9345dd6e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 05 20:05:58 crc kubenswrapper[4754]: I0105 20:05:58.310628 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:05:58 crc kubenswrapper[4754]: E0105 20:05:58.310977 4754 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:05:58.810967219 +0000 UTC m=+45.520151093 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:58 crc kubenswrapper[4754]: I0105 20:05:58.412968 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:05:58 crc kubenswrapper[4754]: E0105 20:05:58.413546 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:05:58.913512526 +0000 UTC m=+45.622696400 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:58 crc kubenswrapper[4754]: I0105 20:05:58.414059 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:05:58 crc kubenswrapper[4754]: E0105 20:05:58.414487 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:05:58.914470591 +0000 UTC m=+45.623654465 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:58 crc kubenswrapper[4754]: I0105 20:05:58.515324 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:05:58 crc kubenswrapper[4754]: E0105 20:05:58.515717 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:05:59.015700474 +0000 UTC m=+45.724884348 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:58 crc kubenswrapper[4754]: I0105 20:05:58.617503 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:05:58 crc kubenswrapper[4754]: E0105 20:05:58.618068 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:05:59.118056946 +0000 UTC m=+45.827240820 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:58 crc kubenswrapper[4754]: I0105 20:05:58.719436 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:05:58 crc kubenswrapper[4754]: E0105 20:05:58.720773 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:05:59.220748767 +0000 UTC m=+45.929932651 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:58 crc kubenswrapper[4754]: I0105 20:05:58.747635 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:05:58 crc kubenswrapper[4754]: E0105 20:05:58.748431 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:05:59.248416117 +0000 UTC m=+45.957599981 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:58 crc kubenswrapper[4754]: I0105 20:05:58.763332 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5gjwq" event={"ID":"8cdc6a43-0c41-4fe2-939b-805550abecd2","Type":"ContainerStarted","Data":"735e053d260c5146e19dd5e8a57731dd47c31f3ef6dcc479ef6a373b7c9a80d0"} Jan 05 20:05:58 crc kubenswrapper[4754]: I0105 20:05:58.786266 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mhcqd" event={"ID":"0881b9ae-bc95-42f6-8dd3-73a86569dde7","Type":"ContainerStarted","Data":"f46a1b36c9b5f58e4bb8fb9254d20a7944355b5733624e86e3c31dbcbb988a05"} Jan 05 20:05:58 crc kubenswrapper[4754]: I0105 20:05:58.802834 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-5vs9t" event={"ID":"d0c79b4d-1027-4486-9eeb-1dc6f77f2ee0","Type":"ContainerStarted","Data":"01632b088073a54201ace01021a00ac0b8697b65385f07466c864ecd95a99931"} Jan 05 20:05:58 crc kubenswrapper[4754]: I0105 20:05:58.820426 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-99q7g" event={"ID":"3b6b3476-9191-4367-9437-ee9db002d523","Type":"ContainerStarted","Data":"7b6b07f1b644c1fa4a810b0cc97109f3ab54ce41f7ba2d13f2f607396e977fe2"} Jan 05 20:05:58 crc kubenswrapper[4754]: I0105 20:05:58.822185 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sckmz" event={"ID":"f39a32ad-c337-4f04-b2ac-9a55729d7d4c","Type":"ContainerStarted","Data":"85ad6896f70ea3d092acb5022207e4a2125a6b3af8a8b4a4c0bc6b2c991ca701"} Jan 05 20:05:58 crc kubenswrapper[4754]: I0105 20:05:58.822771 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sckmz" Jan 05 20:05:58 crc kubenswrapper[4754]: I0105 20:05:58.824189 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nmhmx" event={"ID":"84640629-f797-4dea-bd98-e5331b7dca5f","Type":"ContainerStarted","Data":"97561dc63dc1fb1a20bd0917aa5a59b6d9e05828e8d8d14b1e795e8b6ac2eddb"} Jan 05 20:05:58 crc kubenswrapper[4754]: I0105 20:05:58.824697 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mhcqd" podStartSLOduration=24.82467741 podStartE2EDuration="24.82467741s" podCreationTimestamp="2026-01-05 20:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:05:58.8169805 +0000 UTC m=+45.526164384" watchObservedRunningTime="2026-01-05 20:05:58.82467741 +0000 UTC m=+45.533861284" Jan 05 20:05:58 crc kubenswrapper[4754]: I0105 20:05:58.829639 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-7rbg4" event={"ID":"e833c756-585b-44de-8ab0-ce6e72970539","Type":"ContainerStarted","Data":"0b82fd59b63e86ab392ef6325eb563644d929ffe9415f03c134eb7ee923015c2"} Jan 05 20:05:58 crc kubenswrapper[4754]: I0105 20:05:58.851807 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:05:58 crc kubenswrapper[4754]: E0105 20:05:58.852230 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:05:59.352214256 +0000 UTC m=+46.061398120 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:58 crc kubenswrapper[4754]: I0105 20:05:58.853282 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:05:58 crc kubenswrapper[4754]: E0105 20:05:58.853657 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:05:59.353649294 +0000 UTC m=+46.062833168 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:58 crc kubenswrapper[4754]: I0105 20:05:58.862665 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pjzck" event={"ID":"9f282a77-59b3-4b2c-8c62-a2526d2a77b5","Type":"ContainerStarted","Data":"d5ae0da10b6f9aa06b9e2593943f28d9156c1950ac1ad2334e9bf55adda75217"} Jan 05 20:05:58 crc kubenswrapper[4754]: I0105 20:05:58.896540 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4xgft" event={"ID":"f3ea7eb1-87d5-476b-bb30-2c94421afc41","Type":"ContainerStarted","Data":"5a8a7393d6210226c031e2fcdbef9c6e9ba701f495c385f5d6454c209a9908c1"} Jan 05 20:05:58 crc kubenswrapper[4754]: I0105 20:05:58.914666 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-m7czd" event={"ID":"986880f2-b964-4762-baa1-3536a9ff36e1","Type":"ContainerStarted","Data":"5221a25dcf83b7f8d734b591ed480535971453d44863cdaf474d80f457043402"} Jan 05 20:05:58 crc kubenswrapper[4754]: I0105 20:05:58.930422 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mzj9s" event={"ID":"6706045f-88d8-4afd-867a-d0560b8fb9e0","Type":"ContainerStarted","Data":"081a051b2a9e009f4f8ae83686418c307d540ab64ec476586f9bfa9c6ed61686"} Jan 05 20:05:58 crc kubenswrapper[4754]: I0105 20:05:58.951390 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-v7gsb" 
event={"ID":"3ff8a3aa-2a57-46f2-b0a9-bdc7d494e6e2","Type":"ContainerStarted","Data":"5dc94617d1102e87343f4e6e1845f6e91b8298c4d40ee60a7c9ae7a8aadf2a96"} Jan 05 20:05:58 crc kubenswrapper[4754]: I0105 20:05:58.954556 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:05:58 crc kubenswrapper[4754]: E0105 20:05:58.955651 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:05:59.455637056 +0000 UTC m=+46.164820930 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:58 crc kubenswrapper[4754]: I0105 20:05:58.970001 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s26xc" event={"ID":"f06b2b1f-1c15-47e8-a043-be84aa593218","Type":"ContainerStarted","Data":"f326a08621f34b4b62408dea83240d7dbe6eb4cec524d9d728bb923c7adb8370"} Jan 05 20:05:58 crc kubenswrapper[4754]: I0105 20:05:58.983359 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-w7nk2" 
event={"ID":"2dc37596-5879-4c76-be1b-5c95376cf1f2","Type":"ContainerStarted","Data":"a43591681464514acf7fb82ea937f8c49b023f47f2b8b9a7219550e0576c2c58"} Jan 05 20:05:58 crc kubenswrapper[4754]: I0105 20:05:58.984856 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-5vs9t" podStartSLOduration=9.984836406 podStartE2EDuration="9.984836406s" podCreationTimestamp="2026-01-05 20:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:05:58.904707132 +0000 UTC m=+45.613891006" watchObservedRunningTime="2026-01-05 20:05:58.984836406 +0000 UTC m=+45.694020280" Jan 05 20:05:58 crc kubenswrapper[4754]: I0105 20:05:58.985131 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sckmz" podStartSLOduration=24.985126153 podStartE2EDuration="24.985126153s" podCreationTimestamp="2026-01-05 20:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:05:58.98116369 +0000 UTC m=+45.690347564" watchObservedRunningTime="2026-01-05 20:05:58.985126153 +0000 UTC m=+45.694310027" Jan 05 20:05:58 crc kubenswrapper[4754]: I0105 20:05:58.985209 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-rqlrs" event={"ID":"459e0439-ea70-4646-9cf7-2029f79e64b2","Type":"ContainerStarted","Data":"fc8fa453fc70ac74bc88f6fb0b6a31f99aed636135f65eb8b21b1121b78193e1"} Jan 05 20:05:58 crc kubenswrapper[4754]: I0105 20:05:58.988042 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-57nsw" event={"ID":"c5e9d216-d5aa-409f-b657-259b931ceaf5","Type":"ContainerStarted","Data":"92a6ca4cb09d04d61122813085a0c01187e7196c4f8e8b79a7ba3f807431bf6a"} Jan 
05 20:05:58 crc kubenswrapper[4754]: I0105 20:05:58.989528 4754 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lcnw9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Jan 05 20:05:58 crc kubenswrapper[4754]: I0105 20:05:58.989577 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lcnw9" podUID="1c252969-10b1-47d5-afef-b58cb4895766" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Jan 05 20:05:58 crc kubenswrapper[4754]: I0105 20:05:58.995268 4754 patch_prober.go:28] interesting pod/downloads-7954f5f757-q6l9h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 05 20:05:58 crc kubenswrapper[4754]: I0105 20:05:58.995744 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-q6l9h" podUID="138601a3-19fd-44e5-b817-49b048fe3e88" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 05 20:05:58 crc kubenswrapper[4754]: I0105 20:05:58.999756 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5zkp6" Jan 05 20:05:59 crc kubenswrapper[4754]: I0105 20:05:59.046338 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-v7gsb" podStartSLOduration=25.046320195 podStartE2EDuration="25.046320195s" podCreationTimestamp="2026-01-05 20:05:34 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:05:59.044921748 +0000 UTC m=+45.754105622" watchObservedRunningTime="2026-01-05 20:05:59.046320195 +0000 UTC m=+45.755504069" Jan 05 20:05:59 crc kubenswrapper[4754]: I0105 20:05:59.056548 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:05:59 crc kubenswrapper[4754]: E0105 20:05:59.057229 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:05:59.557204358 +0000 UTC m=+46.266388232 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:59 crc kubenswrapper[4754]: I0105 20:05:59.158025 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:05:59 crc kubenswrapper[4754]: E0105 20:05:59.158236 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:05:59.658218415 +0000 UTC m=+46.367402289 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:59 crc kubenswrapper[4754]: I0105 20:05:59.158865 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:05:59 crc kubenswrapper[4754]: E0105 20:05:59.161423 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:05:59.661414878 +0000 UTC m=+46.370598742 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:59 crc kubenswrapper[4754]: I0105 20:05:59.263588 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:05:59 crc kubenswrapper[4754]: E0105 20:05:59.263806 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:05:59.76376557 +0000 UTC m=+46.472949444 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:59 crc kubenswrapper[4754]: I0105 20:05:59.264052 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:05:59 crc kubenswrapper[4754]: E0105 20:05:59.264501 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:05:59.764474659 +0000 UTC m=+46.473658533 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:59 crc kubenswrapper[4754]: I0105 20:05:59.364968 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:05:59 crc kubenswrapper[4754]: E0105 20:05:59.365389 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:05:59.865363793 +0000 UTC m=+46.574547667 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:59 crc kubenswrapper[4754]: I0105 20:05:59.406141 4754 patch_prober.go:28] interesting pod/router-default-5444994796-vnrxg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 05 20:05:59 crc kubenswrapper[4754]: [-]has-synced failed: reason withheld Jan 05 20:05:59 crc kubenswrapper[4754]: [+]process-running ok Jan 05 20:05:59 crc kubenswrapper[4754]: healthz check failed Jan 05 20:05:59 crc kubenswrapper[4754]: I0105 20:05:59.406226 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vnrxg" podUID="16642695-5006-4f7d-829c-becc9345dd6e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 20:05:59 crc kubenswrapper[4754]: I0105 20:05:59.467097 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:05:59 crc kubenswrapper[4754]: E0105 20:05:59.467404 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-05 20:05:59.967392316 +0000 UTC m=+46.676576190 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:59 crc kubenswrapper[4754]: I0105 20:05:59.567811 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:05:59 crc kubenswrapper[4754]: E0105 20:05:59.568111 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:06:00.068063475 +0000 UTC m=+46.777247349 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:59 crc kubenswrapper[4754]: I0105 20:05:59.568488 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:05:59 crc kubenswrapper[4754]: E0105 20:05:59.568848 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:06:00.068832035 +0000 UTC m=+46.778015909 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:59 crc kubenswrapper[4754]: I0105 20:05:59.669680 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:05:59 crc kubenswrapper[4754]: E0105 20:05:59.669950 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:06:00.169909754 +0000 UTC m=+46.879093628 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:59 crc kubenswrapper[4754]: I0105 20:05:59.670251 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:05:59 crc kubenswrapper[4754]: E0105 20:05:59.670726 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:06:00.170705134 +0000 UTC m=+46.879889008 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:59 crc kubenswrapper[4754]: I0105 20:05:59.771957 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:05:59 crc kubenswrapper[4754]: E0105 20:05:59.772489 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:06:00.272470131 +0000 UTC m=+46.981654005 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:59 crc kubenswrapper[4754]: I0105 20:05:59.823055 4754 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-sckmz container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 20:05:59 crc kubenswrapper[4754]: I0105 20:05:59.823119 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sckmz" podUID="f39a32ad-c337-4f04-b2ac-9a55729d7d4c" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 20:05:59 crc kubenswrapper[4754]: I0105 20:05:59.873485 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:05:59 crc kubenswrapper[4754]: E0105 20:05:59.873930 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" 
failed. No retries permitted until 2026-01-05 20:06:00.37390944 +0000 UTC m=+47.083093394 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:59 crc kubenswrapper[4754]: I0105 20:05:59.975986 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:05:59 crc kubenswrapper[4754]: E0105 20:05:59.976601 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:06:00.47658455 +0000 UTC m=+47.185768424 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:05:59 crc kubenswrapper[4754]: I0105 20:05:59.989572 4754 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-nzhrk container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.12:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 20:05:59 crc kubenswrapper[4754]: I0105 20:05:59.989629 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk" podUID="701a029a-d767-4681-8bf3-dffdc73e93f5" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.12:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 20:06:00 crc kubenswrapper[4754]: I0105 20:06:00.083885 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:06:00 crc kubenswrapper[4754]: E0105 20:06:00.084238 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-05 20:06:00.58422175 +0000 UTC m=+47.293405624 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:06:00 crc kubenswrapper[4754]: I0105 20:06:00.108174 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql695" event={"ID":"296f71e4-2a83-467e-b40a-87b1e40330b9","Type":"ContainerStarted","Data":"b7d1ac3a524478338876f462c61fdb38cb354f692c96087e6e3d9622f91fad4f"} Jan 05 20:06:00 crc kubenswrapper[4754]: I0105 20:06:00.112448 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-rqlrs" Jan 05 20:06:00 crc kubenswrapper[4754]: I0105 20:06:00.139133 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk" Jan 05 20:06:00 crc kubenswrapper[4754]: I0105 20:06:00.146920 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-rqlrs" Jan 05 20:06:00 crc kubenswrapper[4754]: I0105 20:06:00.160023 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-rqlrs" podStartSLOduration=11.15999152 podStartE2EDuration="11.15999152s" podCreationTimestamp="2026-01-05 20:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:06:00.157384133 +0000 UTC m=+46.866568007" watchObservedRunningTime="2026-01-05 
20:06:00.15999152 +0000 UTC m=+46.869175394" Jan 05 20:06:00 crc kubenswrapper[4754]: I0105 20:06:00.182081 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-4xgft" podStartSLOduration=27.182060374 podStartE2EDuration="27.182060374s" podCreationTimestamp="2026-01-05 20:05:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:06:00.181644644 +0000 UTC m=+46.890828508" watchObservedRunningTime="2026-01-05 20:06:00.182060374 +0000 UTC m=+46.891244248" Jan 05 20:06:00 crc kubenswrapper[4754]: I0105 20:06:00.185731 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:06:00 crc kubenswrapper[4754]: E0105 20:06:00.187537 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:06:00.687502486 +0000 UTC m=+47.396686360 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:06:00 crc kubenswrapper[4754]: I0105 20:06:00.228041 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pjzck" podStartSLOduration=27.22802034 podStartE2EDuration="27.22802034s" podCreationTimestamp="2026-01-05 20:05:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:06:00.203381339 +0000 UTC m=+46.912565213" watchObservedRunningTime="2026-01-05 20:06:00.22802034 +0000 UTC m=+46.937204214" Jan 05 20:06:00 crc kubenswrapper[4754]: I0105 20:06:00.228741 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-m7czd" podStartSLOduration=11.228734888 podStartE2EDuration="11.228734888s" podCreationTimestamp="2026-01-05 20:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:06:00.226927901 +0000 UTC m=+46.936111765" watchObservedRunningTime="2026-01-05 20:06:00.228734888 +0000 UTC m=+46.937918762" Jan 05 20:06:00 crc kubenswrapper[4754]: I0105 20:06:00.232238 4754 patch_prober.go:28] interesting pod/router-default-5444994796-vnrxg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 05 20:06:00 crc kubenswrapper[4754]: [-]has-synced failed: 
reason withheld Jan 05 20:06:00 crc kubenswrapper[4754]: [+]process-running ok Jan 05 20:06:00 crc kubenswrapper[4754]: healthz check failed Jan 05 20:06:00 crc kubenswrapper[4754]: I0105 20:06:00.232326 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vnrxg" podUID="16642695-5006-4f7d-829c-becc9345dd6e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 20:06:00 crc kubenswrapper[4754]: I0105 20:06:00.251063 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-99q7g" podStartSLOduration=27.251047509 podStartE2EDuration="27.251047509s" podCreationTimestamp="2026-01-05 20:05:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:06:00.247374413 +0000 UTC m=+46.956558287" watchObservedRunningTime="2026-01-05 20:06:00.251047509 +0000 UTC m=+46.960231383" Jan 05 20:06:00 crc kubenswrapper[4754]: I0105 20:06:00.277483 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-57nsw" podStartSLOduration=26.277462456 podStartE2EDuration="26.277462456s" podCreationTimestamp="2026-01-05 20:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:06:00.276753577 +0000 UTC m=+46.985937451" watchObservedRunningTime="2026-01-05 20:06:00.277462456 +0000 UTC m=+46.986646330" Jan 05 20:06:00 crc kubenswrapper[4754]: I0105 20:06:00.287795 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: 
\"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:06:00 crc kubenswrapper[4754]: E0105 20:06:00.290349 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:06:00.79033323 +0000 UTC m=+47.499517104 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:06:00 crc kubenswrapper[4754]: I0105 20:06:00.294637 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-w7nk2" podStartSLOduration=26.294622502 podStartE2EDuration="26.294622502s" podCreationTimestamp="2026-01-05 20:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:06:00.293571455 +0000 UTC m=+47.002755349" watchObservedRunningTime="2026-01-05 20:06:00.294622502 +0000 UTC m=+47.003806376" Jan 05 20:06:00 crc kubenswrapper[4754]: I0105 20:06:00.308816 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nmhmx" podStartSLOduration=26.308798351 podStartE2EDuration="26.308798351s" podCreationTimestamp="2026-01-05 20:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:06:00.308153294 +0000 UTC 
m=+47.017337168" watchObservedRunningTime="2026-01-05 20:06:00.308798351 +0000 UTC m=+47.017982225" Jan 05 20:06:00 crc kubenswrapper[4754]: I0105 20:06:00.386346 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sckmz" Jan 05 20:06:00 crc kubenswrapper[4754]: I0105 20:06:00.389165 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:06:00 crc kubenswrapper[4754]: E0105 20:06:00.389764 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:06:00.889736396 +0000 UTC m=+47.598920270 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:06:00 crc kubenswrapper[4754]: I0105 20:06:00.389874 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:06:00 crc kubenswrapper[4754]: I0105 20:06:00.390164 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lcnw9" Jan 05 20:06:00 crc kubenswrapper[4754]: E0105 20:06:00.390224 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:06:00.890211958 +0000 UTC m=+47.599395832 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:06:00 crc kubenswrapper[4754]: I0105 20:06:00.491403 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:06:00 crc kubenswrapper[4754]: E0105 20:06:00.491544 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:06:00.991515603 +0000 UTC m=+47.700699487 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:06:00 crc kubenswrapper[4754]: I0105 20:06:00.491743 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:06:00 crc kubenswrapper[4754]: E0105 20:06:00.492076 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:06:00.992065707 +0000 UTC m=+47.701249671 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:06:00 crc kubenswrapper[4754]: I0105 20:06:00.592368 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:06:00 crc kubenswrapper[4754]: E0105 20:06:00.592666 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:06:01.092635413 +0000 UTC m=+47.801819297 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:06:00 crc kubenswrapper[4754]: I0105 20:06:00.694036 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:06:00 crc kubenswrapper[4754]: E0105 20:06:00.694464 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:06:01.194444631 +0000 UTC m=+47.903628515 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:06:00 crc kubenswrapper[4754]: I0105 20:06:00.794772 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:06:00 crc kubenswrapper[4754]: E0105 20:06:00.794960 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:06:01.294918544 +0000 UTC m=+48.004102418 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:06:00 crc kubenswrapper[4754]: I0105 20:06:00.795049 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:06:00 crc kubenswrapper[4754]: E0105 20:06:00.795429 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:06:01.295420977 +0000 UTC m=+48.004604851 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:06:00 crc kubenswrapper[4754]: I0105 20:06:00.796614 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-rqlrs"] Jan 05 20:06:00 crc kubenswrapper[4754]: I0105 20:06:00.827912 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qh2hz"] Jan 05 20:06:00 crc kubenswrapper[4754]: I0105 20:06:00.828806 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qh2hz" Jan 05 20:06:00 crc kubenswrapper[4754]: I0105 20:06:00.831053 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 05 20:06:00 crc kubenswrapper[4754]: I0105 20:06:00.848975 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qh2hz"] Jan 05 20:06:00 crc kubenswrapper[4754]: I0105 20:06:00.876737 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pjzck" Jan 05 20:06:00 crc kubenswrapper[4754]: I0105 20:06:00.895730 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:06:00 crc kubenswrapper[4754]: 
E0105 20:06:00.895876 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:06:01.395851009 +0000 UTC m=+48.105034883 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:06:00 crc kubenswrapper[4754]: I0105 20:06:00.896038 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20730210-a087-46e2-a311-ffa8a3bc370d-catalog-content\") pod \"community-operators-qh2hz\" (UID: \"20730210-a087-46e2-a311-ffa8a3bc370d\") " pod="openshift-marketplace/community-operators-qh2hz" Jan 05 20:06:00 crc kubenswrapper[4754]: I0105 20:06:00.896076 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84vjp\" (UniqueName: \"kubernetes.io/projected/20730210-a087-46e2-a311-ffa8a3bc370d-kube-api-access-84vjp\") pod \"community-operators-qh2hz\" (UID: \"20730210-a087-46e2-a311-ffa8a3bc370d\") " pod="openshift-marketplace/community-operators-qh2hz" Jan 05 20:06:00 crc kubenswrapper[4754]: I0105 20:06:00.896264 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: 
\"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:06:00 crc kubenswrapper[4754]: I0105 20:06:00.896443 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20730210-a087-46e2-a311-ffa8a3bc370d-utilities\") pod \"community-operators-qh2hz\" (UID: \"20730210-a087-46e2-a311-ffa8a3bc370d\") " pod="openshift-marketplace/community-operators-qh2hz" Jan 05 20:06:00 crc kubenswrapper[4754]: E0105 20:06:00.896597 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:06:01.396581428 +0000 UTC m=+48.105765362 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:06:00 crc kubenswrapper[4754]: I0105 20:06:00.997587 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:06:00 crc kubenswrapper[4754]: E0105 20:06:00.997714 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-01-05 20:06:01.497689708 +0000 UTC m=+48.206873622 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:06:00 crc kubenswrapper[4754]: I0105 20:06:00.998101 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20730210-a087-46e2-a311-ffa8a3bc370d-catalog-content\") pod \"community-operators-qh2hz\" (UID: \"20730210-a087-46e2-a311-ffa8a3bc370d\") " pod="openshift-marketplace/community-operators-qh2hz" Jan 05 20:06:00 crc kubenswrapper[4754]: I0105 20:06:00.998199 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84vjp\" (UniqueName: \"kubernetes.io/projected/20730210-a087-46e2-a311-ffa8a3bc370d-kube-api-access-84vjp\") pod \"community-operators-qh2hz\" (UID: \"20730210-a087-46e2-a311-ffa8a3bc370d\") " pod="openshift-marketplace/community-operators-qh2hz" Jan 05 20:06:00 crc kubenswrapper[4754]: I0105 20:06:00.998684 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20730210-a087-46e2-a311-ffa8a3bc370d-catalog-content\") pod \"community-operators-qh2hz\" (UID: \"20730210-a087-46e2-a311-ffa8a3bc370d\") " pod="openshift-marketplace/community-operators-qh2hz" Jan 05 20:06:00 crc kubenswrapper[4754]: E0105 20:06:00.999088 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-01-05 20:06:01.499072564 +0000 UTC m=+48.208256478 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:06:00 crc kubenswrapper[4754]: I0105 20:06:00.998731 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:06:00 crc kubenswrapper[4754]: I0105 20:06:00.999486 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20730210-a087-46e2-a311-ffa8a3bc370d-utilities\") pod \"community-operators-qh2hz\" (UID: \"20730210-a087-46e2-a311-ffa8a3bc370d\") " pod="openshift-marketplace/community-operators-qh2hz" Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.010366 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20730210-a087-46e2-a311-ffa8a3bc370d-utilities\") pod \"community-operators-qh2hz\" (UID: \"20730210-a087-46e2-a311-ffa8a3bc370d\") " pod="openshift-marketplace/community-operators-qh2hz" Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.024848 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84vjp\" (UniqueName: 
\"kubernetes.io/projected/20730210-a087-46e2-a311-ffa8a3bc370d-kube-api-access-84vjp\") pod \"community-operators-qh2hz\" (UID: \"20730210-a087-46e2-a311-ffa8a3bc370d\") " pod="openshift-marketplace/community-operators-qh2hz" Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.038117 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pg5fc"] Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.040218 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pg5fc" Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.042972 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.052402 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pg5fc"] Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.108736 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:06:01 crc kubenswrapper[4754]: E0105 20:06:01.108928 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:06:01.608905681 +0000 UTC m=+48.318089555 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.109320 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.109412 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcdmf\" (UniqueName: \"kubernetes.io/projected/08caaa2f-775e-45c6-b097-b0550f593ff3-kube-api-access-wcdmf\") pod \"certified-operators-pg5fc\" (UID: \"08caaa2f-775e-45c6-b097-b0550f593ff3\") " pod="openshift-marketplace/certified-operators-pg5fc" Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.109470 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08caaa2f-775e-45c6-b097-b0550f593ff3-catalog-content\") pod \"certified-operators-pg5fc\" (UID: \"08caaa2f-775e-45c6-b097-b0550f593ff3\") " pod="openshift-marketplace/certified-operators-pg5fc" Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.109515 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/08caaa2f-775e-45c6-b097-b0550f593ff3-utilities\") pod \"certified-operators-pg5fc\" (UID: \"08caaa2f-775e-45c6-b097-b0550f593ff3\") " pod="openshift-marketplace/certified-operators-pg5fc" Jan 05 20:06:01 crc kubenswrapper[4754]: E0105 20:06:01.109681 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:06:01.60966361 +0000 UTC m=+48.318847504 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.115948 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2qc8v" event={"ID":"d2a2026b-4828-435f-ba14-941e12d3ea36","Type":"ContainerStarted","Data":"50a8f81c54a2abf457772c35f2a14895c823b7c74ae9db7513c88d4cd04dfbc1"} Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.119511 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-9vzl4" event={"ID":"fd4959ff-bb15-47bd-ae6c-1b0e53fc84f1","Type":"ContainerStarted","Data":"1f04c8e14613de50e513d49b631ac10923110e30e9daa667c254024a3334d62c"} Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.121107 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-7rbg4" 
event={"ID":"e833c756-585b-44de-8ab0-ce6e72970539","Type":"ContainerStarted","Data":"8f68ca96e4ad909ebfd88e3427e56a4ef309f8d6b9cc694d75401e6adab549d7"} Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.123211 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s26xc" event={"ID":"f06b2b1f-1c15-47e8-a043-be84aa593218","Type":"ContainerStarted","Data":"ed8aba0b87be6fc8b0f407c2129a8ad31322fe062b67729e7d2630deab130037"} Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.126410 4754 generic.go:334] "Generic (PLEG): container finished" podID="7e4b1718-5732-4498-b355-25832e158871" containerID="76e29dd1ef0898e4411164913182422e55947e5cfa37f54bec2aa41091764da8" exitCode=0 Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.126458 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460720-k78dc" event={"ID":"7e4b1718-5732-4498-b355-25832e158871","Type":"ContainerDied","Data":"76e29dd1ef0898e4411164913182422e55947e5cfa37f54bec2aa41091764da8"} Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.134316 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2qc8v" podStartSLOduration=29.134283341 podStartE2EDuration="29.134283341s" podCreationTimestamp="2026-01-05 20:05:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:06:01.13119094 +0000 UTC m=+47.840374814" watchObservedRunningTime="2026-01-05 20:06:01.134283341 +0000 UTC m=+47.843467215" Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.135274 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5gjwq" 
event={"ID":"8cdc6a43-0c41-4fe2-939b-805550abecd2","Type":"ContainerStarted","Data":"bb43305c3b629b026cc27f38576b1a08b385fa4bf41d879f921e7874d8ceb142"} Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.140211 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qh2hz" Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.141877 4754 generic.go:334] "Generic (PLEG): container finished" podID="ce973cd6-320d-45f5-a09a-037560783218" containerID="b783236d8cc595116f381cf2704bb271b8c2127a364866a3e62c389a9f570d7f" exitCode=0 Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.141928 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-pxqjg" event={"ID":"ce973cd6-320d-45f5-a09a-037560783218","Type":"ContainerDied","Data":"b783236d8cc595116f381cf2704bb271b8c2127a364866a3e62c389a9f570d7f"} Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.146370 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-9xwbp" event={"ID":"632ecc3b-8601-4d41-a4f4-8940db9dedae","Type":"ContainerStarted","Data":"a88c509a4316fcf10b0980fffb36b7953b1e5a57b15019ef4ed29621ae2bfef6"} Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.150030 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mzj9s" event={"ID":"6706045f-88d8-4afd-867a-d0560b8fb9e0","Type":"ContainerStarted","Data":"15f7b02101f4f19e8933516a710eee834dca077369051fc961ffbca877f7a6a6"} Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.154101 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-njtd4" event={"ID":"03f29d3f-9221-484d-aa70-8889d57f7de1","Type":"ContainerStarted","Data":"c14a667e03119baa9579e4e6b2c82a92c0e2a76636512e0d3b49e813d33b7e17"} Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.156166 4754 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-scx4b" event={"ID":"2920de00-a02e-425a-b3ae-2a7056eff257","Type":"ContainerStarted","Data":"5900d8b6f7c75c179256376fcfbaf5ed71ba3a52a188c71eb9bda59d34cada00"} Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.159432 4754 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-pjzck container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.159476 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pjzck" podUID="9f282a77-59b3-4b2c-8c62-a2526d2a77b5" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.185385 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-9vzl4" podStartSLOduration=28.185367998 podStartE2EDuration="28.185367998s" podCreationTimestamp="2026-01-05 20:05:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:06:01.171553399 +0000 UTC m=+47.880737273" watchObservedRunningTime="2026-01-05 20:06:01.185367998 +0000 UTC m=+47.894551872" Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.199976 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql695" podStartSLOduration=27.199960608 podStartE2EDuration="27.199960608s" podCreationTimestamp="2026-01-05 20:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:06:01.186743604 +0000 UTC m=+47.895927478" watchObservedRunningTime="2026-01-05 20:06:01.199960608 +0000 UTC m=+47.909144482" Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.200925 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-scx4b" podStartSLOduration=27.200920413 podStartE2EDuration="27.200920413s" podCreationTimestamp="2026-01-05 20:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:06:01.199300811 +0000 UTC m=+47.908484685" watchObservedRunningTime="2026-01-05 20:06:01.200920413 +0000 UTC m=+47.910104287" Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.210321 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.210559 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcdmf\" (UniqueName: \"kubernetes.io/projected/08caaa2f-775e-45c6-b097-b0550f593ff3-kube-api-access-wcdmf\") pod \"certified-operators-pg5fc\" (UID: \"08caaa2f-775e-45c6-b097-b0550f593ff3\") " pod="openshift-marketplace/certified-operators-pg5fc" Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.210585 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08caaa2f-775e-45c6-b097-b0550f593ff3-catalog-content\") pod \"certified-operators-pg5fc\" (UID: \"08caaa2f-775e-45c6-b097-b0550f593ff3\") " 
pod="openshift-marketplace/certified-operators-pg5fc" Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.210602 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08caaa2f-775e-45c6-b097-b0550f593ff3-utilities\") pod \"certified-operators-pg5fc\" (UID: \"08caaa2f-775e-45c6-b097-b0550f593ff3\") " pod="openshift-marketplace/certified-operators-pg5fc" Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.211065 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08caaa2f-775e-45c6-b097-b0550f593ff3-utilities\") pod \"certified-operators-pg5fc\" (UID: \"08caaa2f-775e-45c6-b097-b0550f593ff3\") " pod="openshift-marketplace/certified-operators-pg5fc" Jan 05 20:06:01 crc kubenswrapper[4754]: E0105 20:06:01.212042 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:06:01.712027072 +0000 UTC m=+48.421210946 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.217579 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5gjwq" podStartSLOduration=27.217567266 podStartE2EDuration="27.217567266s" podCreationTimestamp="2026-01-05 20:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:06:01.215491392 +0000 UTC m=+47.924675266" watchObservedRunningTime="2026-01-05 20:06:01.217567266 +0000 UTC m=+47.926751140" Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.224989 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xm8jv"] Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.227567 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xm8jv" Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.230955 4754 patch_prober.go:28] interesting pod/router-default-5444994796-vnrxg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 05 20:06:01 crc kubenswrapper[4754]: [-]has-synced failed: reason withheld Jan 05 20:06:01 crc kubenswrapper[4754]: [+]process-running ok Jan 05 20:06:01 crc kubenswrapper[4754]: healthz check failed Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.231020 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vnrxg" podUID="16642695-5006-4f7d-829c-becc9345dd6e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.247738 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xm8jv"] Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.314704 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:06:01 crc kubenswrapper[4754]: E0105 20:06:01.315693 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:06:01.815679358 +0000 UTC m=+48.524863232 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.322235 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08caaa2f-775e-45c6-b097-b0550f593ff3-catalog-content\") pod \"certified-operators-pg5fc\" (UID: \"08caaa2f-775e-45c6-b097-b0550f593ff3\") " pod="openshift-marketplace/certified-operators-pg5fc" Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.328282 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcdmf\" (UniqueName: \"kubernetes.io/projected/08caaa2f-775e-45c6-b097-b0550f593ff3-kube-api-access-wcdmf\") pod \"certified-operators-pg5fc\" (UID: \"08caaa2f-775e-45c6-b097-b0550f593ff3\") " pod="openshift-marketplace/certified-operators-pg5fc" Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.363153 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pg5fc" Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.415624 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.415965 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba326117-1dcc-4468-80bb-a54b9cc83c01-utilities\") pod \"community-operators-xm8jv\" (UID: \"ba326117-1dcc-4468-80bb-a54b9cc83c01\") " pod="openshift-marketplace/community-operators-xm8jv" Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.416011 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwg4j\" (UniqueName: \"kubernetes.io/projected/ba326117-1dcc-4468-80bb-a54b9cc83c01-kube-api-access-jwg4j\") pod \"community-operators-xm8jv\" (UID: \"ba326117-1dcc-4468-80bb-a54b9cc83c01\") " pod="openshift-marketplace/community-operators-xm8jv" Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.416056 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba326117-1dcc-4468-80bb-a54b9cc83c01-catalog-content\") pod \"community-operators-xm8jv\" (UID: \"ba326117-1dcc-4468-80bb-a54b9cc83c01\") " pod="openshift-marketplace/community-operators-xm8jv" Jan 05 20:06:01 crc kubenswrapper[4754]: E0105 20:06:01.416173 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-01-05 20:06:01.916143161 +0000 UTC m=+48.625327035 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.436020 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4zc9h"] Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.437075 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4zc9h" Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.459518 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4zc9h"] Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.516907 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba326117-1dcc-4468-80bb-a54b9cc83c01-utilities\") pod \"community-operators-xm8jv\" (UID: \"ba326117-1dcc-4468-80bb-a54b9cc83c01\") " pod="openshift-marketplace/community-operators-xm8jv" Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.516944 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 
20:06:01.516974 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwg4j\" (UniqueName: \"kubernetes.io/projected/ba326117-1dcc-4468-80bb-a54b9cc83c01-kube-api-access-jwg4j\") pod \"community-operators-xm8jv\" (UID: \"ba326117-1dcc-4468-80bb-a54b9cc83c01\") " pod="openshift-marketplace/community-operators-xm8jv" Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.517019 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba326117-1dcc-4468-80bb-a54b9cc83c01-catalog-content\") pod \"community-operators-xm8jv\" (UID: \"ba326117-1dcc-4468-80bb-a54b9cc83c01\") " pod="openshift-marketplace/community-operators-xm8jv" Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.518058 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba326117-1dcc-4468-80bb-a54b9cc83c01-catalog-content\") pod \"community-operators-xm8jv\" (UID: \"ba326117-1dcc-4468-80bb-a54b9cc83c01\") " pod="openshift-marketplace/community-operators-xm8jv" Jan 05 20:06:01 crc kubenswrapper[4754]: E0105 20:06:01.518117 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:06:02.018101613 +0000 UTC m=+48.727285487 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.518236 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba326117-1dcc-4468-80bb-a54b9cc83c01-utilities\") pod \"community-operators-xm8jv\" (UID: \"ba326117-1dcc-4468-80bb-a54b9cc83c01\") " pod="openshift-marketplace/community-operators-xm8jv" Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.557997 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwg4j\" (UniqueName: \"kubernetes.io/projected/ba326117-1dcc-4468-80bb-a54b9cc83c01-kube-api-access-jwg4j\") pod \"community-operators-xm8jv\" (UID: \"ba326117-1dcc-4468-80bb-a54b9cc83c01\") " pod="openshift-marketplace/community-operators-xm8jv" Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.617726 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.618269 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9653b33c-3e18-4d7f-81ed-febff4a00a35-catalog-content\") pod \"certified-operators-4zc9h\" (UID: \"9653b33c-3e18-4d7f-81ed-febff4a00a35\") " 
pod="openshift-marketplace/certified-operators-4zc9h" Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.618307 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vkp5\" (UniqueName: \"kubernetes.io/projected/9653b33c-3e18-4d7f-81ed-febff4a00a35-kube-api-access-4vkp5\") pod \"certified-operators-4zc9h\" (UID: \"9653b33c-3e18-4d7f-81ed-febff4a00a35\") " pod="openshift-marketplace/certified-operators-4zc9h" Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.618351 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9653b33c-3e18-4d7f-81ed-febff4a00a35-utilities\") pod \"certified-operators-4zc9h\" (UID: \"9653b33c-3e18-4d7f-81ed-febff4a00a35\") " pod="openshift-marketplace/certified-operators-4zc9h" Jan 05 20:06:01 crc kubenswrapper[4754]: E0105 20:06:01.618735 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:06:02.11872135 +0000 UTC m=+48.827905224 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.721915 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9653b33c-3e18-4d7f-81ed-febff4a00a35-catalog-content\") pod \"certified-operators-4zc9h\" (UID: \"9653b33c-3e18-4d7f-81ed-febff4a00a35\") " pod="openshift-marketplace/certified-operators-4zc9h" Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.721949 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vkp5\" (UniqueName: \"kubernetes.io/projected/9653b33c-3e18-4d7f-81ed-febff4a00a35-kube-api-access-4vkp5\") pod \"certified-operators-4zc9h\" (UID: \"9653b33c-3e18-4d7f-81ed-febff4a00a35\") " pod="openshift-marketplace/certified-operators-4zc9h" Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.721978 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.722005 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9653b33c-3e18-4d7f-81ed-febff4a00a35-utilities\") pod \"certified-operators-4zc9h\" (UID: 
\"9653b33c-3e18-4d7f-81ed-febff4a00a35\") " pod="openshift-marketplace/certified-operators-4zc9h" Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.722441 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9653b33c-3e18-4d7f-81ed-febff4a00a35-utilities\") pod \"certified-operators-4zc9h\" (UID: \"9653b33c-3e18-4d7f-81ed-febff4a00a35\") " pod="openshift-marketplace/certified-operators-4zc9h" Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.722639 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9653b33c-3e18-4d7f-81ed-febff4a00a35-catalog-content\") pod \"certified-operators-4zc9h\" (UID: \"9653b33c-3e18-4d7f-81ed-febff4a00a35\") " pod="openshift-marketplace/certified-operators-4zc9h" Jan 05 20:06:01 crc kubenswrapper[4754]: E0105 20:06:01.735835 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:06:02.235814975 +0000 UTC m=+48.944998849 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.741995 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vkp5\" (UniqueName: \"kubernetes.io/projected/9653b33c-3e18-4d7f-81ed-febff4a00a35-kube-api-access-4vkp5\") pod \"certified-operators-4zc9h\" (UID: \"9653b33c-3e18-4d7f-81ed-febff4a00a35\") " pod="openshift-marketplace/certified-operators-4zc9h" Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.775896 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4zc9h" Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.775958 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qh2hz"] Jan 05 20:06:01 crc kubenswrapper[4754]: W0105 20:06:01.800524 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20730210_a087_46e2_a311_ffa8a3bc370d.slice/crio-6b9012971ef18c399c87e5cf080de12557d44cb98fc3049ed50eb38067fa1180 WatchSource:0}: Error finding container 6b9012971ef18c399c87e5cf080de12557d44cb98fc3049ed50eb38067fa1180: Status 404 returned error can't find the container with id 6b9012971ef18c399c87e5cf080de12557d44cb98fc3049ed50eb38067fa1180 Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.823015 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:06:01 crc kubenswrapper[4754]: E0105 20:06:01.823285 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:06:02.323258659 +0000 UTC m=+49.032442533 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.823363 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:06:01 crc kubenswrapper[4754]: E0105 20:06:01.823741 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:06:02.323730652 +0000 UTC m=+49.032914526 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.846584 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xm8jv" Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.924777 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:06:01 crc kubenswrapper[4754]: E0105 20:06:01.925184 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:06:02.42516901 +0000 UTC m=+49.134352884 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.937007 4754 patch_prober.go:28] interesting pod/downloads-7954f5f757-q6l9h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.937058 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-q6l9h" podUID="138601a3-19fd-44e5-b817-49b048fe3e88" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.937481 4754 patch_prober.go:28] interesting pod/downloads-7954f5f757-q6l9h container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 05 20:06:01 crc kubenswrapper[4754]: I0105 20:06:01.937510 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-q6l9h" podUID="138601a3-19fd-44e5-b817-49b048fe3e88" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 05 20:06:02 crc kubenswrapper[4754]: I0105 20:06:02.012483 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/certified-operators-pg5fc"] Jan 05 20:06:02 crc kubenswrapper[4754]: I0105 20:06:02.026376 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:06:02 crc kubenswrapper[4754]: E0105 20:06:02.026721 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:06:02.526709851 +0000 UTC m=+49.235893725 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:06:02 crc kubenswrapper[4754]: I0105 20:06:02.127472 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:06:02 crc kubenswrapper[4754]: E0105 20:06:02.127928 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-01-05 20:06:02.627913793 +0000 UTC m=+49.337097667 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:06:02 crc kubenswrapper[4754]: I0105 20:06:02.161401 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qh2hz" event={"ID":"20730210-a087-46e2-a311-ffa8a3bc370d","Type":"ContainerStarted","Data":"6b9012971ef18c399c87e5cf080de12557d44cb98fc3049ed50eb38067fa1180"} Jan 05 20:06:02 crc kubenswrapper[4754]: I0105 20:06:02.162615 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-rqlrs" podUID="459e0439-ea70-4646-9cf7-2029f79e64b2" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://fc8fa453fc70ac74bc88f6fb0b6a31f99aed636135f65eb8b21b1121b78193e1" gracePeriod=30 Jan 05 20:06:02 crc kubenswrapper[4754]: I0105 20:06:02.184644 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s26xc" podStartSLOduration=28.184626188 podStartE2EDuration="28.184626188s" podCreationTimestamp="2026-01-05 20:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:06:02.181771564 +0000 UTC m=+48.890955438" watchObservedRunningTime="2026-01-05 20:06:02.184626188 +0000 UTC m=+48.893810062" Jan 05 20:06:02 crc kubenswrapper[4754]: I0105 20:06:02.202353 4754 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-7rbg4" podStartSLOduration=28.202335879 podStartE2EDuration="28.202335879s" podCreationTimestamp="2026-01-05 20:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:06:02.202048531 +0000 UTC m=+48.911232415" watchObservedRunningTime="2026-01-05 20:06:02.202335879 +0000 UTC m=+48.911519753" Jan 05 20:06:02 crc kubenswrapper[4754]: I0105 20:06:02.225130 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mzj9s" podStartSLOduration=28.225111401 podStartE2EDuration="28.225111401s" podCreationTimestamp="2026-01-05 20:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:06:02.222873873 +0000 UTC m=+48.932057767" watchObservedRunningTime="2026-01-05 20:06:02.225111401 +0000 UTC m=+48.934295275" Jan 05 20:06:02 crc kubenswrapper[4754]: I0105 20:06:02.228137 4754 patch_prober.go:28] interesting pod/router-default-5444994796-vnrxg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 05 20:06:02 crc kubenswrapper[4754]: [-]has-synced failed: reason withheld Jan 05 20:06:02 crc kubenswrapper[4754]: [+]process-running ok Jan 05 20:06:02 crc kubenswrapper[4754]: healthz check failed Jan 05 20:06:02 crc kubenswrapper[4754]: I0105 20:06:02.228196 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vnrxg" podUID="16642695-5006-4f7d-829c-becc9345dd6e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 20:06:02 crc kubenswrapper[4754]: I0105 20:06:02.229034 4754 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:06:02 crc kubenswrapper[4754]: E0105 20:06:02.230906 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:06:02.730893922 +0000 UTC m=+49.440077796 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:06:02 crc kubenswrapper[4754]: I0105 20:06:02.264508 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4zc9h"] Jan 05 20:06:02 crc kubenswrapper[4754]: I0105 20:06:02.330195 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:06:02 crc kubenswrapper[4754]: E0105 20:06:02.331305 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:06:02.831275183 +0000 UTC m=+49.540459057 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:06:02 crc kubenswrapper[4754]: I0105 20:06:02.432015 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:06:02 crc kubenswrapper[4754]: E0105 20:06:02.432360 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:06:02.932349041 +0000 UTC m=+49.641532915 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:06:02 crc kubenswrapper[4754]: I0105 20:06:02.485936 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460720-k78dc" Jan 05 20:06:02 crc kubenswrapper[4754]: I0105 20:06:02.535079 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:06:02 crc kubenswrapper[4754]: E0105 20:06:02.535529 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:06:03.035514305 +0000 UTC m=+49.744698179 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:06:02 crc kubenswrapper[4754]: I0105 20:06:02.637047 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e4b1718-5732-4498-b355-25832e158871-secret-volume\") pod \"7e4b1718-5732-4498-b355-25832e158871\" (UID: \"7e4b1718-5732-4498-b355-25832e158871\") " Jan 05 20:06:02 crc kubenswrapper[4754]: I0105 20:06:02.637376 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcnr4\" (UniqueName: \"kubernetes.io/projected/7e4b1718-5732-4498-b355-25832e158871-kube-api-access-pcnr4\") pod \"7e4b1718-5732-4498-b355-25832e158871\" (UID: \"7e4b1718-5732-4498-b355-25832e158871\") " Jan 05 20:06:02 crc kubenswrapper[4754]: I0105 20:06:02.637419 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e4b1718-5732-4498-b355-25832e158871-config-volume\") pod \"7e4b1718-5732-4498-b355-25832e158871\" (UID: \"7e4b1718-5732-4498-b355-25832e158871\") " Jan 05 20:06:02 crc kubenswrapper[4754]: I0105 20:06:02.637892 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:06:02 crc 
kubenswrapper[4754]: E0105 20:06:02.638178 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:06:03.138165705 +0000 UTC m=+49.847349579 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:06:02 crc kubenswrapper[4754]: I0105 20:06:02.643912 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e4b1718-5732-4498-b355-25832e158871-config-volume" (OuterVolumeSpecName: "config-volume") pod "7e4b1718-5732-4498-b355-25832e158871" (UID: "7e4b1718-5732-4498-b355-25832e158871"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:06:02 crc kubenswrapper[4754]: I0105 20:06:02.647655 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e4b1718-5732-4498-b355-25832e158871-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7e4b1718-5732-4498-b355-25832e158871" (UID: "7e4b1718-5732-4498-b355-25832e158871"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:06:02 crc kubenswrapper[4754]: I0105 20:06:02.653974 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e4b1718-5732-4498-b355-25832e158871-kube-api-access-pcnr4" (OuterVolumeSpecName: "kube-api-access-pcnr4") pod "7e4b1718-5732-4498-b355-25832e158871" (UID: "7e4b1718-5732-4498-b355-25832e158871"). InnerVolumeSpecName "kube-api-access-pcnr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:06:02 crc kubenswrapper[4754]: I0105 20:06:02.738536 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:06:02 crc kubenswrapper[4754]: E0105 20:06:02.739252 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:06:03.239205823 +0000 UTC m=+49.948389697 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:06:02 crc kubenswrapper[4754]: I0105 20:06:02.739535 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:06:02 crc kubenswrapper[4754]: I0105 20:06:02.739751 4754 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e4b1718-5732-4498-b355-25832e158871-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 05 20:06:02 crc kubenswrapper[4754]: I0105 20:06:02.739768 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcnr4\" (UniqueName: \"kubernetes.io/projected/7e4b1718-5732-4498-b355-25832e158871-kube-api-access-pcnr4\") on node \"crc\" DevicePath \"\"" Jan 05 20:06:02 crc kubenswrapper[4754]: I0105 20:06:02.739780 4754 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e4b1718-5732-4498-b355-25832e158871-config-volume\") on node \"crc\" DevicePath \"\"" Jan 05 20:06:02 crc kubenswrapper[4754]: E0105 20:06:02.740071 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-05 20:06:03.240061755 +0000 UTC m=+49.949245629 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:06:02 crc kubenswrapper[4754]: I0105 20:06:02.788409 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xm8jv"] Jan 05 20:06:02 crc kubenswrapper[4754]: W0105 20:06:02.799798 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba326117_1dcc_4468_80bb_a54b9cc83c01.slice/crio-a2b600ad7e902a5a62d1c4b2f3e50b801a2eb1021d957a73a2759953ef249690 WatchSource:0}: Error finding container a2b600ad7e902a5a62d1c4b2f3e50b801a2eb1021d957a73a2759953ef249690: Status 404 returned error can't find the container with id a2b600ad7e902a5a62d1c4b2f3e50b801a2eb1021d957a73a2759953ef249690 Jan 05 20:06:02 crc kubenswrapper[4754]: I0105 20:06:02.841011 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:06:02 crc kubenswrapper[4754]: E0105 20:06:02.841516 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-05 20:06:03.341495903 +0000 UTC m=+50.050679777 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:06:02 crc kubenswrapper[4754]: I0105 20:06:02.943133 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:06:02 crc kubenswrapper[4754]: E0105 20:06:02.943487 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:06:03.443475605 +0000 UTC m=+50.152659479 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.026058 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-f6htm"] Jan 05 20:06:03 crc kubenswrapper[4754]: E0105 20:06:03.026425 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e4b1718-5732-4498-b355-25832e158871" containerName="collect-profiles" Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.026454 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e4b1718-5732-4498-b355-25832e158871" containerName="collect-profiles" Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.026647 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e4b1718-5732-4498-b355-25832e158871" containerName="collect-profiles" Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.028169 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f6htm" Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.032473 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f6htm"] Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.044585 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:06:03 crc kubenswrapper[4754]: E0105 20:06:03.045122 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:06:03.545099118 +0000 UTC m=+50.254283022 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.074129 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.139137 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.146688 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.146758 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e16df7cb-3531-4b4e-ad51-275d4ff495d0-catalog-content\") pod \"redhat-marketplace-f6htm\" (UID: \"e16df7cb-3531-4b4e-ad51-275d4ff495d0\") " pod="openshift-marketplace/redhat-marketplace-f6htm" Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.146820 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e16df7cb-3531-4b4e-ad51-275d4ff495d0-utilities\") pod 
\"redhat-marketplace-f6htm\" (UID: \"e16df7cb-3531-4b4e-ad51-275d4ff495d0\") " pod="openshift-marketplace/redhat-marketplace-f6htm" Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.146843 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pftt\" (UniqueName: \"kubernetes.io/projected/e16df7cb-3531-4b4e-ad51-275d4ff495d0-kube-api-access-7pftt\") pod \"redhat-marketplace-f6htm\" (UID: \"e16df7cb-3531-4b4e-ad51-275d4ff495d0\") " pod="openshift-marketplace/redhat-marketplace-f6htm" Jan 05 20:06:03 crc kubenswrapper[4754]: E0105 20:06:03.147162 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:06:03.647142452 +0000 UTC m=+50.356326366 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.154556 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.171066 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-9xwbp" event={"ID":"632ecc3b-8601-4d41-a4f4-8940db9dedae","Type":"ContainerStarted","Data":"fef9f26dfe9937553666637e5bc5679154f92ce5bc71b02f63a1984d3099ba1a"} Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.172692 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-4zc9h" event={"ID":"9653b33c-3e18-4d7f-81ed-febff4a00a35","Type":"ContainerStarted","Data":"283542987a1d067b8ae6231b4401d6384a4a94c1df884164e15729007c4f6dae"} Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.174102 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460720-k78dc" event={"ID":"7e4b1718-5732-4498-b355-25832e158871","Type":"ContainerDied","Data":"20a3b710e97f682be10a8ac0025099c5fc8d4cf5b5c3993873a19e7bee90f763"} Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.174136 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20a3b710e97f682be10a8ac0025099c5fc8d4cf5b5c3993873a19e7bee90f763" Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.174207 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460720-k78dc" Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.178234 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pg5fc" event={"ID":"08caaa2f-775e-45c6-b097-b0550f593ff3","Type":"ContainerStarted","Data":"e30414288abc7d37c580a2a164c3ce7f28e595a063d92aacb1caf912d6f1b109"} Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.179251 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xm8jv" event={"ID":"ba326117-1dcc-4468-80bb-a54b9cc83c01","Type":"ContainerStarted","Data":"a2b600ad7e902a5a62d1c4b2f3e50b801a2eb1021d957a73a2759953ef249690"} Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.181431 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-pxqjg" event={"ID":"ce973cd6-320d-45f5-a09a-037560783218","Type":"ContainerStarted","Data":"7ccc669fc7a821ccf7173def1b93407850752283d30a54555b699816cef209cd"} Jan 05 20:06:03 crc kubenswrapper[4754]: 
I0105 20:06:03.210512 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=0.21049171 podStartE2EDuration="210.49171ms" podCreationTimestamp="2026-01-05 20:06:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:06:03.204226217 +0000 UTC m=+49.913410131" watchObservedRunningTime="2026-01-05 20:06:03.21049171 +0000 UTC m=+49.919675584" Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.228586 4754 patch_prober.go:28] interesting pod/router-default-5444994796-vnrxg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 05 20:06:03 crc kubenswrapper[4754]: [-]has-synced failed: reason withheld Jan 05 20:06:03 crc kubenswrapper[4754]: [+]process-running ok Jan 05 20:06:03 crc kubenswrapper[4754]: healthz check failed Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.228661 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vnrxg" podUID="16642695-5006-4f7d-829c-becc9345dd6e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.247703 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.248121 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/e16df7cb-3531-4b4e-ad51-275d4ff495d0-catalog-content\") pod \"redhat-marketplace-f6htm\" (UID: \"e16df7cb-3531-4b4e-ad51-275d4ff495d0\") " pod="openshift-marketplace/redhat-marketplace-f6htm" Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.248229 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e16df7cb-3531-4b4e-ad51-275d4ff495d0-utilities\") pod \"redhat-marketplace-f6htm\" (UID: \"e16df7cb-3531-4b4e-ad51-275d4ff495d0\") " pod="openshift-marketplace/redhat-marketplace-f6htm" Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.248278 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pftt\" (UniqueName: \"kubernetes.io/projected/e16df7cb-3531-4b4e-ad51-275d4ff495d0-kube-api-access-7pftt\") pod \"redhat-marketplace-f6htm\" (UID: \"e16df7cb-3531-4b4e-ad51-275d4ff495d0\") " pod="openshift-marketplace/redhat-marketplace-f6htm" Jan 05 20:06:03 crc kubenswrapper[4754]: E0105 20:06:03.248858 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:06:03.748835157 +0000 UTC m=+50.458019061 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.249478 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e16df7cb-3531-4b4e-ad51-275d4ff495d0-catalog-content\") pod \"redhat-marketplace-f6htm\" (UID: \"e16df7cb-3531-4b4e-ad51-275d4ff495d0\") " pod="openshift-marketplace/redhat-marketplace-f6htm" Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.249730 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e16df7cb-3531-4b4e-ad51-275d4ff495d0-utilities\") pod \"redhat-marketplace-f6htm\" (UID: \"e16df7cb-3531-4b4e-ad51-275d4ff495d0\") " pod="openshift-marketplace/redhat-marketplace-f6htm" Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.299250 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pftt\" (UniqueName: \"kubernetes.io/projected/e16df7cb-3531-4b4e-ad51-275d4ff495d0-kube-api-access-7pftt\") pod \"redhat-marketplace-f6htm\" (UID: \"e16df7cb-3531-4b4e-ad51-275d4ff495d0\") " pod="openshift-marketplace/redhat-marketplace-f6htm" Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.349629 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:06:03 crc kubenswrapper[4754]: E0105 20:06:03.349992 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:06:03.849979868 +0000 UTC m=+50.559163742 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.388378 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f6htm" Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.423364 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mq9f8"] Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.424402 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mq9f8" Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.431622 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mq9f8"] Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.461571 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:06:03 crc kubenswrapper[4754]: E0105 20:06:03.461940 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:06:03.96192435 +0000 UTC m=+50.671108224 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.563650 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e75e6f92-c2e3-4a65-be91-5921e2426aaf-catalog-content\") pod \"redhat-marketplace-mq9f8\" (UID: \"e75e6f92-c2e3-4a65-be91-5921e2426aaf\") " pod="openshift-marketplace/redhat-marketplace-mq9f8" Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.563702 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.563747 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e75e6f92-c2e3-4a65-be91-5921e2426aaf-utilities\") pod \"redhat-marketplace-mq9f8\" (UID: \"e75e6f92-c2e3-4a65-be91-5921e2426aaf\") " pod="openshift-marketplace/redhat-marketplace-mq9f8" Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.563768 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svd9d\" (UniqueName: 
\"kubernetes.io/projected/e75e6f92-c2e3-4a65-be91-5921e2426aaf-kube-api-access-svd9d\") pod \"redhat-marketplace-mq9f8\" (UID: \"e75e6f92-c2e3-4a65-be91-5921e2426aaf\") " pod="openshift-marketplace/redhat-marketplace-mq9f8" Jan 05 20:06:03 crc kubenswrapper[4754]: E0105 20:06:03.564157 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:06:04.064140128 +0000 UTC m=+50.773324002 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.581771 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f6htm"] Jan 05 20:06:03 crc kubenswrapper[4754]: W0105 20:06:03.587670 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode16df7cb_3531_4b4e_ad51_275d4ff495d0.slice/crio-95248699f19701edd79dde31d5bae21743c2b5602810a673ede4f48053fe7e4d WatchSource:0}: Error finding container 95248699f19701edd79dde31d5bae21743c2b5602810a673ede4f48053fe7e4d: Status 404 returned error can't find the container with id 95248699f19701edd79dde31d5bae21743c2b5602810a673ede4f48053fe7e4d Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.625036 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql695" Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 
20:06:03.625365 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql695" Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.665577 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.665847 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e75e6f92-c2e3-4a65-be91-5921e2426aaf-catalog-content\") pod \"redhat-marketplace-mq9f8\" (UID: \"e75e6f92-c2e3-4a65-be91-5921e2426aaf\") " pod="openshift-marketplace/redhat-marketplace-mq9f8" Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.665955 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e75e6f92-c2e3-4a65-be91-5921e2426aaf-utilities\") pod \"redhat-marketplace-mq9f8\" (UID: \"e75e6f92-c2e3-4a65-be91-5921e2426aaf\") " pod="openshift-marketplace/redhat-marketplace-mq9f8" Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.666010 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svd9d\" (UniqueName: \"kubernetes.io/projected/e75e6f92-c2e3-4a65-be91-5921e2426aaf-kube-api-access-svd9d\") pod \"redhat-marketplace-mq9f8\" (UID: \"e75e6f92-c2e3-4a65-be91-5921e2426aaf\") " pod="openshift-marketplace/redhat-marketplace-mq9f8" Jan 05 20:06:03 crc kubenswrapper[4754]: E0105 20:06:03.666524 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-01-05 20:06:04.166504221 +0000 UTC m=+50.875688095 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.667035 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e75e6f92-c2e3-4a65-be91-5921e2426aaf-catalog-content\") pod \"redhat-marketplace-mq9f8\" (UID: \"e75e6f92-c2e3-4a65-be91-5921e2426aaf\") " pod="openshift-marketplace/redhat-marketplace-mq9f8" Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.667231 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e75e6f92-c2e3-4a65-be91-5921e2426aaf-utilities\") pod \"redhat-marketplace-mq9f8\" (UID: \"e75e6f92-c2e3-4a65-be91-5921e2426aaf\") " pod="openshift-marketplace/redhat-marketplace-mq9f8" Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.692273 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svd9d\" (UniqueName: \"kubernetes.io/projected/e75e6f92-c2e3-4a65-be91-5921e2426aaf-kube-api-access-svd9d\") pod \"redhat-marketplace-mq9f8\" (UID: \"e75e6f92-c2e3-4a65-be91-5921e2426aaf\") " pod="openshift-marketplace/redhat-marketplace-mq9f8" Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.746476 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mq9f8" Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.767687 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:06:03 crc kubenswrapper[4754]: E0105 20:06:03.768213 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:06:04.268190985 +0000 UTC m=+50.977374919 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.780594 4754 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-ql695 container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 05 20:06:03 crc kubenswrapper[4754]: [+]log ok Jan 05 20:06:03 crc kubenswrapper[4754]: [+]etcd ok Jan 05 20:06:03 crc kubenswrapper[4754]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 05 20:06:03 crc kubenswrapper[4754]: [-]poststarthook/generic-apiserver-start-informers failed: reason withheld Jan 05 20:06:03 crc 
kubenswrapper[4754]: [+]poststarthook/max-in-flight-filter ok Jan 05 20:06:03 crc kubenswrapper[4754]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 05 20:06:03 crc kubenswrapper[4754]: [+]poststarthook/openshift.io-StartUserInformer ok Jan 05 20:06:03 crc kubenswrapper[4754]: [+]poststarthook/openshift.io-StartOAuthInformer ok Jan 05 20:06:03 crc kubenswrapper[4754]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Jan 05 20:06:03 crc kubenswrapper[4754]: livez check failed Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.780667 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql695" podUID="296f71e4-2a83-467e-b40a-87b1e40330b9" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.869463 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:06:03 crc kubenswrapper[4754]: E0105 20:06:03.869969 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:06:04.369949272 +0000 UTC m=+51.079133146 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.883488 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pjzck" Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.926899 4754 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 05 20:06:03 crc kubenswrapper[4754]: I0105 20:06:03.973273 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:06:03 crc kubenswrapper[4754]: E0105 20:06:03.973734 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:06:04.473718721 +0000 UTC m=+51.182902595 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.028438 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-57nsw" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.029553 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z52n8"] Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.034648 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z52n8" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.035622 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-57nsw" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.036623 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.051683 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z52n8"] Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.065170 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mzj9s" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.074199 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:06:04 crc kubenswrapper[4754]: E0105 20:06:04.074278 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:06:04.574260766 +0000 UTC m=+51.283444640 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.074645 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31687145-8349-44b0-9e77-de73e4738916-utilities\") pod \"redhat-operators-z52n8\" (UID: \"31687145-8349-44b0-9e77-de73e4738916\") " pod="openshift-marketplace/redhat-operators-z52n8" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.074707 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31687145-8349-44b0-9e77-de73e4738916-catalog-content\") pod \"redhat-operators-z52n8\" (UID: \"31687145-8349-44b0-9e77-de73e4738916\") " pod="openshift-marketplace/redhat-operators-z52n8" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.074773 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-c6ncj\" (UniqueName: \"kubernetes.io/projected/31687145-8349-44b0-9e77-de73e4738916-kube-api-access-c6ncj\") pod \"redhat-operators-z52n8\" (UID: \"31687145-8349-44b0-9e77-de73e4738916\") " pod="openshift-marketplace/redhat-operators-z52n8" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.074797 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:06:04 crc kubenswrapper[4754]: E0105 20:06:04.076874 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:06:04.576853003 +0000 UTC m=+51.286036987 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.077013 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.078131 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.088841 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.089589 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.090986 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 05 20:06:04 crc kubenswrapper[4754]: E0105 20:06:04.103451 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fc8fa453fc70ac74bc88f6fb0b6a31f99aed636135f65eb8b21b1121b78193e1" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 05 20:06:04 crc kubenswrapper[4754]: E0105 20:06:04.110058 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fc8fa453fc70ac74bc88f6fb0b6a31f99aed636135f65eb8b21b1121b78193e1" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 05 20:06:04 crc kubenswrapper[4754]: E0105 20:06:04.112465 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fc8fa453fc70ac74bc88f6fb0b6a31f99aed636135f65eb8b21b1121b78193e1" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 05 20:06:04 crc kubenswrapper[4754]: E0105 20:06:04.112543 4754 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , 
exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-rqlrs" podUID="459e0439-ea70-4646-9cf7-2029f79e64b2" containerName="kube-multus-additional-cni-plugins" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.120406 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-4xgft" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.120487 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-4xgft" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.122467 4754 patch_prober.go:28] interesting pod/console-f9d7485db-4xgft container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.122528 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-4xgft" podUID="f3ea7eb1-87d5-476b-bb30-2c94421afc41" containerName="console" probeResult="failure" output="Get \"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.175651 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.175903 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6ncj\" (UniqueName: \"kubernetes.io/projected/31687145-8349-44b0-9e77-de73e4738916-kube-api-access-c6ncj\") pod \"redhat-operators-z52n8\" (UID: \"31687145-8349-44b0-9e77-de73e4738916\") " 
pod="openshift-marketplace/redhat-operators-z52n8" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.175996 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31687145-8349-44b0-9e77-de73e4738916-utilities\") pod \"redhat-operators-z52n8\" (UID: \"31687145-8349-44b0-9e77-de73e4738916\") " pod="openshift-marketplace/redhat-operators-z52n8" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.176040 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31687145-8349-44b0-9e77-de73e4738916-catalog-content\") pod \"redhat-operators-z52n8\" (UID: \"31687145-8349-44b0-9e77-de73e4738916\") " pod="openshift-marketplace/redhat-operators-z52n8" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.176440 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31687145-8349-44b0-9e77-de73e4738916-catalog-content\") pod \"redhat-operators-z52n8\" (UID: \"31687145-8349-44b0-9e77-de73e4738916\") " pod="openshift-marketplace/redhat-operators-z52n8" Jan 05 20:06:04 crc kubenswrapper[4754]: E0105 20:06:04.176514 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:06:04.676499895 +0000 UTC m=+51.385683769 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.176936 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31687145-8349-44b0-9e77-de73e4738916-utilities\") pod \"redhat-operators-z52n8\" (UID: \"31687145-8349-44b0-9e77-de73e4738916\") " pod="openshift-marketplace/redhat-operators-z52n8" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.187233 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qh2hz" event={"ID":"20730210-a087-46e2-a311-ffa8a3bc370d","Type":"ContainerStarted","Data":"4c7539f7b9956afa33a4be0fba0c0797d81694b988643ffdab3715f49b86f01b"} Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.188702 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pg5fc" event={"ID":"08caaa2f-775e-45c6-b097-b0550f593ff3","Type":"ContainerStarted","Data":"0842c03556dd94b67e801ea64b09d2cd0abb1d9f83cdc4671975f1bd5bf5446d"} Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.191615 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xm8jv" event={"ID":"ba326117-1dcc-4468-80bb-a54b9cc83c01","Type":"ContainerStarted","Data":"666799136ac8acdd79d8df3a956e3b44b69494a9c01f1c44097dc5a582ca0d36"} Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.193161 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f6htm" 
event={"ID":"e16df7cb-3531-4b4e-ad51-275d4ff495d0","Type":"ContainerStarted","Data":"95248699f19701edd79dde31d5bae21743c2b5602810a673ede4f48053fe7e4d"} Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.194189 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4zc9h" event={"ID":"9653b33c-3e18-4d7f-81ed-febff4a00a35","Type":"ContainerStarted","Data":"b3b58ba5579014bf4b7259076c99c989cc2a7a5af25685ffac3c218e68c0a2cc"} Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.195923 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6ncj\" (UniqueName: \"kubernetes.io/projected/31687145-8349-44b0-9e77-de73e4738916-kube-api-access-c6ncj\") pod \"redhat-operators-z52n8\" (UID: \"31687145-8349-44b0-9e77-de73e4738916\") " pod="openshift-marketplace/redhat-operators-z52n8" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.195967 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-njtd4" event={"ID":"03f29d3f-9221-484d-aa70-8889d57f7de1","Type":"ContainerStarted","Data":"fe944402f6770685a0679f35019e337bde4d144b5026c34c399bf58fa92e8f45"} Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.196265 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-9xwbp" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.225509 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-vnrxg" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.227816 4754 patch_prober.go:28] interesting pod/router-default-5444994796-vnrxg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 05 20:06:04 crc kubenswrapper[4754]: [-]has-synced failed: reason withheld Jan 05 20:06:04 crc kubenswrapper[4754]: 
[+]process-running ok Jan 05 20:06:04 crc kubenswrapper[4754]: healthz check failed Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.227851 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vnrxg" podUID="16642695-5006-4f7d-829c-becc9345dd6e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.248247 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-9xwbp" podStartSLOduration=15.248230981 podStartE2EDuration="15.248230981s" podCreationTimestamp="2026-01-05 20:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:06:04.217863971 +0000 UTC m=+50.927047835" watchObservedRunningTime="2026-01-05 20:06:04.248230981 +0000 UTC m=+50.957414855" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.250131 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mq9f8"] Jan 05 20:06:04 crc kubenswrapper[4754]: W0105 20:06:04.257528 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode75e6f92_c2e3_4a65_be91_5921e2426aaf.slice/crio-e5e796d041916bcdc21e3cb99927a2147b77b19312ea5ef4844916151ca97bb7 WatchSource:0}: Error finding container e5e796d041916bcdc21e3cb99927a2147b77b19312ea5ef4844916151ca97bb7: Status 404 returned error can't find the container with id e5e796d041916bcdc21e3cb99927a2147b77b19312ea5ef4844916151ca97bb7 Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.277093 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.277142 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d93c1b83-0763-484f-9983-e96921203738-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"d93c1b83-0763-484f-9983-e96921203738\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.277256 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d93c1b83-0763-484f-9983-e96921203738-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"d93c1b83-0763-484f-9983-e96921203738\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 05 20:06:04 crc kubenswrapper[4754]: E0105 20:06:04.277565 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:06:04.777542363 +0000 UTC m=+51.486726327 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.378787 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:06:04 crc kubenswrapper[4754]: E0105 20:06:04.379052 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:06:04.879012422 +0000 UTC m=+51.588196326 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.379306 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d93c1b83-0763-484f-9983-e96921203738-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"d93c1b83-0763-484f-9983-e96921203738\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.379403 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.379419 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d93c1b83-0763-484f-9983-e96921203738-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"d93c1b83-0763-484f-9983-e96921203738\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.379436 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d93c1b83-0763-484f-9983-e96921203738-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"d93c1b83-0763-484f-9983-e96921203738\") " 
pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 05 20:06:04 crc kubenswrapper[4754]: E0105 20:06:04.379701 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:06:04.87968814 +0000 UTC m=+51.588872014 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.389427 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z52n8" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.399999 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d93c1b83-0763-484f-9983-e96921203738-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"d93c1b83-0763-484f-9983-e96921203738\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.408748 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.433861 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6s8pm"] Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.435157 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6s8pm" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.445709 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6s8pm"] Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.481329 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:06:04 crc kubenswrapper[4754]: E0105 20:06:04.481684 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 20:06:04.981666042 +0000 UTC m=+51.690849916 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.583106 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09-utilities\") pod \"redhat-operators-6s8pm\" (UID: \"38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09\") " pod="openshift-marketplace/redhat-operators-6s8pm" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.583166 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5txst\" (UniqueName: \"kubernetes.io/projected/38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09-kube-api-access-5txst\") pod \"redhat-operators-6s8pm\" (UID: \"38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09\") " pod="openshift-marketplace/redhat-operators-6s8pm" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.583229 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09-catalog-content\") pod \"redhat-operators-6s8pm\" (UID: \"38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09\") " pod="openshift-marketplace/redhat-operators-6s8pm" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.583316 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:06:04 crc kubenswrapper[4754]: E0105 20:06:04.583645 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 20:06:05.083629574 +0000 UTC m=+51.792813448 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q7tmr" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.599709 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z52n8"] Jan 05 20:06:04 crc kubenswrapper[4754]: W0105 20:06:04.601175 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31687145_8349_44b0_9e77_de73e4738916.slice/crio-f877c34b0aece8098d6f34391e82a014933227cf264e424983d16ff5a9c51acb WatchSource:0}: Error finding container f877c34b0aece8098d6f34391e82a014933227cf264e424983d16ff5a9c51acb: Status 404 returned error can't find the container with id f877c34b0aece8098d6f34391e82a014933227cf264e424983d16ff5a9c51acb Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.620216 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.629356 4754 reconciler.go:161] "OperationExecutor.RegisterPlugin started" 
plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-05T20:06:03.926924954Z","Handler":null,"Name":""} Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.632822 4754 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.632860 4754 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.684619 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.684843 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09-utilities\") pod \"redhat-operators-6s8pm\" (UID: \"38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09\") " pod="openshift-marketplace/redhat-operators-6s8pm" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.684889 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5txst\" (UniqueName: \"kubernetes.io/projected/38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09-kube-api-access-5txst\") pod \"redhat-operators-6s8pm\" (UID: \"38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09\") " pod="openshift-marketplace/redhat-operators-6s8pm" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.684950 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.684979 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09-catalog-content\") pod \"redhat-operators-6s8pm\" (UID: \"38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09\") " pod="openshift-marketplace/redhat-operators-6s8pm" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.685267 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09-utilities\") pod \"redhat-operators-6s8pm\" (UID: \"38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09\") " pod="openshift-marketplace/redhat-operators-6s8pm" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.685491 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09-catalog-content\") pod \"redhat-operators-6s8pm\" (UID: \"38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09\") " pod="openshift-marketplace/redhat-operators-6s8pm" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.688325 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.690961 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.699720 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5txst\" (UniqueName: \"kubernetes.io/projected/38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09-kube-api-access-5txst\") pod \"redhat-operators-6s8pm\" (UID: \"38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09\") " pod="openshift-marketplace/redhat-operators-6s8pm" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.759586 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6s8pm" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.786721 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.786808 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.786837 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.786867 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.787960 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.790098 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.790817 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.817075 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.829159 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.835265 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.937476 4754 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 05 20:06:04 crc kubenswrapper[4754]: I0105 20:06:04.937893 4754 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:06:05 crc kubenswrapper[4754]: I0105 20:06:05.022242 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q7tmr\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:06:05 crc kubenswrapper[4754]: W0105 20:06:05.145451 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-232f235c4e42efad575563af22f2b38d491f364f12855096391efd6d4f6f2ba2 WatchSource:0}: Error finding container 232f235c4e42efad575563af22f2b38d491f364f12855096391efd6d4f6f2ba2: Status 404 returned error can't find the container with id 232f235c4e42efad575563af22f2b38d491f364f12855096391efd6d4f6f2ba2 Jan 05 20:06:05 crc kubenswrapper[4754]: W0105 20:06:05.193936 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-ffc94cd63dedb169c0d917780d81502d63b7edb3a3f2cf60e2f52ba2fa37bcb9 WatchSource:0}: Error finding container 
ffc94cd63dedb169c0d917780d81502d63b7edb3a3f2cf60e2f52ba2fa37bcb9: Status 404 returned error can't find the container with id ffc94cd63dedb169c0d917780d81502d63b7edb3a3f2cf60e2f52ba2fa37bcb9 Jan 05 20:06:05 crc kubenswrapper[4754]: I0105 20:06:05.204243 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"ffc94cd63dedb169c0d917780d81502d63b7edb3a3f2cf60e2f52ba2fa37bcb9"} Jan 05 20:06:05 crc kubenswrapper[4754]: I0105 20:06:05.207141 4754 generic.go:334] "Generic (PLEG): container finished" podID="e16df7cb-3531-4b4e-ad51-275d4ff495d0" containerID="1292ee55bf25c8eb9c9981d0f4a4f02a72a9ef48fc8a62f3d9e28b716d3b89b7" exitCode=0 Jan 05 20:06:05 crc kubenswrapper[4754]: I0105 20:06:05.207225 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f6htm" event={"ID":"e16df7cb-3531-4b4e-ad51-275d4ff495d0","Type":"ContainerDied","Data":"1292ee55bf25c8eb9c9981d0f4a4f02a72a9ef48fc8a62f3d9e28b716d3b89b7"} Jan 05 20:06:05 crc kubenswrapper[4754]: I0105 20:06:05.210941 4754 generic.go:334] "Generic (PLEG): container finished" podID="9653b33c-3e18-4d7f-81ed-febff4a00a35" containerID="b3b58ba5579014bf4b7259076c99c989cc2a7a5af25685ffac3c218e68c0a2cc" exitCode=0 Jan 05 20:06:05 crc kubenswrapper[4754]: I0105 20:06:05.210995 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4zc9h" event={"ID":"9653b33c-3e18-4d7f-81ed-febff4a00a35","Type":"ContainerDied","Data":"b3b58ba5579014bf4b7259076c99c989cc2a7a5af25685ffac3c218e68c0a2cc"} Jan 05 20:06:05 crc kubenswrapper[4754]: I0105 20:06:05.212284 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d93c1b83-0763-484f-9983-e96921203738","Type":"ContainerStarted","Data":"294fca15ff116ec7997a76833729c03829f5885c03b5f7123fb28e1c0ba0f155"} 
Jan 05 20:06:05 crc kubenswrapper[4754]: I0105 20:06:05.213488 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"232f235c4e42efad575563af22f2b38d491f364f12855096391efd6d4f6f2ba2"} Jan 05 20:06:05 crc kubenswrapper[4754]: I0105 20:06:05.216077 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z52n8" event={"ID":"31687145-8349-44b0-9e77-de73e4738916","Type":"ContainerStarted","Data":"f877c34b0aece8098d6f34391e82a014933227cf264e424983d16ff5a9c51acb"} Jan 05 20:06:05 crc kubenswrapper[4754]: I0105 20:06:05.217628 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mq9f8" event={"ID":"e75e6f92-c2e3-4a65-be91-5921e2426aaf","Type":"ContainerStarted","Data":"e5e796d041916bcdc21e3cb99927a2147b77b19312ea5ef4844916151ca97bb7"} Jan 05 20:06:05 crc kubenswrapper[4754]: I0105 20:06:05.220829 4754 generic.go:334] "Generic (PLEG): container finished" podID="ba326117-1dcc-4468-80bb-a54b9cc83c01" containerID="666799136ac8acdd79d8df3a956e3b44b69494a9c01f1c44097dc5a582ca0d36" exitCode=0 Jan 05 20:06:05 crc kubenswrapper[4754]: I0105 20:06:05.220913 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xm8jv" event={"ID":"ba326117-1dcc-4468-80bb-a54b9cc83c01","Type":"ContainerDied","Data":"666799136ac8acdd79d8df3a956e3b44b69494a9c01f1c44097dc5a582ca0d36"} Jan 05 20:06:05 crc kubenswrapper[4754]: I0105 20:06:05.229316 4754 generic.go:334] "Generic (PLEG): container finished" podID="20730210-a087-46e2-a311-ffa8a3bc370d" containerID="4c7539f7b9956afa33a4be0fba0c0797d81694b988643ffdab3715f49b86f01b" exitCode=0 Jan 05 20:06:05 crc kubenswrapper[4754]: I0105 20:06:05.229631 4754 patch_prober.go:28] interesting pod/router-default-5444994796-vnrxg container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 05 20:06:05 crc kubenswrapper[4754]: [-]has-synced failed: reason withheld Jan 05 20:06:05 crc kubenswrapper[4754]: [+]process-running ok Jan 05 20:06:05 crc kubenswrapper[4754]: healthz check failed Jan 05 20:06:05 crc kubenswrapper[4754]: I0105 20:06:05.229694 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vnrxg" podUID="16642695-5006-4f7d-829c-becc9345dd6e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 20:06:05 crc kubenswrapper[4754]: I0105 20:06:05.229654 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qh2hz" event={"ID":"20730210-a087-46e2-a311-ffa8a3bc370d","Type":"ContainerDied","Data":"4c7539f7b9956afa33a4be0fba0c0797d81694b988643ffdab3715f49b86f01b"} Jan 05 20:06:05 crc kubenswrapper[4754]: I0105 20:06:05.229828 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6s8pm"] Jan 05 20:06:05 crc kubenswrapper[4754]: I0105 20:06:05.231401 4754 generic.go:334] "Generic (PLEG): container finished" podID="08caaa2f-775e-45c6-b097-b0550f593ff3" containerID="0842c03556dd94b67e801ea64b09d2cd0abb1d9f83cdc4671975f1bd5bf5446d" exitCode=0 Jan 05 20:06:05 crc kubenswrapper[4754]: I0105 20:06:05.232221 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pg5fc" event={"ID":"08caaa2f-775e-45c6-b097-b0550f593ff3","Type":"ContainerDied","Data":"0842c03556dd94b67e801ea64b09d2cd0abb1d9f83cdc4671975f1bd5bf5446d"} Jan 05 20:06:05 crc kubenswrapper[4754]: I0105 20:06:05.284711 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:06:05 crc kubenswrapper[4754]: I0105 20:06:05.600842 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 05 20:06:05 crc kubenswrapper[4754]: I0105 20:06:05.704508 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 05 20:06:05 crc kubenswrapper[4754]: I0105 20:06:05.705367 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 05 20:06:05 crc kubenswrapper[4754]: I0105 20:06:05.709173 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 05 20:06:05 crc kubenswrapper[4754]: I0105 20:06:05.709468 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 05 20:06:05 crc kubenswrapper[4754]: I0105 20:06:05.719379 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 05 20:06:05 crc kubenswrapper[4754]: I0105 20:06:05.731619 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/58b944aa-786c-43ce-b42e-fb89b4842d5f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"58b944aa-786c-43ce-b42e-fb89b4842d5f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 05 20:06:05 crc kubenswrapper[4754]: I0105 20:06:05.731673 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/58b944aa-786c-43ce-b42e-fb89b4842d5f-kube-api-access\") pod 
\"revision-pruner-9-crc\" (UID: \"58b944aa-786c-43ce-b42e-fb89b4842d5f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 05 20:06:05 crc kubenswrapper[4754]: I0105 20:06:05.745994 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-q7tmr"] Jan 05 20:06:05 crc kubenswrapper[4754]: W0105 20:06:05.750684 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcde779ab_81b0_4dc7_a6a4_82db63d46577.slice/crio-a821a565894be20e851a644d1d18f23b1d322c4e175d04045ce7b6e46ca1586d WatchSource:0}: Error finding container a821a565894be20e851a644d1d18f23b1d322c4e175d04045ce7b6e46ca1586d: Status 404 returned error can't find the container with id a821a565894be20e851a644d1d18f23b1d322c4e175d04045ce7b6e46ca1586d Jan 05 20:06:05 crc kubenswrapper[4754]: I0105 20:06:05.833125 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/58b944aa-786c-43ce-b42e-fb89b4842d5f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"58b944aa-786c-43ce-b42e-fb89b4842d5f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 05 20:06:05 crc kubenswrapper[4754]: I0105 20:06:05.833174 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/58b944aa-786c-43ce-b42e-fb89b4842d5f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"58b944aa-786c-43ce-b42e-fb89b4842d5f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 05 20:06:05 crc kubenswrapper[4754]: I0105 20:06:05.833430 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/58b944aa-786c-43ce-b42e-fb89b4842d5f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"58b944aa-786c-43ce-b42e-fb89b4842d5f\") " 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 05 20:06:05 crc kubenswrapper[4754]: I0105 20:06:05.856249 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/58b944aa-786c-43ce-b42e-fb89b4842d5f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"58b944aa-786c-43ce-b42e-fb89b4842d5f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 05 20:06:06 crc kubenswrapper[4754]: I0105 20:06:06.036166 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 05 20:06:06 crc kubenswrapper[4754]: I0105 20:06:06.228599 4754 patch_prober.go:28] interesting pod/router-default-5444994796-vnrxg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 05 20:06:06 crc kubenswrapper[4754]: [-]has-synced failed: reason withheld Jan 05 20:06:06 crc kubenswrapper[4754]: [+]process-running ok Jan 05 20:06:06 crc kubenswrapper[4754]: healthz check failed Jan 05 20:06:06 crc kubenswrapper[4754]: I0105 20:06:06.228855 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vnrxg" podUID="16642695-5006-4f7d-829c-becc9345dd6e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 20:06:06 crc kubenswrapper[4754]: I0105 20:06:06.229281 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 05 20:06:06 crc kubenswrapper[4754]: W0105 20:06:06.232832 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod58b944aa_786c_43ce_b42e_fb89b4842d5f.slice/crio-77dbe08dbaf35587a4be44b0f37d2f259464b8134e8d4cf1f73725706a458524 WatchSource:0}: Error finding container 
77dbe08dbaf35587a4be44b0f37d2f259464b8134e8d4cf1f73725706a458524: Status 404 returned error can't find the container with id 77dbe08dbaf35587a4be44b0f37d2f259464b8134e8d4cf1f73725706a458524 Jan 05 20:06:06 crc kubenswrapper[4754]: I0105 20:06:06.240551 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"58b944aa-786c-43ce-b42e-fb89b4842d5f","Type":"ContainerStarted","Data":"77dbe08dbaf35587a4be44b0f37d2f259464b8134e8d4cf1f73725706a458524"} Jan 05 20:06:06 crc kubenswrapper[4754]: I0105 20:06:06.242320 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" event={"ID":"cde779ab-81b0-4dc7-a6a4-82db63d46577","Type":"ContainerStarted","Data":"a821a565894be20e851a644d1d18f23b1d322c4e175d04045ce7b6e46ca1586d"} Jan 05 20:06:06 crc kubenswrapper[4754]: I0105 20:06:06.253752 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"7769787b8a38cff9d004bc0062890acbc24c659d1d51ff271a392436b3bd497b"} Jan 05 20:06:06 crc kubenswrapper[4754]: I0105 20:06:06.274481 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6s8pm" event={"ID":"38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09","Type":"ContainerStarted","Data":"ed23ca3814e45a3498b6dae6814f1d703d8f8f0379b8def9ae83ba2d46975dc7"} Jan 05 20:06:06 crc kubenswrapper[4754]: I0105 20:06:06.275558 4754 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 05 20:06:07 crc kubenswrapper[4754]: I0105 20:06:07.228679 4754 patch_prober.go:28] interesting pod/router-default-5444994796-vnrxg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld 
Jan 05 20:06:07 crc kubenswrapper[4754]: [-]has-synced failed: reason withheld Jan 05 20:06:07 crc kubenswrapper[4754]: [+]process-running ok Jan 05 20:06:07 crc kubenswrapper[4754]: healthz check failed Jan 05 20:06:07 crc kubenswrapper[4754]: I0105 20:06:07.229234 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vnrxg" podUID="16642695-5006-4f7d-829c-becc9345dd6e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 20:06:08 crc kubenswrapper[4754]: I0105 20:06:08.228453 4754 patch_prober.go:28] interesting pod/router-default-5444994796-vnrxg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 05 20:06:08 crc kubenswrapper[4754]: [-]has-synced failed: reason withheld Jan 05 20:06:08 crc kubenswrapper[4754]: [+]process-running ok Jan 05 20:06:08 crc kubenswrapper[4754]: healthz check failed Jan 05 20:06:08 crc kubenswrapper[4754]: I0105 20:06:08.228525 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vnrxg" podUID="16642695-5006-4f7d-829c-becc9345dd6e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 20:06:08 crc kubenswrapper[4754]: I0105 20:06:08.288262 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d93c1b83-0763-484f-9983-e96921203738","Type":"ContainerStarted","Data":"29b64fddc2f0190091c5220933a950b2a0aa6663d83c376f30a4b0ff9dde264b"} Jan 05 20:06:08 crc kubenswrapper[4754]: I0105 20:06:08.289843 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"6c094bf079dddf8fafaefa9157f0bfa6bfd7c69305a8a2f0d04f2c98d163b437"} Jan 05 20:06:08 crc kubenswrapper[4754]: I0105 20:06:08.628039 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql695" Jan 05 20:06:08 crc kubenswrapper[4754]: I0105 20:06:08.633864 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql695" Jan 05 20:06:09 crc kubenswrapper[4754]: I0105 20:06:09.101724 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-9xwbp" Jan 05 20:06:09 crc kubenswrapper[4754]: I0105 20:06:09.227784 4754 patch_prober.go:28] interesting pod/router-default-5444994796-vnrxg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 05 20:06:09 crc kubenswrapper[4754]: [-]has-synced failed: reason withheld Jan 05 20:06:09 crc kubenswrapper[4754]: [+]process-running ok Jan 05 20:06:09 crc kubenswrapper[4754]: healthz check failed Jan 05 20:06:09 crc kubenswrapper[4754]: I0105 20:06:09.227852 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vnrxg" podUID="16642695-5006-4f7d-829c-becc9345dd6e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 20:06:09 crc kubenswrapper[4754]: I0105 20:06:09.295777 4754 generic.go:334] "Generic (PLEG): container finished" podID="e75e6f92-c2e3-4a65-be91-5921e2426aaf" containerID="81caa9c596183b8beb1bf447d0bf9682ac8cd9be6238e76170f8e8a4f80e9a86" exitCode=0 Jan 05 20:06:09 crc kubenswrapper[4754]: I0105 20:06:09.295842 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mq9f8" 
event={"ID":"e75e6f92-c2e3-4a65-be91-5921e2426aaf","Type":"ContainerDied","Data":"81caa9c596183b8beb1bf447d0bf9682ac8cd9be6238e76170f8e8a4f80e9a86"} Jan 05 20:06:09 crc kubenswrapper[4754]: I0105 20:06:09.297655 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-pxqjg" event={"ID":"ce973cd6-320d-45f5-a09a-037560783218","Type":"ContainerStarted","Data":"39dcba2118eee707e7e60a6eff9f1dedfc8e1289a057809c08a430a0b0a5b67a"} Jan 05 20:06:09 crc kubenswrapper[4754]: I0105 20:06:09.298514 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"c83026077d8a209d618bc9fa9963d428497c20b591e707882ba814201f0e1c31"} Jan 05 20:06:09 crc kubenswrapper[4754]: I0105 20:06:09.299314 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"58b944aa-786c-43ce-b42e-fb89b4842d5f","Type":"ContainerStarted","Data":"230d5c44322ecc1189bc64240e40b037f2794b660279c8863e15e7c440ecc732"} Jan 05 20:06:09 crc kubenswrapper[4754]: I0105 20:06:09.300055 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" event={"ID":"cde779ab-81b0-4dc7-a6a4-82db63d46577","Type":"ContainerStarted","Data":"3c4dafa5e00b3ba80c0ebb7147f5f11d3af47514a922ff374fdc9f95bf8eb427"} Jan 05 20:06:09 crc kubenswrapper[4754]: I0105 20:06:09.300901 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"572428020ecdae6dcc6e46d029c547018fa2b5f417d7445e7432b927a7b7e3a1"} Jan 05 20:06:09 crc kubenswrapper[4754]: I0105 20:06:09.302200 4754 generic.go:334] "Generic (PLEG): container finished" podID="38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09" 
containerID="d93c50b5f8e3d88a0b7bd93649134547a1ea2aa594aee3e1dfbd14f08b9edb99" exitCode=0 Jan 05 20:06:09 crc kubenswrapper[4754]: I0105 20:06:09.302234 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6s8pm" event={"ID":"38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09","Type":"ContainerDied","Data":"d93c50b5f8e3d88a0b7bd93649134547a1ea2aa594aee3e1dfbd14f08b9edb99"} Jan 05 20:06:09 crc kubenswrapper[4754]: I0105 20:06:09.303146 4754 generic.go:334] "Generic (PLEG): container finished" podID="31687145-8349-44b0-9e77-de73e4738916" containerID="516fe302b48521acf7aaf85d50c60d84af871ada2a68486e6f373fe76a292f7d" exitCode=0 Jan 05 20:06:09 crc kubenswrapper[4754]: I0105 20:06:09.303795 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z52n8" event={"ID":"31687145-8349-44b0-9e77-de73e4738916","Type":"ContainerDied","Data":"516fe302b48521acf7aaf85d50c60d84af871ada2a68486e6f373fe76a292f7d"} Jan 05 20:06:09 crc kubenswrapper[4754]: I0105 20:06:09.339578 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=5.33955338 podStartE2EDuration="5.33955338s" podCreationTimestamp="2026-01-05 20:06:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:06:09.321808378 +0000 UTC m=+56.030992292" watchObservedRunningTime="2026-01-05 20:06:09.33955338 +0000 UTC m=+56.048737274" Jan 05 20:06:10 crc kubenswrapper[4754]: I0105 20:06:10.231984 4754 patch_prober.go:28] interesting pod/router-default-5444994796-vnrxg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 05 20:06:10 crc kubenswrapper[4754]: [-]has-synced failed: reason withheld Jan 05 20:06:10 crc kubenswrapper[4754]: 
[+]process-running ok Jan 05 20:06:10 crc kubenswrapper[4754]: healthz check failed Jan 05 20:06:10 crc kubenswrapper[4754]: I0105 20:06:10.232410 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vnrxg" podUID="16642695-5006-4f7d-829c-becc9345dd6e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 20:06:10 crc kubenswrapper[4754]: I0105 20:06:10.311735 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-njtd4" event={"ID":"03f29d3f-9221-484d-aa70-8889d57f7de1","Type":"ContainerStarted","Data":"3080bdf9353b35ab97cd532efb1a6eaec91a209e1f64d7e0f7f5f3312644a832"} Jan 05 20:06:10 crc kubenswrapper[4754]: I0105 20:06:10.313577 4754 generic.go:334] "Generic (PLEG): container finished" podID="58b944aa-786c-43ce-b42e-fb89b4842d5f" containerID="230d5c44322ecc1189bc64240e40b037f2794b660279c8863e15e7c440ecc732" exitCode=0 Jan 05 20:06:10 crc kubenswrapper[4754]: I0105 20:06:10.313632 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"58b944aa-786c-43ce-b42e-fb89b4842d5f","Type":"ContainerDied","Data":"230d5c44322ecc1189bc64240e40b037f2794b660279c8863e15e7c440ecc732"} Jan 05 20:06:10 crc kubenswrapper[4754]: I0105 20:06:10.320744 4754 generic.go:334] "Generic (PLEG): container finished" podID="d93c1b83-0763-484f-9983-e96921203738" containerID="29b64fddc2f0190091c5220933a950b2a0aa6663d83c376f30a4b0ff9dde264b" exitCode=0 Jan 05 20:06:10 crc kubenswrapper[4754]: I0105 20:06:10.320848 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d93c1b83-0763-484f-9983-e96921203738","Type":"ContainerDied","Data":"29b64fddc2f0190091c5220933a950b2a0aa6663d83c376f30a4b0ff9dde264b"} Jan 05 20:06:10 crc kubenswrapper[4754]: I0105 20:06:10.321624 4754 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 20:06:10 crc kubenswrapper[4754]: I0105 20:06:10.321950 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:06:10 crc kubenswrapper[4754]: I0105 20:06:10.434362 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" podStartSLOduration=36.434336564 podStartE2EDuration="36.434336564s" podCreationTimestamp="2026-01-05 20:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:06:10.412311191 +0000 UTC m=+57.121495065" watchObservedRunningTime="2026-01-05 20:06:10.434336564 +0000 UTC m=+57.143520438" Jan 05 20:06:10 crc kubenswrapper[4754]: I0105 20:06:10.470853 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-pxqjg" podStartSLOduration=38.470832213 podStartE2EDuration="38.470832213s" podCreationTimestamp="2026-01-05 20:05:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:06:10.46880192 +0000 UTC m=+57.177985794" watchObservedRunningTime="2026-01-05 20:06:10.470832213 +0000 UTC m=+57.180016087" Jan 05 20:06:11 crc kubenswrapper[4754]: I0105 20:06:11.233545 4754 patch_prober.go:28] interesting pod/router-default-5444994796-vnrxg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 05 20:06:11 crc kubenswrapper[4754]: [-]has-synced failed: reason withheld Jan 05 20:06:11 crc kubenswrapper[4754]: [+]process-running ok Jan 05 20:06:11 crc kubenswrapper[4754]: healthz check failed Jan 05 20:06:11 crc 
kubenswrapper[4754]: I0105 20:06:11.234004 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vnrxg" podUID="16642695-5006-4f7d-829c-becc9345dd6e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 20:06:11 crc kubenswrapper[4754]: I0105 20:06:11.332020 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-njtd4" event={"ID":"03f29d3f-9221-484d-aa70-8889d57f7de1","Type":"ContainerStarted","Data":"225622442b3c9e01f8ac047736d0fa3f353323f4686d6d69cbf9d60dee2fb9ec"} Jan 05 20:06:11 crc kubenswrapper[4754]: I0105 20:06:11.358526 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-njtd4" podStartSLOduration=22.358506881 podStartE2EDuration="22.358506881s" podCreationTimestamp="2026-01-05 20:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:06:11.355957375 +0000 UTC m=+58.065141249" watchObservedRunningTime="2026-01-05 20:06:11.358506881 +0000 UTC m=+58.067690765" Jan 05 20:06:11 crc kubenswrapper[4754]: I0105 20:06:11.591836 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 05 20:06:11 crc kubenswrapper[4754]: I0105 20:06:11.596271 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 05 20:06:11 crc kubenswrapper[4754]: I0105 20:06:11.728840 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d93c1b83-0763-484f-9983-e96921203738-kube-api-access\") pod \"d93c1b83-0763-484f-9983-e96921203738\" (UID: \"d93c1b83-0763-484f-9983-e96921203738\") " Jan 05 20:06:11 crc kubenswrapper[4754]: I0105 20:06:11.728931 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d93c1b83-0763-484f-9983-e96921203738-kubelet-dir\") pod \"d93c1b83-0763-484f-9983-e96921203738\" (UID: \"d93c1b83-0763-484f-9983-e96921203738\") " Jan 05 20:06:11 crc kubenswrapper[4754]: I0105 20:06:11.729010 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/58b944aa-786c-43ce-b42e-fb89b4842d5f-kubelet-dir\") pod \"58b944aa-786c-43ce-b42e-fb89b4842d5f\" (UID: \"58b944aa-786c-43ce-b42e-fb89b4842d5f\") " Jan 05 20:06:11 crc kubenswrapper[4754]: I0105 20:06:11.729041 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/58b944aa-786c-43ce-b42e-fb89b4842d5f-kube-api-access\") pod \"58b944aa-786c-43ce-b42e-fb89b4842d5f\" (UID: \"58b944aa-786c-43ce-b42e-fb89b4842d5f\") " Jan 05 20:06:11 crc kubenswrapper[4754]: I0105 20:06:11.729067 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d93c1b83-0763-484f-9983-e96921203738-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d93c1b83-0763-484f-9983-e96921203738" (UID: "d93c1b83-0763-484f-9983-e96921203738"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 20:06:11 crc kubenswrapper[4754]: I0105 20:06:11.729151 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58b944aa-786c-43ce-b42e-fb89b4842d5f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "58b944aa-786c-43ce-b42e-fb89b4842d5f" (UID: "58b944aa-786c-43ce-b42e-fb89b4842d5f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 20:06:11 crc kubenswrapper[4754]: I0105 20:06:11.729308 4754 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/58b944aa-786c-43ce-b42e-fb89b4842d5f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 05 20:06:11 crc kubenswrapper[4754]: I0105 20:06:11.729323 4754 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d93c1b83-0763-484f-9983-e96921203738-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 05 20:06:11 crc kubenswrapper[4754]: I0105 20:06:11.734757 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d93c1b83-0763-484f-9983-e96921203738-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d93c1b83-0763-484f-9983-e96921203738" (UID: "d93c1b83-0763-484f-9983-e96921203738"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:06:11 crc kubenswrapper[4754]: I0105 20:06:11.736549 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58b944aa-786c-43ce-b42e-fb89b4842d5f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "58b944aa-786c-43ce-b42e-fb89b4842d5f" (UID: "58b944aa-786c-43ce-b42e-fb89b4842d5f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:06:11 crc kubenswrapper[4754]: I0105 20:06:11.830994 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d93c1b83-0763-484f-9983-e96921203738-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 05 20:06:11 crc kubenswrapper[4754]: I0105 20:06:11.831034 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/58b944aa-786c-43ce-b42e-fb89b4842d5f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 05 20:06:11 crc kubenswrapper[4754]: I0105 20:06:11.943206 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-q6l9h" Jan 05 20:06:12 crc kubenswrapper[4754]: I0105 20:06:12.230129 4754 patch_prober.go:28] interesting pod/router-default-5444994796-vnrxg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 05 20:06:12 crc kubenswrapper[4754]: [-]has-synced failed: reason withheld Jan 05 20:06:12 crc kubenswrapper[4754]: [+]process-running ok Jan 05 20:06:12 crc kubenswrapper[4754]: healthz check failed Jan 05 20:06:12 crc kubenswrapper[4754]: I0105 20:06:12.230237 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vnrxg" podUID="16642695-5006-4f7d-829c-becc9345dd6e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 20:06:12 crc kubenswrapper[4754]: I0105 20:06:12.340128 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"58b944aa-786c-43ce-b42e-fb89b4842d5f","Type":"ContainerDied","Data":"77dbe08dbaf35587a4be44b0f37d2f259464b8134e8d4cf1f73725706a458524"} Jan 05 20:06:12 crc kubenswrapper[4754]: I0105 
20:06:12.340173 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77dbe08dbaf35587a4be44b0f37d2f259464b8134e8d4cf1f73725706a458524" Jan 05 20:06:12 crc kubenswrapper[4754]: I0105 20:06:12.340229 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 05 20:06:12 crc kubenswrapper[4754]: I0105 20:06:12.345452 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 05 20:06:12 crc kubenswrapper[4754]: I0105 20:06:12.345461 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d93c1b83-0763-484f-9983-e96921203738","Type":"ContainerDied","Data":"294fca15ff116ec7997a76833729c03829f5885c03b5f7123fb28e1c0ba0f155"} Jan 05 20:06:12 crc kubenswrapper[4754]: I0105 20:06:12.345504 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="294fca15ff116ec7997a76833729c03829f5885c03b5f7123fb28e1c0ba0f155" Jan 05 20:06:13 crc kubenswrapper[4754]: I0105 20:06:13.233583 4754 patch_prober.go:28] interesting pod/router-default-5444994796-vnrxg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 05 20:06:13 crc kubenswrapper[4754]: [-]has-synced failed: reason withheld Jan 05 20:06:13 crc kubenswrapper[4754]: [+]process-running ok Jan 05 20:06:13 crc kubenswrapper[4754]: healthz check failed Jan 05 20:06:13 crc kubenswrapper[4754]: I0105 20:06:13.233685 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vnrxg" podUID="16642695-5006-4f7d-829c-becc9345dd6e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 20:06:13 crc kubenswrapper[4754]: I0105 
20:06:13.994867 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:06:14 crc kubenswrapper[4754]: I0105 20:06:14.082509 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-pxqjg" Jan 05 20:06:14 crc kubenswrapper[4754]: I0105 20:06:14.082552 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-pxqjg" Jan 05 20:06:14 crc kubenswrapper[4754]: I0105 20:06:14.089052 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-pxqjg" Jan 05 20:06:14 crc kubenswrapper[4754]: E0105 20:06:14.096499 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fc8fa453fc70ac74bc88f6fb0b6a31f99aed636135f65eb8b21b1121b78193e1" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 05 20:06:14 crc kubenswrapper[4754]: E0105 20:06:14.097877 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fc8fa453fc70ac74bc88f6fb0b6a31f99aed636135f65eb8b21b1121b78193e1" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 05 20:06:14 crc kubenswrapper[4754]: E0105 20:06:14.098850 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fc8fa453fc70ac74bc88f6fb0b6a31f99aed636135f65eb8b21b1121b78193e1" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 05 20:06:14 crc kubenswrapper[4754]: E0105 20:06:14.098885 4754 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc 
= command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-rqlrs" podUID="459e0439-ea70-4646-9cf7-2029f79e64b2" containerName="kube-multus-additional-cni-plugins" Jan 05 20:06:14 crc kubenswrapper[4754]: I0105 20:06:14.124015 4754 patch_prober.go:28] interesting pod/console-f9d7485db-4xgft container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Jan 05 20:06:14 crc kubenswrapper[4754]: I0105 20:06:14.124071 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-4xgft" podUID="f3ea7eb1-87d5-476b-bb30-2c94421afc41" containerName="console" probeResult="failure" output="Get \"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused" Jan 05 20:06:14 crc kubenswrapper[4754]: I0105 20:06:14.227999 4754 patch_prober.go:28] interesting pod/router-default-5444994796-vnrxg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 05 20:06:14 crc kubenswrapper[4754]: [-]has-synced failed: reason withheld Jan 05 20:06:14 crc kubenswrapper[4754]: [+]process-running ok Jan 05 20:06:14 crc kubenswrapper[4754]: healthz check failed Jan 05 20:06:14 crc kubenswrapper[4754]: I0105 20:06:14.228372 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vnrxg" podUID="16642695-5006-4f7d-829c-becc9345dd6e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 20:06:14 crc kubenswrapper[4754]: I0105 20:06:14.362679 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-pxqjg" Jan 05 20:06:15 
crc kubenswrapper[4754]: I0105 20:06:15.227245 4754 patch_prober.go:28] interesting pod/router-default-5444994796-vnrxg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 05 20:06:15 crc kubenswrapper[4754]: [-]has-synced failed: reason withheld Jan 05 20:06:15 crc kubenswrapper[4754]: [+]process-running ok Jan 05 20:06:15 crc kubenswrapper[4754]: healthz check failed Jan 05 20:06:15 crc kubenswrapper[4754]: I0105 20:06:15.227316 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vnrxg" podUID="16642695-5006-4f7d-829c-becc9345dd6e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 20:06:16 crc kubenswrapper[4754]: I0105 20:06:16.227697 4754 patch_prober.go:28] interesting pod/router-default-5444994796-vnrxg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 05 20:06:16 crc kubenswrapper[4754]: [-]has-synced failed: reason withheld Jan 05 20:06:16 crc kubenswrapper[4754]: [+]process-running ok Jan 05 20:06:16 crc kubenswrapper[4754]: healthz check failed Jan 05 20:06:16 crc kubenswrapper[4754]: I0105 20:06:16.228174 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vnrxg" podUID="16642695-5006-4f7d-829c-becc9345dd6e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 20:06:17 crc kubenswrapper[4754]: I0105 20:06:17.228450 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-vnrxg" Jan 05 20:06:17 crc kubenswrapper[4754]: I0105 20:06:17.231620 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ingress/router-default-5444994796-vnrxg" Jan 05 20:06:19 crc kubenswrapper[4754]: I0105 20:06:19.602988 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 05 20:06:23 crc kubenswrapper[4754]: I0105 20:06:23.609058 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=4.609040724 podStartE2EDuration="4.609040724s" podCreationTimestamp="2026-01-05 20:06:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:06:23.60812565 +0000 UTC m=+70.317309524" watchObservedRunningTime="2026-01-05 20:06:23.609040724 +0000 UTC m=+70.318224598" Jan 05 20:06:24 crc kubenswrapper[4754]: E0105 20:06:24.092797 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fc8fa453fc70ac74bc88f6fb0b6a31f99aed636135f65eb8b21b1121b78193e1" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 05 20:06:24 crc kubenswrapper[4754]: E0105 20:06:24.095033 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fc8fa453fc70ac74bc88f6fb0b6a31f99aed636135f65eb8b21b1121b78193e1" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 05 20:06:24 crc kubenswrapper[4754]: E0105 20:06:24.097099 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fc8fa453fc70ac74bc88f6fb0b6a31f99aed636135f65eb8b21b1121b78193e1" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 05 20:06:24 crc 
kubenswrapper[4754]: E0105 20:06:24.097264 4754 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-rqlrs" podUID="459e0439-ea70-4646-9cf7-2029f79e64b2" containerName="kube-multus-additional-cni-plugins" Jan 05 20:06:24 crc kubenswrapper[4754]: I0105 20:06:24.124840 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-4xgft" Jan 05 20:06:24 crc kubenswrapper[4754]: I0105 20:06:24.130111 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-4xgft" Jan 05 20:06:25 crc kubenswrapper[4754]: I0105 20:06:25.291179 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:06:32 crc kubenswrapper[4754]: I0105 20:06:32.478846 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-rqlrs_459e0439-ea70-4646-9cf7-2029f79e64b2/kube-multus-additional-cni-plugins/0.log" Jan 05 20:06:32 crc kubenswrapper[4754]: I0105 20:06:32.479150 4754 generic.go:334] "Generic (PLEG): container finished" podID="459e0439-ea70-4646-9cf7-2029f79e64b2" containerID="fc8fa453fc70ac74bc88f6fb0b6a31f99aed636135f65eb8b21b1121b78193e1" exitCode=137 Jan 05 20:06:32 crc kubenswrapper[4754]: I0105 20:06:32.479204 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-rqlrs" event={"ID":"459e0439-ea70-4646-9cf7-2029f79e64b2","Type":"ContainerDied","Data":"fc8fa453fc70ac74bc88f6fb0b6a31f99aed636135f65eb8b21b1121b78193e1"} Jan 05 20:06:34 crc kubenswrapper[4754]: I0105 20:06:34.064499 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mzj9s" Jan 05 20:06:34 crc kubenswrapper[4754]: E0105 20:06:34.090415 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fc8fa453fc70ac74bc88f6fb0b6a31f99aed636135f65eb8b21b1121b78193e1 is running failed: container process not found" containerID="fc8fa453fc70ac74bc88f6fb0b6a31f99aed636135f65eb8b21b1121b78193e1" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 05 20:06:34 crc kubenswrapper[4754]: E0105 20:06:34.091332 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fc8fa453fc70ac74bc88f6fb0b6a31f99aed636135f65eb8b21b1121b78193e1 is running failed: container process not found" containerID="fc8fa453fc70ac74bc88f6fb0b6a31f99aed636135f65eb8b21b1121b78193e1" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 05 20:06:34 crc kubenswrapper[4754]: E0105 20:06:34.092020 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fc8fa453fc70ac74bc88f6fb0b6a31f99aed636135f65eb8b21b1121b78193e1 is running failed: container process not found" containerID="fc8fa453fc70ac74bc88f6fb0b6a31f99aed636135f65eb8b21b1121b78193e1" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 05 20:06:34 crc kubenswrapper[4754]: E0105 20:06:34.092206 4754 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fc8fa453fc70ac74bc88f6fb0b6a31f99aed636135f65eb8b21b1121b78193e1 is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-rqlrs" podUID="459e0439-ea70-4646-9cf7-2029f79e64b2" containerName="kube-multus-additional-cni-plugins" Jan 05 20:06:40 crc kubenswrapper[4754]: I0105 20:06:40.863654 4754 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 05 20:06:40 crc kubenswrapper[4754]: E0105 20:06:40.864247 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d93c1b83-0763-484f-9983-e96921203738" containerName="pruner" Jan 05 20:06:40 crc kubenswrapper[4754]: I0105 20:06:40.864265 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="d93c1b83-0763-484f-9983-e96921203738" containerName="pruner" Jan 05 20:06:40 crc kubenswrapper[4754]: E0105 20:06:40.864284 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58b944aa-786c-43ce-b42e-fb89b4842d5f" containerName="pruner" Jan 05 20:06:40 crc kubenswrapper[4754]: I0105 20:06:40.864336 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="58b944aa-786c-43ce-b42e-fb89b4842d5f" containerName="pruner" Jan 05 20:06:40 crc kubenswrapper[4754]: I0105 20:06:40.864470 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="58b944aa-786c-43ce-b42e-fb89b4842d5f" containerName="pruner" Jan 05 20:06:40 crc kubenswrapper[4754]: I0105 20:06:40.864496 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="d93c1b83-0763-484f-9983-e96921203738" containerName="pruner" Jan 05 20:06:40 crc kubenswrapper[4754]: I0105 20:06:40.864961 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 05 20:06:40 crc kubenswrapper[4754]: I0105 20:06:40.867535 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 05 20:06:40 crc kubenswrapper[4754]: I0105 20:06:40.867576 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 05 20:06:40 crc kubenswrapper[4754]: I0105 20:06:40.883045 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 05 20:06:40 crc kubenswrapper[4754]: I0105 20:06:40.988236 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eb3dc040-e872-4c82-a70f-e74d6beb5027-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"eb3dc040-e872-4c82-a70f-e74d6beb5027\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 05 20:06:40 crc kubenswrapper[4754]: I0105 20:06:40.988329 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb3dc040-e872-4c82-a70f-e74d6beb5027-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"eb3dc040-e872-4c82-a70f-e74d6beb5027\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 05 20:06:41 crc kubenswrapper[4754]: I0105 20:06:41.089269 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eb3dc040-e872-4c82-a70f-e74d6beb5027-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"eb3dc040-e872-4c82-a70f-e74d6beb5027\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 05 20:06:41 crc kubenswrapper[4754]: I0105 20:06:41.089358 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/eb3dc040-e872-4c82-a70f-e74d6beb5027-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"eb3dc040-e872-4c82-a70f-e74d6beb5027\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 05 20:06:41 crc kubenswrapper[4754]: I0105 20:06:41.089394 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eb3dc040-e872-4c82-a70f-e74d6beb5027-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"eb3dc040-e872-4c82-a70f-e74d6beb5027\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 05 20:06:41 crc kubenswrapper[4754]: I0105 20:06:41.118605 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb3dc040-e872-4c82-a70f-e74d6beb5027-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"eb3dc040-e872-4c82-a70f-e74d6beb5027\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 05 20:06:41 crc kubenswrapper[4754]: I0105 20:06:41.194333 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 05 20:06:44 crc kubenswrapper[4754]: E0105 20:06:44.089185 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fc8fa453fc70ac74bc88f6fb0b6a31f99aed636135f65eb8b21b1121b78193e1 is running failed: container process not found" containerID="fc8fa453fc70ac74bc88f6fb0b6a31f99aed636135f65eb8b21b1121b78193e1" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 05 20:06:44 crc kubenswrapper[4754]: E0105 20:06:44.089970 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fc8fa453fc70ac74bc88f6fb0b6a31f99aed636135f65eb8b21b1121b78193e1 is running failed: container process not found" containerID="fc8fa453fc70ac74bc88f6fb0b6a31f99aed636135f65eb8b21b1121b78193e1" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 05 20:06:44 crc kubenswrapper[4754]: E0105 20:06:44.090645 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fc8fa453fc70ac74bc88f6fb0b6a31f99aed636135f65eb8b21b1121b78193e1 is running failed: container process not found" containerID="fc8fa453fc70ac74bc88f6fb0b6a31f99aed636135f65eb8b21b1121b78193e1" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 05 20:06:44 crc kubenswrapper[4754]: E0105 20:06:44.090723 4754 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fc8fa453fc70ac74bc88f6fb0b6a31f99aed636135f65eb8b21b1121b78193e1 is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-rqlrs" podUID="459e0439-ea70-4646-9cf7-2029f79e64b2" containerName="kube-multus-additional-cni-plugins" Jan 05 20:06:44 crc kubenswrapper[4754]: E0105 20:06:44.689891 4754 
log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 05 20:06:44 crc kubenswrapper[4754]: E0105 20:06:44.690379 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7pftt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-f6htm_openshift-marketplace(e16df7cb-3531-4b4e-ad51-275d4ff495d0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 05 20:06:44 crc kubenswrapper[4754]: E0105 20:06:44.691444 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-f6htm" podUID="e16df7cb-3531-4b4e-ad51-275d4ff495d0" Jan 05 20:06:44 crc kubenswrapper[4754]: I0105 20:06:44.840174 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 20:06:45 crc kubenswrapper[4754]: I0105 20:06:45.060040 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 05 20:06:45 crc kubenswrapper[4754]: I0105 20:06:45.060885 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 05 20:06:45 crc kubenswrapper[4754]: I0105 20:06:45.105152 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 05 20:06:45 crc kubenswrapper[4754]: I0105 20:06:45.143917 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/850cae84-31d6-43c8-a3b8-9ebb3e5b158a-var-lock\") pod \"installer-9-crc\" (UID: \"850cae84-31d6-43c8-a3b8-9ebb3e5b158a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 05 20:06:45 crc kubenswrapper[4754]: I0105 20:06:45.143997 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/850cae84-31d6-43c8-a3b8-9ebb3e5b158a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"850cae84-31d6-43c8-a3b8-9ebb3e5b158a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 05 20:06:45 crc kubenswrapper[4754]: I0105 20:06:45.144033 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/850cae84-31d6-43c8-a3b8-9ebb3e5b158a-kube-api-access\") pod \"installer-9-crc\" (UID: \"850cae84-31d6-43c8-a3b8-9ebb3e5b158a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 05 20:06:45 crc kubenswrapper[4754]: I0105 20:06:45.245234 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/850cae84-31d6-43c8-a3b8-9ebb3e5b158a-var-lock\") pod \"installer-9-crc\" (UID: \"850cae84-31d6-43c8-a3b8-9ebb3e5b158a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 05 20:06:45 crc kubenswrapper[4754]: I0105 20:06:45.245319 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/850cae84-31d6-43c8-a3b8-9ebb3e5b158a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"850cae84-31d6-43c8-a3b8-9ebb3e5b158a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 05 20:06:45 crc kubenswrapper[4754]: I0105 20:06:45.245345 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/850cae84-31d6-43c8-a3b8-9ebb3e5b158a-kube-api-access\") pod \"installer-9-crc\" (UID: \"850cae84-31d6-43c8-a3b8-9ebb3e5b158a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 05 20:06:45 crc kubenswrapper[4754]: I0105 20:06:45.245404 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/850cae84-31d6-43c8-a3b8-9ebb3e5b158a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"850cae84-31d6-43c8-a3b8-9ebb3e5b158a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 05 20:06:45 crc kubenswrapper[4754]: I0105 20:06:45.245365 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/850cae84-31d6-43c8-a3b8-9ebb3e5b158a-var-lock\") pod \"installer-9-crc\" (UID: \"850cae84-31d6-43c8-a3b8-9ebb3e5b158a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 05 20:06:45 crc kubenswrapper[4754]: I0105 20:06:45.269125 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/850cae84-31d6-43c8-a3b8-9ebb3e5b158a-kube-api-access\") pod \"installer-9-crc\" (UID: \"850cae84-31d6-43c8-a3b8-9ebb3e5b158a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 05 20:06:45 crc kubenswrapper[4754]: I0105 20:06:45.382416 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 05 20:06:46 crc kubenswrapper[4754]: E0105 20:06:46.492528 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 05 20:06:46 crc kubenswrapper[4754]: E0105 20:06:46.493898 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4vkp5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Containe
rResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-4zc9h_openshift-marketplace(9653b33c-3e18-4d7f-81ed-febff4a00a35): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 05 20:06:46 crc kubenswrapper[4754]: E0105 20:06:46.495689 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-4zc9h" podUID="9653b33c-3e18-4d7f-81ed-febff4a00a35" Jan 05 20:06:47 crc kubenswrapper[4754]: I0105 20:06:47.801069 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-rqlrs_459e0439-ea70-4646-9cf7-2029f79e64b2/kube-multus-additional-cni-plugins/0.log" Jan 05 20:06:47 crc kubenswrapper[4754]: I0105 20:06:47.801559 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-rqlrs" Jan 05 20:06:47 crc kubenswrapper[4754]: E0105 20:06:47.805452 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 05 20:06:47 crc kubenswrapper[4754]: E0105 20:06:47.805567 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-svd9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Co
ntainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-mq9f8_openshift-marketplace(e75e6f92-c2e3-4a65-be91-5921e2426aaf): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 05 20:06:47 crc kubenswrapper[4754]: E0105 20:06:47.806754 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-mq9f8" podUID="e75e6f92-c2e3-4a65-be91-5921e2426aaf" Jan 05 20:06:47 crc kubenswrapper[4754]: E0105 20:06:47.857275 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 05 20:06:47 crc kubenswrapper[4754]: E0105 20:06:47.857545 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-84vjp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-qh2hz_openshift-marketplace(20730210-a087-46e2-a311-ffa8a3bc370d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 05 20:06:47 crc kubenswrapper[4754]: E0105 20:06:47.859370 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-qh2hz" podUID="20730210-a087-46e2-a311-ffa8a3bc370d" Jan 05 20:06:47 crc 
kubenswrapper[4754]: E0105 20:06:47.869033 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 05 20:06:47 crc kubenswrapper[4754]: E0105 20:06:47.869175 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jwg4j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-xm8jv_openshift-marketplace(ba326117-1dcc-4468-80bb-a54b9cc83c01): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 05 20:06:47 crc kubenswrapper[4754]: E0105 20:06:47.870348 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-xm8jv" podUID="ba326117-1dcc-4468-80bb-a54b9cc83c01" Jan 05 20:06:47 crc kubenswrapper[4754]: I0105 20:06:47.881075 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x49d6\" (UniqueName: \"kubernetes.io/projected/459e0439-ea70-4646-9cf7-2029f79e64b2-kube-api-access-x49d6\") pod \"459e0439-ea70-4646-9cf7-2029f79e64b2\" (UID: \"459e0439-ea70-4646-9cf7-2029f79e64b2\") " Jan 05 20:06:47 crc kubenswrapper[4754]: I0105 20:06:47.881134 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/459e0439-ea70-4646-9cf7-2029f79e64b2-tuning-conf-dir\") pod \"459e0439-ea70-4646-9cf7-2029f79e64b2\" (UID: \"459e0439-ea70-4646-9cf7-2029f79e64b2\") " Jan 05 20:06:47 crc kubenswrapper[4754]: I0105 20:06:47.881164 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/459e0439-ea70-4646-9cf7-2029f79e64b2-ready\") pod \"459e0439-ea70-4646-9cf7-2029f79e64b2\" (UID: \"459e0439-ea70-4646-9cf7-2029f79e64b2\") " Jan 05 20:06:47 crc kubenswrapper[4754]: I0105 20:06:47.881534 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/459e0439-ea70-4646-9cf7-2029f79e64b2-cni-sysctl-allowlist\") pod 
\"459e0439-ea70-4646-9cf7-2029f79e64b2\" (UID: \"459e0439-ea70-4646-9cf7-2029f79e64b2\") " Jan 05 20:06:47 crc kubenswrapper[4754]: I0105 20:06:47.881525 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/459e0439-ea70-4646-9cf7-2029f79e64b2-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "459e0439-ea70-4646-9cf7-2029f79e64b2" (UID: "459e0439-ea70-4646-9cf7-2029f79e64b2"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 20:06:47 crc kubenswrapper[4754]: I0105 20:06:47.881964 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/459e0439-ea70-4646-9cf7-2029f79e64b2-ready" (OuterVolumeSpecName: "ready") pod "459e0439-ea70-4646-9cf7-2029f79e64b2" (UID: "459e0439-ea70-4646-9cf7-2029f79e64b2"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:06:47 crc kubenswrapper[4754]: I0105 20:06:47.882348 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/459e0439-ea70-4646-9cf7-2029f79e64b2-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "459e0439-ea70-4646-9cf7-2029f79e64b2" (UID: "459e0439-ea70-4646-9cf7-2029f79e64b2"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:06:47 crc kubenswrapper[4754]: I0105 20:06:47.886756 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/459e0439-ea70-4646-9cf7-2029f79e64b2-kube-api-access-x49d6" (OuterVolumeSpecName: "kube-api-access-x49d6") pod "459e0439-ea70-4646-9cf7-2029f79e64b2" (UID: "459e0439-ea70-4646-9cf7-2029f79e64b2"). InnerVolumeSpecName "kube-api-access-x49d6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:06:47 crc kubenswrapper[4754]: I0105 20:06:47.982891 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x49d6\" (UniqueName: \"kubernetes.io/projected/459e0439-ea70-4646-9cf7-2029f79e64b2-kube-api-access-x49d6\") on node \"crc\" DevicePath \"\"" Jan 05 20:06:47 crc kubenswrapper[4754]: I0105 20:06:47.982923 4754 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/459e0439-ea70-4646-9cf7-2029f79e64b2-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Jan 05 20:06:47 crc kubenswrapper[4754]: I0105 20:06:47.982934 4754 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/459e0439-ea70-4646-9cf7-2029f79e64b2-ready\") on node \"crc\" DevicePath \"\"" Jan 05 20:06:47 crc kubenswrapper[4754]: I0105 20:06:47.982943 4754 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/459e0439-ea70-4646-9cf7-2029f79e64b2-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 05 20:06:48 crc kubenswrapper[4754]: I0105 20:06:48.571126 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-rqlrs_459e0439-ea70-4646-9cf7-2029f79e64b2/kube-multus-additional-cni-plugins/0.log" Jan 05 20:06:48 crc kubenswrapper[4754]: I0105 20:06:48.571189 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-rqlrs" event={"ID":"459e0439-ea70-4646-9cf7-2029f79e64b2","Type":"ContainerDied","Data":"c6861fa91756fce93404ff803eb215eceb4bd1341d3ebcde0164bfad3f17e43f"} Jan 05 20:06:48 crc kubenswrapper[4754]: I0105 20:06:48.571252 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-rqlrs" Jan 05 20:06:48 crc kubenswrapper[4754]: I0105 20:06:48.571257 4754 scope.go:117] "RemoveContainer" containerID="fc8fa453fc70ac74bc88f6fb0b6a31f99aed636135f65eb8b21b1121b78193e1" Jan 05 20:06:48 crc kubenswrapper[4754]: I0105 20:06:48.600713 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-rqlrs"] Jan 05 20:06:48 crc kubenswrapper[4754]: I0105 20:06:48.602992 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-rqlrs"] Jan 05 20:06:49 crc kubenswrapper[4754]: I0105 20:06:49.594008 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="459e0439-ea70-4646-9cf7-2029f79e64b2" path="/var/lib/kubelet/pods/459e0439-ea70-4646-9cf7-2029f79e64b2/volumes" Jan 05 20:06:50 crc kubenswrapper[4754]: I0105 20:06:50.785089 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-nzhrk"] Jan 05 20:06:51 crc kubenswrapper[4754]: E0105 20:06:51.108281 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 05 20:06:51 crc kubenswrapper[4754]: E0105 20:06:51.108481 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wcdmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-pg5fc_openshift-marketplace(08caaa2f-775e-45c6-b097-b0550f593ff3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 05 20:06:51 crc kubenswrapper[4754]: E0105 20:06:51.111257 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-pg5fc" podUID="08caaa2f-775e-45c6-b097-b0550f593ff3" Jan 05 20:06:51 crc 
kubenswrapper[4754]: E0105 20:06:51.158265 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 05 20:06:51 crc kubenswrapper[4754]: E0105 20:06:51.158478 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c6ncj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-z52n8_openshift-marketplace(31687145-8349-44b0-9e77-de73e4738916): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 05 20:06:51 crc kubenswrapper[4754]: E0105 20:06:51.159715 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-z52n8" podUID="31687145-8349-44b0-9e77-de73e4738916" Jan 05 20:06:51 crc kubenswrapper[4754]: I0105 20:06:51.599075 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6s8pm" event={"ID":"38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09","Type":"ContainerStarted","Data":"01f83357d125017b7b727d59b9b6ed8517d67c2aff65c49a8ac1ca9f73df15a3"} Jan 05 20:06:51 crc kubenswrapper[4754]: I0105 20:06:51.607165 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 05 20:06:51 crc kubenswrapper[4754]: W0105 20:06:51.626026 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod850cae84_31d6_43c8_a3b8_9ebb3e5b158a.slice/crio-caa1cd12e7437c8cc96d5512d1cc4df22c276e15e09f891e8a0b2994675d065e WatchSource:0}: Error finding container caa1cd12e7437c8cc96d5512d1cc4df22c276e15e09f891e8a0b2994675d065e: Status 404 returned error can't find the container with id caa1cd12e7437c8cc96d5512d1cc4df22c276e15e09f891e8a0b2994675d065e Jan 05 20:06:51 crc kubenswrapper[4754]: I0105 20:06:51.659367 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 05 20:06:51 crc kubenswrapper[4754]: W0105 20:06:51.671005 4754 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-podeb3dc040_e872_4c82_a70f_e74d6beb5027.slice/crio-8a431e51c30357a1c5c617c96f500bbead57c27c10fef096473f956583c278cc WatchSource:0}: Error finding container 8a431e51c30357a1c5c617c96f500bbead57c27c10fef096473f956583c278cc: Status 404 returned error can't find the container with id 8a431e51c30357a1c5c617c96f500bbead57c27c10fef096473f956583c278cc Jan 05 20:06:52 crc kubenswrapper[4754]: I0105 20:06:52.607237 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"850cae84-31d6-43c8-a3b8-9ebb3e5b158a","Type":"ContainerStarted","Data":"9a5fd205655231e0eeb0541b0c198f81e3637fcf4fab4f1347d21168e97d1bd1"} Jan 05 20:06:52 crc kubenswrapper[4754]: I0105 20:06:52.607544 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"850cae84-31d6-43c8-a3b8-9ebb3e5b158a","Type":"ContainerStarted","Data":"caa1cd12e7437c8cc96d5512d1cc4df22c276e15e09f891e8a0b2994675d065e"} Jan 05 20:06:52 crc kubenswrapper[4754]: I0105 20:06:52.608855 4754 generic.go:334] "Generic (PLEG): container finished" podID="38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09" containerID="01f83357d125017b7b727d59b9b6ed8517d67c2aff65c49a8ac1ca9f73df15a3" exitCode=0 Jan 05 20:06:52 crc kubenswrapper[4754]: I0105 20:06:52.608909 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6s8pm" event={"ID":"38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09","Type":"ContainerDied","Data":"01f83357d125017b7b727d59b9b6ed8517d67c2aff65c49a8ac1ca9f73df15a3"} Jan 05 20:06:52 crc kubenswrapper[4754]: I0105 20:06:52.610697 4754 generic.go:334] "Generic (PLEG): container finished" podID="eb3dc040-e872-4c82-a70f-e74d6beb5027" containerID="f80e9e90f22f5ddafa084f57518e8ddefb324c97215fec9ab474468ee8846146" exitCode=0 Jan 05 20:06:52 crc kubenswrapper[4754]: I0105 20:06:52.610719 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"eb3dc040-e872-4c82-a70f-e74d6beb5027","Type":"ContainerDied","Data":"f80e9e90f22f5ddafa084f57518e8ddefb324c97215fec9ab474468ee8846146"} Jan 05 20:06:52 crc kubenswrapper[4754]: I0105 20:06:52.610733 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"eb3dc040-e872-4c82-a70f-e74d6beb5027","Type":"ContainerStarted","Data":"8a431e51c30357a1c5c617c96f500bbead57c27c10fef096473f956583c278cc"} Jan 05 20:06:52 crc kubenswrapper[4754]: I0105 20:06:52.623919 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=7.623894187 podStartE2EDuration="7.623894187s" podCreationTimestamp="2026-01-05 20:06:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:06:52.619956435 +0000 UTC m=+99.329140309" watchObservedRunningTime="2026-01-05 20:06:52.623894187 +0000 UTC m=+99.333078071" Jan 05 20:06:53 crc kubenswrapper[4754]: I0105 20:06:53.618593 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6s8pm" event={"ID":"38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09","Type":"ContainerStarted","Data":"9ade5c5ba7f427bfa75780d92bfd378db10a5ba636d45af011e4077cf902bd37"} Jan 05 20:06:53 crc kubenswrapper[4754]: I0105 20:06:53.648185 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6s8pm" podStartSLOduration=6.847633263 podStartE2EDuration="49.648168678s" podCreationTimestamp="2026-01-05 20:06:04 +0000 UTC" firstStartedPulling="2026-01-05 20:06:10.322845274 +0000 UTC m=+57.032029148" lastFinishedPulling="2026-01-05 20:06:53.123380659 +0000 UTC m=+99.832564563" observedRunningTime="2026-01-05 20:06:53.647035458 +0000 UTC m=+100.356219342" watchObservedRunningTime="2026-01-05 
20:06:53.648168678 +0000 UTC m=+100.357352552" Jan 05 20:06:54 crc kubenswrapper[4754]: I0105 20:06:54.074769 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 05 20:06:54 crc kubenswrapper[4754]: I0105 20:06:54.177014 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eb3dc040-e872-4c82-a70f-e74d6beb5027-kubelet-dir\") pod \"eb3dc040-e872-4c82-a70f-e74d6beb5027\" (UID: \"eb3dc040-e872-4c82-a70f-e74d6beb5027\") " Jan 05 20:06:54 crc kubenswrapper[4754]: I0105 20:06:54.177088 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb3dc040-e872-4c82-a70f-e74d6beb5027-kube-api-access\") pod \"eb3dc040-e872-4c82-a70f-e74d6beb5027\" (UID: \"eb3dc040-e872-4c82-a70f-e74d6beb5027\") " Jan 05 20:06:54 crc kubenswrapper[4754]: I0105 20:06:54.177153 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb3dc040-e872-4c82-a70f-e74d6beb5027-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "eb3dc040-e872-4c82-a70f-e74d6beb5027" (UID: "eb3dc040-e872-4c82-a70f-e74d6beb5027"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 20:06:54 crc kubenswrapper[4754]: I0105 20:06:54.177348 4754 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eb3dc040-e872-4c82-a70f-e74d6beb5027-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 05 20:06:54 crc kubenswrapper[4754]: I0105 20:06:54.182655 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb3dc040-e872-4c82-a70f-e74d6beb5027-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "eb3dc040-e872-4c82-a70f-e74d6beb5027" (UID: "eb3dc040-e872-4c82-a70f-e74d6beb5027"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:06:54 crc kubenswrapper[4754]: I0105 20:06:54.278718 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb3dc040-e872-4c82-a70f-e74d6beb5027-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 05 20:06:54 crc kubenswrapper[4754]: I0105 20:06:54.625454 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"eb3dc040-e872-4c82-a70f-e74d6beb5027","Type":"ContainerDied","Data":"8a431e51c30357a1c5c617c96f500bbead57c27c10fef096473f956583c278cc"} Jan 05 20:06:54 crc kubenswrapper[4754]: I0105 20:06:54.625530 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 05 20:06:54 crc kubenswrapper[4754]: I0105 20:06:54.625546 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a431e51c30357a1c5c617c96f500bbead57c27c10fef096473f956583c278cc" Jan 05 20:06:54 crc kubenswrapper[4754]: I0105 20:06:54.760279 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6s8pm" Jan 05 20:06:54 crc kubenswrapper[4754]: I0105 20:06:54.760343 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6s8pm" Jan 05 20:06:55 crc kubenswrapper[4754]: I0105 20:06:55.814934 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6s8pm" podUID="38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09" containerName="registry-server" probeResult="failure" output=< Jan 05 20:06:55 crc kubenswrapper[4754]: timeout: failed to connect service ":50051" within 1s Jan 05 20:06:55 crc kubenswrapper[4754]: > Jan 05 20:07:02 crc kubenswrapper[4754]: E0105 20:07:02.118981 4754 kubelet.go:2526] "Housekeeping took 
longer than expected" err="housekeeping took too long" expected="1s" actual="2.531s" Jan 05 20:07:04 crc kubenswrapper[4754]: I0105 20:07:04.812966 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6s8pm" Jan 05 20:07:04 crc kubenswrapper[4754]: I0105 20:07:04.847872 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6s8pm" Jan 05 20:07:05 crc kubenswrapper[4754]: I0105 20:07:05.454057 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6s8pm"] Jan 05 20:07:06 crc kubenswrapper[4754]: I0105 20:07:06.139143 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6s8pm" podUID="38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09" containerName="registry-server" containerID="cri-o://9ade5c5ba7f427bfa75780d92bfd378db10a5ba636d45af011e4077cf902bd37" gracePeriod=2 Jan 05 20:07:09 crc kubenswrapper[4754]: I0105 20:07:09.167918 4754 generic.go:334] "Generic (PLEG): container finished" podID="38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09" containerID="9ade5c5ba7f427bfa75780d92bfd378db10a5ba636d45af011e4077cf902bd37" exitCode=0 Jan 05 20:07:09 crc kubenswrapper[4754]: I0105 20:07:09.168022 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6s8pm" event={"ID":"38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09","Type":"ContainerDied","Data":"9ade5c5ba7f427bfa75780d92bfd378db10a5ba636d45af011e4077cf902bd37"} Jan 05 20:07:14 crc kubenswrapper[4754]: E0105 20:07:14.760517 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9ade5c5ba7f427bfa75780d92bfd378db10a5ba636d45af011e4077cf902bd37 is running failed: container process not found" containerID="9ade5c5ba7f427bfa75780d92bfd378db10a5ba636d45af011e4077cf902bd37" 
cmd=["grpc_health_probe","-addr=:50051"] Jan 05 20:07:14 crc kubenswrapper[4754]: E0105 20:07:14.761206 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9ade5c5ba7f427bfa75780d92bfd378db10a5ba636d45af011e4077cf902bd37 is running failed: container process not found" containerID="9ade5c5ba7f427bfa75780d92bfd378db10a5ba636d45af011e4077cf902bd37" cmd=["grpc_health_probe","-addr=:50051"] Jan 05 20:07:14 crc kubenswrapper[4754]: E0105 20:07:14.761699 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9ade5c5ba7f427bfa75780d92bfd378db10a5ba636d45af011e4077cf902bd37 is running failed: container process not found" containerID="9ade5c5ba7f427bfa75780d92bfd378db10a5ba636d45af011e4077cf902bd37" cmd=["grpc_health_probe","-addr=:50051"] Jan 05 20:07:14 crc kubenswrapper[4754]: E0105 20:07:14.761745 4754 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9ade5c5ba7f427bfa75780d92bfd378db10a5ba636d45af011e4077cf902bd37 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-6s8pm" podUID="38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09" containerName="registry-server" Jan 05 20:07:15 crc kubenswrapper[4754]: I0105 20:07:15.820595 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk" podUID="701a029a-d767-4681-8bf3-dffdc73e93f5" containerName="oauth-openshift" containerID="cri-o://3b91eced025facbb9b14800e2ec1b7e25568befb589d774f9ed17d199371e6fc" gracePeriod=15 Jan 05 20:07:17 crc kubenswrapper[4754]: I0105 20:07:17.226869 4754 generic.go:334] "Generic (PLEG): container finished" podID="701a029a-d767-4681-8bf3-dffdc73e93f5" 
containerID="3b91eced025facbb9b14800e2ec1b7e25568befb589d774f9ed17d199371e6fc" exitCode=0 Jan 05 20:07:17 crc kubenswrapper[4754]: I0105 20:07:17.226933 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk" event={"ID":"701a029a-d767-4681-8bf3-dffdc73e93f5","Type":"ContainerDied","Data":"3b91eced025facbb9b14800e2ec1b7e25568befb589d774f9ed17d199371e6fc"} Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.057398 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6s8pm" Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.085262 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09-utilities\") pod \"38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09\" (UID: \"38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09\") " Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.085384 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5txst\" (UniqueName: \"kubernetes.io/projected/38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09-kube-api-access-5txst\") pod \"38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09\" (UID: \"38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09\") " Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.085489 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09-catalog-content\") pod \"38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09\" (UID: \"38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09\") " Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.087143 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09-utilities" (OuterVolumeSpecName: "utilities") pod "38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09" (UID: 
"38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.100747 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09-kube-api-access-5txst" (OuterVolumeSpecName: "kube-api-access-5txst") pod "38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09" (UID: "38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09"). InnerVolumeSpecName "kube-api-access-5txst". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.187510 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.187547 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5txst\" (UniqueName: \"kubernetes.io/projected/38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09-kube-api-access-5txst\") on node \"crc\" DevicePath \"\"" Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.283711 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6s8pm" event={"ID":"38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09","Type":"ContainerDied","Data":"ed23ca3814e45a3498b6dae6814f1d703d8f8f0379b8def9ae83ba2d46975dc7"} Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.284233 4754 scope.go:117] "RemoveContainer" containerID="9ade5c5ba7f427bfa75780d92bfd378db10a5ba636d45af011e4077cf902bd37" Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.284435 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09" (UID: "38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.283825 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6s8pm" Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.288626 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.319693 4754 scope.go:117] "RemoveContainer" containerID="01f83357d125017b7b727d59b9b6ed8517d67c2aff65c49a8ac1ca9f73df15a3" Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.374549 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk" Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.379860 4754 scope.go:117] "RemoveContainer" containerID="d93c50b5f8e3d88a0b7bd93649134547a1ea2aa594aee3e1dfbd14f08b9edb99" Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.389150 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-user-idp-0-file-data\") pod \"701a029a-d767-4681-8bf3-dffdc73e93f5\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.389190 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzjzh\" (UniqueName: \"kubernetes.io/projected/701a029a-d767-4681-8bf3-dffdc73e93f5-kube-api-access-jzjzh\") pod \"701a029a-d767-4681-8bf3-dffdc73e93f5\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.389219 4754 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/701a029a-d767-4681-8bf3-dffdc73e93f5-audit-dir\") pod \"701a029a-d767-4681-8bf3-dffdc73e93f5\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.389245 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-system-ocp-branding-template\") pod \"701a029a-d767-4681-8bf3-dffdc73e93f5\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.389270 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-system-router-certs\") pod \"701a029a-d767-4681-8bf3-dffdc73e93f5\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.389318 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-system-service-ca\") pod \"701a029a-d767-4681-8bf3-dffdc73e93f5\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.389350 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-user-template-provider-selection\") pod \"701a029a-d767-4681-8bf3-dffdc73e93f5\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.389381 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" 
(UniqueName: \"kubernetes.io/configmap/701a029a-d767-4681-8bf3-dffdc73e93f5-audit-policies\") pod \"701a029a-d767-4681-8bf3-dffdc73e93f5\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.389407 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-user-template-error\") pod \"701a029a-d767-4681-8bf3-dffdc73e93f5\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.389432 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-system-trusted-ca-bundle\") pod \"701a029a-d767-4681-8bf3-dffdc73e93f5\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.389453 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-system-session\") pod \"701a029a-d767-4681-8bf3-dffdc73e93f5\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.389484 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-system-serving-cert\") pod \"701a029a-d767-4681-8bf3-dffdc73e93f5\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.389514 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-user-template-login\") pod \"701a029a-d767-4681-8bf3-dffdc73e93f5\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.389582 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-system-cliconfig\") pod \"701a029a-d767-4681-8bf3-dffdc73e93f5\" (UID: \"701a029a-d767-4681-8bf3-dffdc73e93f5\") " Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.390716 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "701a029a-d767-4681-8bf3-dffdc73e93f5" (UID: "701a029a-d767-4681-8bf3-dffdc73e93f5"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.397797 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "701a029a-d767-4681-8bf3-dffdc73e93f5" (UID: "701a029a-d767-4681-8bf3-dffdc73e93f5"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.398376 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/701a029a-d767-4681-8bf3-dffdc73e93f5-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "701a029a-d767-4681-8bf3-dffdc73e93f5" (UID: "701a029a-d767-4681-8bf3-dffdc73e93f5"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.406247 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/701a029a-d767-4681-8bf3-dffdc73e93f5-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "701a029a-d767-4681-8bf3-dffdc73e93f5" (UID: "701a029a-d767-4681-8bf3-dffdc73e93f5"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.406828 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "701a029a-d767-4681-8bf3-dffdc73e93f5" (UID: "701a029a-d767-4681-8bf3-dffdc73e93f5"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.412803 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "701a029a-d767-4681-8bf3-dffdc73e93f5" (UID: "701a029a-d767-4681-8bf3-dffdc73e93f5"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.413040 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "701a029a-d767-4681-8bf3-dffdc73e93f5" (UID: "701a029a-d767-4681-8bf3-dffdc73e93f5"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.413226 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "701a029a-d767-4681-8bf3-dffdc73e93f5" (UID: "701a029a-d767-4681-8bf3-dffdc73e93f5"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.413402 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "701a029a-d767-4681-8bf3-dffdc73e93f5" (UID: "701a029a-d767-4681-8bf3-dffdc73e93f5"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.413509 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "701a029a-d767-4681-8bf3-dffdc73e93f5" (UID: "701a029a-d767-4681-8bf3-dffdc73e93f5"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.414388 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/701a029a-d767-4681-8bf3-dffdc73e93f5-kube-api-access-jzjzh" (OuterVolumeSpecName: "kube-api-access-jzjzh") pod "701a029a-d767-4681-8bf3-dffdc73e93f5" (UID: "701a029a-d767-4681-8bf3-dffdc73e93f5"). InnerVolumeSpecName "kube-api-access-jzjzh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.431085 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "701a029a-d767-4681-8bf3-dffdc73e93f5" (UID: "701a029a-d767-4681-8bf3-dffdc73e93f5"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.434877 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "701a029a-d767-4681-8bf3-dffdc73e93f5" (UID: "701a029a-d767-4681-8bf3-dffdc73e93f5"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.434869 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "701a029a-d767-4681-8bf3-dffdc73e93f5" (UID: "701a029a-d767-4681-8bf3-dffdc73e93f5"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.435774 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6s8pm"] Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.438378 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6s8pm"] Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.501253 4754 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.501314 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzjzh\" (UniqueName: \"kubernetes.io/projected/701a029a-d767-4681-8bf3-dffdc73e93f5-kube-api-access-jzjzh\") on node \"crc\" DevicePath \"\"" Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.501332 4754 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/701a029a-d767-4681-8bf3-dffdc73e93f5-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.501348 4754 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.501371 4754 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.501387 4754 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.501401 4754 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.501415 4754 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/701a029a-d767-4681-8bf3-dffdc73e93f5-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.501426 4754 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.501439 4754 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.501451 4754 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.501465 4754 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 20:07:23 crc 
kubenswrapper[4754]: I0105 20:07:23.501476 4754 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.501487 4754 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/701a029a-d767-4681-8bf3-dffdc73e93f5-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 05 20:07:23 crc kubenswrapper[4754]: I0105 20:07:23.596196 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09" path="/var/lib/kubelet/pods/38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09/volumes" Jan 05 20:07:24 crc kubenswrapper[4754]: I0105 20:07:24.291594 4754 generic.go:334] "Generic (PLEG): container finished" podID="9653b33c-3e18-4d7f-81ed-febff4a00a35" containerID="dddcac1861aa8081d01dae8f264d8effd866804a0c12d14f499918ec510f8c8c" exitCode=0 Jan 05 20:07:24 crc kubenswrapper[4754]: I0105 20:07:24.291712 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4zc9h" event={"ID":"9653b33c-3e18-4d7f-81ed-febff4a00a35","Type":"ContainerDied","Data":"dddcac1861aa8081d01dae8f264d8effd866804a0c12d14f499918ec510f8c8c"} Jan 05 20:07:24 crc kubenswrapper[4754]: I0105 20:07:24.297657 4754 generic.go:334] "Generic (PLEG): container finished" podID="31687145-8349-44b0-9e77-de73e4738916" containerID="f09d8a916b34d02dc7be3daf692d9f61cadb83640dc604966cb5925f81129c4d" exitCode=0 Jan 05 20:07:24 crc kubenswrapper[4754]: I0105 20:07:24.297753 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z52n8" event={"ID":"31687145-8349-44b0-9e77-de73e4738916","Type":"ContainerDied","Data":"f09d8a916b34d02dc7be3daf692d9f61cadb83640dc604966cb5925f81129c4d"} Jan 05 20:07:24 crc 
kubenswrapper[4754]: I0105 20:07:24.301767 4754 generic.go:334] "Generic (PLEG): container finished" podID="20730210-a087-46e2-a311-ffa8a3bc370d" containerID="b3a9e2dfbc01cc47a4c2ff653d894c30cde3002288b2590db8485ce168383ccc" exitCode=0 Jan 05 20:07:24 crc kubenswrapper[4754]: I0105 20:07:24.302123 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qh2hz" event={"ID":"20730210-a087-46e2-a311-ffa8a3bc370d","Type":"ContainerDied","Data":"b3a9e2dfbc01cc47a4c2ff653d894c30cde3002288b2590db8485ce168383ccc"} Jan 05 20:07:24 crc kubenswrapper[4754]: I0105 20:07:24.310335 4754 generic.go:334] "Generic (PLEG): container finished" podID="e75e6f92-c2e3-4a65-be91-5921e2426aaf" containerID="8ecab5308ffec3a62da57a0e91cb0b6e0f8fcff6b8a553782880adafc877167f" exitCode=0 Jan 05 20:07:24 crc kubenswrapper[4754]: I0105 20:07:24.310419 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mq9f8" event={"ID":"e75e6f92-c2e3-4a65-be91-5921e2426aaf","Type":"ContainerDied","Data":"8ecab5308ffec3a62da57a0e91cb0b6e0f8fcff6b8a553782880adafc877167f"} Jan 05 20:07:24 crc kubenswrapper[4754]: I0105 20:07:24.318699 4754 generic.go:334] "Generic (PLEG): container finished" podID="ba326117-1dcc-4468-80bb-a54b9cc83c01" containerID="68754d24d7aa8791d398b0a33e3fafa095e1bf742e62dae8f66b427ba4b6955c" exitCode=0 Jan 05 20:07:24 crc kubenswrapper[4754]: I0105 20:07:24.318803 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xm8jv" event={"ID":"ba326117-1dcc-4468-80bb-a54b9cc83c01","Type":"ContainerDied","Data":"68754d24d7aa8791d398b0a33e3fafa095e1bf742e62dae8f66b427ba4b6955c"} Jan 05 20:07:24 crc kubenswrapper[4754]: I0105 20:07:24.325002 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk" 
event={"ID":"701a029a-d767-4681-8bf3-dffdc73e93f5","Type":"ContainerDied","Data":"065fb1a70dd2387066189414d60c36579d974d98ca4a9682e8c82decea3ee09e"} Jan 05 20:07:24 crc kubenswrapper[4754]: I0105 20:07:24.325054 4754 scope.go:117] "RemoveContainer" containerID="3b91eced025facbb9b14800e2ec1b7e25568befb589d774f9ed17d199371e6fc" Jan 05 20:07:24 crc kubenswrapper[4754]: I0105 20:07:24.325249 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-nzhrk" Jan 05 20:07:24 crc kubenswrapper[4754]: I0105 20:07:24.332888 4754 generic.go:334] "Generic (PLEG): container finished" podID="08caaa2f-775e-45c6-b097-b0550f593ff3" containerID="5b9c7d81022cdf37b2a976db7f2c5e256e56aab0bc98c18a4d27b299d9c9efb1" exitCode=0 Jan 05 20:07:24 crc kubenswrapper[4754]: I0105 20:07:24.332974 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pg5fc" event={"ID":"08caaa2f-775e-45c6-b097-b0550f593ff3","Type":"ContainerDied","Data":"5b9c7d81022cdf37b2a976db7f2c5e256e56aab0bc98c18a4d27b299d9c9efb1"} Jan 05 20:07:24 crc kubenswrapper[4754]: I0105 20:07:24.341915 4754 generic.go:334] "Generic (PLEG): container finished" podID="e16df7cb-3531-4b4e-ad51-275d4ff495d0" containerID="f68928d3218dbec09dd7609a60ac3a075fbb77c523cb3cce4a8b4a0194e0c6f0" exitCode=0 Jan 05 20:07:24 crc kubenswrapper[4754]: I0105 20:07:24.342004 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f6htm" event={"ID":"e16df7cb-3531-4b4e-ad51-275d4ff495d0","Type":"ContainerDied","Data":"f68928d3218dbec09dd7609a60ac3a075fbb77c523cb3cce4a8b4a0194e0c6f0"} Jan 05 20:07:24 crc kubenswrapper[4754]: I0105 20:07:24.438305 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-nzhrk"] Jan 05 20:07:24 crc kubenswrapper[4754]: I0105 20:07:24.446630 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-authentication/oauth-openshift-558db77b4-nzhrk"] Jan 05 20:07:25 crc kubenswrapper[4754]: I0105 20:07:25.353324 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mq9f8" event={"ID":"e75e6f92-c2e3-4a65-be91-5921e2426aaf","Type":"ContainerStarted","Data":"8df57d2449b1a550a8b332137789310e5a8f018c0c798a6388984d04a7a494f4"} Jan 05 20:07:25 crc kubenswrapper[4754]: I0105 20:07:25.355090 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pg5fc" event={"ID":"08caaa2f-775e-45c6-b097-b0550f593ff3","Type":"ContainerStarted","Data":"0c4236a81108e383f7f72c5616ac1e16180aab7424751ee11a95eb3c30eb5ca0"} Jan 05 20:07:25 crc kubenswrapper[4754]: I0105 20:07:25.357492 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xm8jv" event={"ID":"ba326117-1dcc-4468-80bb-a54b9cc83c01","Type":"ContainerStarted","Data":"c529683b78117396fd1b24fe5d7425331d40f28b4cc8473a906634c62eabacaf"} Jan 05 20:07:25 crc kubenswrapper[4754]: I0105 20:07:25.359304 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f6htm" event={"ID":"e16df7cb-3531-4b4e-ad51-275d4ff495d0","Type":"ContainerStarted","Data":"2384d715e69bbab1550ea8532e5a4fd4bfe54d6b0ba8bf7763081f9ac61dd186"} Jan 05 20:07:25 crc kubenswrapper[4754]: I0105 20:07:25.362358 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4zc9h" event={"ID":"9653b33c-3e18-4d7f-81ed-febff4a00a35","Type":"ContainerStarted","Data":"cf581f438214eb5f36800db1bd5ea29d362b643bfdacc5977eaf2617d7d4bd38"} Jan 05 20:07:25 crc kubenswrapper[4754]: I0105 20:07:25.364312 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z52n8" 
event={"ID":"31687145-8349-44b0-9e77-de73e4738916","Type":"ContainerStarted","Data":"a44121cf6eec140f3a35fb84703654954c411693aac98013b1e410fc9e99b716"} Jan 05 20:07:25 crc kubenswrapper[4754]: I0105 20:07:25.366592 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qh2hz" event={"ID":"20730210-a087-46e2-a311-ffa8a3bc370d","Type":"ContainerStarted","Data":"39728b09f8fe465478f6f6b7144e90ec0207f904bf7a659c280b7dab0b4d1c7b"} Jan 05 20:07:25 crc kubenswrapper[4754]: I0105 20:07:25.378405 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mq9f8" podStartSLOduration=7.934922704 podStartE2EDuration="1m22.378393093s" podCreationTimestamp="2026-01-05 20:06:03 +0000 UTC" firstStartedPulling="2026-01-05 20:06:10.323016429 +0000 UTC m=+57.032200303" lastFinishedPulling="2026-01-05 20:07:24.766486808 +0000 UTC m=+131.475670692" observedRunningTime="2026-01-05 20:07:25.37632736 +0000 UTC m=+132.085511234" watchObservedRunningTime="2026-01-05 20:07:25.378393093 +0000 UTC m=+132.087576957" Jan 05 20:07:25 crc kubenswrapper[4754]: I0105 20:07:25.392462 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xm8jv" podStartSLOduration=8.893892767 podStartE2EDuration="1m24.392447889s" podCreationTimestamp="2026-01-05 20:06:01 +0000 UTC" firstStartedPulling="2026-01-05 20:06:09.306394017 +0000 UTC m=+56.015577901" lastFinishedPulling="2026-01-05 20:07:24.804949129 +0000 UTC m=+131.514133023" observedRunningTime="2026-01-05 20:07:25.389686777 +0000 UTC m=+132.098870651" watchObservedRunningTime="2026-01-05 20:07:25.392447889 +0000 UTC m=+132.101631763" Jan 05 20:07:25 crc kubenswrapper[4754]: I0105 20:07:25.408951 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-f6htm" podStartSLOduration=6.878260171 podStartE2EDuration="1m22.408941548s" 
podCreationTimestamp="2026-01-05 20:06:03 +0000 UTC" firstStartedPulling="2026-01-05 20:06:09.306452499 +0000 UTC m=+56.015636363" lastFinishedPulling="2026-01-05 20:07:24.837133856 +0000 UTC m=+131.546317740" observedRunningTime="2026-01-05 20:07:25.407959682 +0000 UTC m=+132.117143556" watchObservedRunningTime="2026-01-05 20:07:25.408941548 +0000 UTC m=+132.118125422" Jan 05 20:07:25 crc kubenswrapper[4754]: I0105 20:07:25.428678 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4zc9h" podStartSLOduration=5.828945983 podStartE2EDuration="1m24.428660491s" podCreationTimestamp="2026-01-05 20:06:01 +0000 UTC" firstStartedPulling="2026-01-05 20:06:06.27520054 +0000 UTC m=+52.984384424" lastFinishedPulling="2026-01-05 20:07:24.874915048 +0000 UTC m=+131.584098932" observedRunningTime="2026-01-05 20:07:25.426650959 +0000 UTC m=+132.135834833" watchObservedRunningTime="2026-01-05 20:07:25.428660491 +0000 UTC m=+132.137844355" Jan 05 20:07:25 crc kubenswrapper[4754]: I0105 20:07:25.448036 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qh2hz" podStartSLOduration=9.02273268 podStartE2EDuration="1m25.448020884s" podCreationTimestamp="2026-01-05 20:06:00 +0000 UTC" firstStartedPulling="2026-01-05 20:06:08.291846901 +0000 UTC m=+55.001030775" lastFinishedPulling="2026-01-05 20:07:24.717135065 +0000 UTC m=+131.426318979" observedRunningTime="2026-01-05 20:07:25.446155536 +0000 UTC m=+132.155339410" watchObservedRunningTime="2026-01-05 20:07:25.448020884 +0000 UTC m=+132.157204758" Jan 05 20:07:25 crc kubenswrapper[4754]: I0105 20:07:25.467516 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z52n8" podStartSLOduration=6.913073926 podStartE2EDuration="1m21.467499581s" podCreationTimestamp="2026-01-05 20:06:04 +0000 UTC" firstStartedPulling="2026-01-05 20:06:10.323183943 
+0000 UTC m=+57.032367817" lastFinishedPulling="2026-01-05 20:07:24.877609598 +0000 UTC m=+131.586793472" observedRunningTime="2026-01-05 20:07:25.463491387 +0000 UTC m=+132.172675261" watchObservedRunningTime="2026-01-05 20:07:25.467499581 +0000 UTC m=+132.176683455" Jan 05 20:07:25 crc kubenswrapper[4754]: I0105 20:07:25.597586 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="701a029a-d767-4681-8bf3-dffdc73e93f5" path="/var/lib/kubelet/pods/701a029a-d767-4681-8bf3-dffdc73e93f5/volumes" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.208449 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pg5fc" podStartSLOduration=10.469174946 podStartE2EDuration="1m26.208430217s" podCreationTimestamp="2026-01-05 20:06:01 +0000 UTC" firstStartedPulling="2026-01-05 20:06:09.306605063 +0000 UTC m=+56.015788937" lastFinishedPulling="2026-01-05 20:07:25.045860334 +0000 UTC m=+131.755044208" observedRunningTime="2026-01-05 20:07:25.486377562 +0000 UTC m=+132.195561436" watchObservedRunningTime="2026-01-05 20:07:27.208430217 +0000 UTC m=+133.917614101" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.219438 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-58444664d6-99b25"] Jan 05 20:07:27 crc kubenswrapper[4754]: E0105 20:07:27.220113 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09" containerName="registry-server" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.220232 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09" containerName="registry-server" Jan 05 20:07:27 crc kubenswrapper[4754]: E0105 20:07:27.220353 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09" containerName="extract-content" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.220449 
4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09" containerName="extract-content" Jan 05 20:07:27 crc kubenswrapper[4754]: E0105 20:07:27.220539 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb3dc040-e872-4c82-a70f-e74d6beb5027" containerName="pruner" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.220607 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb3dc040-e872-4c82-a70f-e74d6beb5027" containerName="pruner" Jan 05 20:07:27 crc kubenswrapper[4754]: E0105 20:07:27.220678 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="701a029a-d767-4681-8bf3-dffdc73e93f5" containerName="oauth-openshift" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.220740 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="701a029a-d767-4681-8bf3-dffdc73e93f5" containerName="oauth-openshift" Jan 05 20:07:27 crc kubenswrapper[4754]: E0105 20:07:27.220806 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="459e0439-ea70-4646-9cf7-2029f79e64b2" containerName="kube-multus-additional-cni-plugins" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.220868 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="459e0439-ea70-4646-9cf7-2029f79e64b2" containerName="kube-multus-additional-cni-plugins" Jan 05 20:07:27 crc kubenswrapper[4754]: E0105 20:07:27.220930 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09" containerName="extract-utilities" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.222569 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09" containerName="extract-utilities" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.224676 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="459e0439-ea70-4646-9cf7-2029f79e64b2" containerName="kube-multus-additional-cni-plugins" Jan 05 20:07:27 crc 
kubenswrapper[4754]: I0105 20:07:27.224720 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb3dc040-e872-4c82-a70f-e74d6beb5027" containerName="pruner" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.224737 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="38e76fbc-6ce3-4d0a-b29d-54fc0a5f9f09" containerName="registry-server" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.224758 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="701a029a-d767-4681-8bf3-dffdc73e93f5" containerName="oauth-openshift" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.225573 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-58444664d6-99b25" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.230013 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.230180 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.230265 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.232544 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.238250 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.238662 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.238977 4754 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.239264 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.241890 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.242548 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.244381 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.248286 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-58444664d6-99b25"] Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.273150 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c82fe254-bc85-4771-b358-017afaff55e9-v4-0-config-user-template-error\") pod \"oauth-openshift-58444664d6-99b25\" (UID: \"c82fe254-bc85-4771-b358-017afaff55e9\") " pod="openshift-authentication/oauth-openshift-58444664d6-99b25" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.273242 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c82fe254-bc85-4771-b358-017afaff55e9-audit-policies\") pod \"oauth-openshift-58444664d6-99b25\" (UID: \"c82fe254-bc85-4771-b358-017afaff55e9\") " pod="openshift-authentication/oauth-openshift-58444664d6-99b25" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 
20:07:27.273341 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nkb6\" (UniqueName: \"kubernetes.io/projected/c82fe254-bc85-4771-b358-017afaff55e9-kube-api-access-8nkb6\") pod \"oauth-openshift-58444664d6-99b25\" (UID: \"c82fe254-bc85-4771-b358-017afaff55e9\") " pod="openshift-authentication/oauth-openshift-58444664d6-99b25" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.273397 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c82fe254-bc85-4771-b358-017afaff55e9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-58444664d6-99b25\" (UID: \"c82fe254-bc85-4771-b358-017afaff55e9\") " pod="openshift-authentication/oauth-openshift-58444664d6-99b25" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.273458 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c82fe254-bc85-4771-b358-017afaff55e9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-58444664d6-99b25\" (UID: \"c82fe254-bc85-4771-b358-017afaff55e9\") " pod="openshift-authentication/oauth-openshift-58444664d6-99b25" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.273495 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c82fe254-bc85-4771-b358-017afaff55e9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-58444664d6-99b25\" (UID: \"c82fe254-bc85-4771-b358-017afaff55e9\") " pod="openshift-authentication/oauth-openshift-58444664d6-99b25" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.273550 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/c82fe254-bc85-4771-b358-017afaff55e9-audit-dir\") pod \"oauth-openshift-58444664d6-99b25\" (UID: \"c82fe254-bc85-4771-b358-017afaff55e9\") " pod="openshift-authentication/oauth-openshift-58444664d6-99b25" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.273584 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c82fe254-bc85-4771-b358-017afaff55e9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-58444664d6-99b25\" (UID: \"c82fe254-bc85-4771-b358-017afaff55e9\") " pod="openshift-authentication/oauth-openshift-58444664d6-99b25" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.273617 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c82fe254-bc85-4771-b358-017afaff55e9-v4-0-config-user-template-login\") pod \"oauth-openshift-58444664d6-99b25\" (UID: \"c82fe254-bc85-4771-b358-017afaff55e9\") " pod="openshift-authentication/oauth-openshift-58444664d6-99b25" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.273664 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c82fe254-bc85-4771-b358-017afaff55e9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-58444664d6-99b25\" (UID: \"c82fe254-bc85-4771-b358-017afaff55e9\") " pod="openshift-authentication/oauth-openshift-58444664d6-99b25" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.273701 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c82fe254-bc85-4771-b358-017afaff55e9-v4-0-config-system-session\") pod \"oauth-openshift-58444664d6-99b25\" 
(UID: \"c82fe254-bc85-4771-b358-017afaff55e9\") " pod="openshift-authentication/oauth-openshift-58444664d6-99b25" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.273736 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c82fe254-bc85-4771-b358-017afaff55e9-v4-0-config-system-router-certs\") pod \"oauth-openshift-58444664d6-99b25\" (UID: \"c82fe254-bc85-4771-b358-017afaff55e9\") " pod="openshift-authentication/oauth-openshift-58444664d6-99b25" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.273773 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c82fe254-bc85-4771-b358-017afaff55e9-v4-0-config-system-service-ca\") pod \"oauth-openshift-58444664d6-99b25\" (UID: \"c82fe254-bc85-4771-b358-017afaff55e9\") " pod="openshift-authentication/oauth-openshift-58444664d6-99b25" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.273810 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c82fe254-bc85-4771-b358-017afaff55e9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-58444664d6-99b25\" (UID: \"c82fe254-bc85-4771-b358-017afaff55e9\") " pod="openshift-authentication/oauth-openshift-58444664d6-99b25" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.274005 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.276545 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.277768 4754 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.284579 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.375498 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c82fe254-bc85-4771-b358-017afaff55e9-v4-0-config-system-service-ca\") pod \"oauth-openshift-58444664d6-99b25\" (UID: \"c82fe254-bc85-4771-b358-017afaff55e9\") " pod="openshift-authentication/oauth-openshift-58444664d6-99b25" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.375578 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c82fe254-bc85-4771-b358-017afaff55e9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-58444664d6-99b25\" (UID: \"c82fe254-bc85-4771-b358-017afaff55e9\") " pod="openshift-authentication/oauth-openshift-58444664d6-99b25" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.375819 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c82fe254-bc85-4771-b358-017afaff55e9-v4-0-config-user-template-error\") pod \"oauth-openshift-58444664d6-99b25\" (UID: \"c82fe254-bc85-4771-b358-017afaff55e9\") " pod="openshift-authentication/oauth-openshift-58444664d6-99b25" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.375877 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c82fe254-bc85-4771-b358-017afaff55e9-audit-policies\") pod \"oauth-openshift-58444664d6-99b25\" (UID: \"c82fe254-bc85-4771-b358-017afaff55e9\") " 
pod="openshift-authentication/oauth-openshift-58444664d6-99b25" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.375920 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nkb6\" (UniqueName: \"kubernetes.io/projected/c82fe254-bc85-4771-b358-017afaff55e9-kube-api-access-8nkb6\") pod \"oauth-openshift-58444664d6-99b25\" (UID: \"c82fe254-bc85-4771-b358-017afaff55e9\") " pod="openshift-authentication/oauth-openshift-58444664d6-99b25" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.375976 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c82fe254-bc85-4771-b358-017afaff55e9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-58444664d6-99b25\" (UID: \"c82fe254-bc85-4771-b358-017afaff55e9\") " pod="openshift-authentication/oauth-openshift-58444664d6-99b25" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.376026 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c82fe254-bc85-4771-b358-017afaff55e9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-58444664d6-99b25\" (UID: \"c82fe254-bc85-4771-b358-017afaff55e9\") " pod="openshift-authentication/oauth-openshift-58444664d6-99b25" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.376069 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c82fe254-bc85-4771-b358-017afaff55e9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-58444664d6-99b25\" (UID: \"c82fe254-bc85-4771-b358-017afaff55e9\") " pod="openshift-authentication/oauth-openshift-58444664d6-99b25" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.376118 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/c82fe254-bc85-4771-b358-017afaff55e9-audit-dir\") pod \"oauth-openshift-58444664d6-99b25\" (UID: \"c82fe254-bc85-4771-b358-017afaff55e9\") " pod="openshift-authentication/oauth-openshift-58444664d6-99b25" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.376148 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c82fe254-bc85-4771-b358-017afaff55e9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-58444664d6-99b25\" (UID: \"c82fe254-bc85-4771-b358-017afaff55e9\") " pod="openshift-authentication/oauth-openshift-58444664d6-99b25" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.376185 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c82fe254-bc85-4771-b358-017afaff55e9-v4-0-config-user-template-login\") pod \"oauth-openshift-58444664d6-99b25\" (UID: \"c82fe254-bc85-4771-b358-017afaff55e9\") " pod="openshift-authentication/oauth-openshift-58444664d6-99b25" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.376250 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c82fe254-bc85-4771-b358-017afaff55e9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-58444664d6-99b25\" (UID: \"c82fe254-bc85-4771-b358-017afaff55e9\") " pod="openshift-authentication/oauth-openshift-58444664d6-99b25" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.376365 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c82fe254-bc85-4771-b358-017afaff55e9-v4-0-config-system-session\") pod \"oauth-openshift-58444664d6-99b25\" (UID: \"c82fe254-bc85-4771-b358-017afaff55e9\") " 
pod="openshift-authentication/oauth-openshift-58444664d6-99b25" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.376403 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c82fe254-bc85-4771-b358-017afaff55e9-v4-0-config-system-router-certs\") pod \"oauth-openshift-58444664d6-99b25\" (UID: \"c82fe254-bc85-4771-b358-017afaff55e9\") " pod="openshift-authentication/oauth-openshift-58444664d6-99b25" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.377185 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c82fe254-bc85-4771-b358-017afaff55e9-v4-0-config-system-service-ca\") pod \"oauth-openshift-58444664d6-99b25\" (UID: \"c82fe254-bc85-4771-b358-017afaff55e9\") " pod="openshift-authentication/oauth-openshift-58444664d6-99b25" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.377557 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c82fe254-bc85-4771-b358-017afaff55e9-audit-dir\") pod \"oauth-openshift-58444664d6-99b25\" (UID: \"c82fe254-bc85-4771-b358-017afaff55e9\") " pod="openshift-authentication/oauth-openshift-58444664d6-99b25" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.377582 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c82fe254-bc85-4771-b358-017afaff55e9-audit-policies\") pod \"oauth-openshift-58444664d6-99b25\" (UID: \"c82fe254-bc85-4771-b358-017afaff55e9\") " pod="openshift-authentication/oauth-openshift-58444664d6-99b25" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.378630 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c82fe254-bc85-4771-b358-017afaff55e9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-58444664d6-99b25\" (UID: \"c82fe254-bc85-4771-b358-017afaff55e9\") " pod="openshift-authentication/oauth-openshift-58444664d6-99b25" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.379023 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c82fe254-bc85-4771-b358-017afaff55e9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-58444664d6-99b25\" (UID: \"c82fe254-bc85-4771-b358-017afaff55e9\") " pod="openshift-authentication/oauth-openshift-58444664d6-99b25" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.383848 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c82fe254-bc85-4771-b358-017afaff55e9-v4-0-config-user-template-error\") pod \"oauth-openshift-58444664d6-99b25\" (UID: \"c82fe254-bc85-4771-b358-017afaff55e9\") " pod="openshift-authentication/oauth-openshift-58444664d6-99b25" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.383933 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c82fe254-bc85-4771-b358-017afaff55e9-v4-0-config-user-template-login\") pod \"oauth-openshift-58444664d6-99b25\" (UID: \"c82fe254-bc85-4771-b358-017afaff55e9\") " pod="openshift-authentication/oauth-openshift-58444664d6-99b25" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.384079 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c82fe254-bc85-4771-b358-017afaff55e9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-58444664d6-99b25\" (UID: \"c82fe254-bc85-4771-b358-017afaff55e9\") " pod="openshift-authentication/oauth-openshift-58444664d6-99b25" Jan 
05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.386459 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c82fe254-bc85-4771-b358-017afaff55e9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-58444664d6-99b25\" (UID: \"c82fe254-bc85-4771-b358-017afaff55e9\") " pod="openshift-authentication/oauth-openshift-58444664d6-99b25" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.387205 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c82fe254-bc85-4771-b358-017afaff55e9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-58444664d6-99b25\" (UID: \"c82fe254-bc85-4771-b358-017afaff55e9\") " pod="openshift-authentication/oauth-openshift-58444664d6-99b25" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.388332 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c82fe254-bc85-4771-b358-017afaff55e9-v4-0-config-system-session\") pod \"oauth-openshift-58444664d6-99b25\" (UID: \"c82fe254-bc85-4771-b358-017afaff55e9\") " pod="openshift-authentication/oauth-openshift-58444664d6-99b25" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.394872 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c82fe254-bc85-4771-b358-017afaff55e9-v4-0-config-system-router-certs\") pod \"oauth-openshift-58444664d6-99b25\" (UID: \"c82fe254-bc85-4771-b358-017afaff55e9\") " pod="openshift-authentication/oauth-openshift-58444664d6-99b25" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.396879 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/c82fe254-bc85-4771-b358-017afaff55e9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-58444664d6-99b25\" (UID: \"c82fe254-bc85-4771-b358-017afaff55e9\") " pod="openshift-authentication/oauth-openshift-58444664d6-99b25" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.405097 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nkb6\" (UniqueName: \"kubernetes.io/projected/c82fe254-bc85-4771-b358-017afaff55e9-kube-api-access-8nkb6\") pod \"oauth-openshift-58444664d6-99b25\" (UID: \"c82fe254-bc85-4771-b358-017afaff55e9\") " pod="openshift-authentication/oauth-openshift-58444664d6-99b25" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.588059 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-58444664d6-99b25" Jan 05 20:07:27 crc kubenswrapper[4754]: I0105 20:07:27.849102 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-58444664d6-99b25"] Jan 05 20:07:28 crc kubenswrapper[4754]: I0105 20:07:28.389818 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-58444664d6-99b25" event={"ID":"c82fe254-bc85-4771-b358-017afaff55e9","Type":"ContainerStarted","Data":"8f2640d471e48268672de6a66612fd3c0576c69dd0cb3ec068d7028d391ae91e"} Jan 05 20:07:29 crc kubenswrapper[4754]: I0105 20:07:29.396320 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-58444664d6-99b25" event={"ID":"c82fe254-bc85-4771-b358-017afaff55e9","Type":"ContainerStarted","Data":"b644439300225a379828b8227d8178a1729687043266ca303a6bf8041c572bde"} Jan 05 20:07:29 crc kubenswrapper[4754]: I0105 20:07:29.397155 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-58444664d6-99b25" Jan 05 20:07:29 crc kubenswrapper[4754]: I0105 20:07:29.401266 4754 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-58444664d6-99b25" Jan 05 20:07:29 crc kubenswrapper[4754]: I0105 20:07:29.421922 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-58444664d6-99b25" podStartSLOduration=39.421905523 podStartE2EDuration="39.421905523s" podCreationTimestamp="2026-01-05 20:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:07:29.416935238 +0000 UTC m=+136.126119112" watchObservedRunningTime="2026-01-05 20:07:29.421905523 +0000 UTC m=+136.131089397" Jan 05 20:07:29 crc kubenswrapper[4754]: I0105 20:07:29.584193 4754 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 05 20:07:29 crc kubenswrapper[4754]: I0105 20:07:29.585548 4754 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 05 20:07:29 crc kubenswrapper[4754]: I0105 20:07:29.585774 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 20:07:29 crc kubenswrapper[4754]: I0105 20:07:29.586021 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://059ef5db0bf79ee2cf729934e418a13b36a27a29d42a390f7350213a9198f932" gracePeriod=15 Jan 05 20:07:29 crc kubenswrapper[4754]: I0105 20:07:29.586252 4754 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 05 20:07:29 crc kubenswrapper[4754]: E0105 20:07:29.586503 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 05 20:07:29 crc kubenswrapper[4754]: I0105 20:07:29.586487 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://c7292a301bdffd4a5f2a0aa5b9a6e4342a475c800e1a85b3a2587787e6fd533d" gracePeriod=15 Jan 05 20:07:29 crc kubenswrapper[4754]: I0105 20:07:29.586522 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 05 20:07:29 crc kubenswrapper[4754]: E0105 20:07:29.586638 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 05 20:07:29 crc kubenswrapper[4754]: I0105 20:07:29.586648 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" 
containerID="cri-o://bd54cc7c38e5bba32485b4473fdd72a5a0c5b372f1d7db0cece913e8e409d215" gracePeriod=15 Jan 05 20:07:29 crc kubenswrapper[4754]: I0105 20:07:29.586704 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://390bcbe50677ac4c8cb350aeafb60830c75290b73e7b53253b80159acfd961c0" gracePeriod=15 Jan 05 20:07:29 crc kubenswrapper[4754]: I0105 20:07:29.586597 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://38a03ee8953af50bf2fff5b7105068d2d7d842cdf0c574391a4d54749b01ca22" gracePeriod=15 Jan 05 20:07:29 crc kubenswrapper[4754]: I0105 20:07:29.586651 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 05 20:07:29 crc kubenswrapper[4754]: E0105 20:07:29.586816 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 05 20:07:29 crc kubenswrapper[4754]: I0105 20:07:29.586827 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 05 20:07:29 crc kubenswrapper[4754]: E0105 20:07:29.586842 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 05 20:07:29 crc kubenswrapper[4754]: I0105 20:07:29.586849 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 05 20:07:29 crc kubenswrapper[4754]: E0105 20:07:29.586859 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-syncer" Jan 05 20:07:29 crc kubenswrapper[4754]: I0105 20:07:29.586865 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 05 20:07:29 crc kubenswrapper[4754]: E0105 20:07:29.586876 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 05 20:07:29 crc kubenswrapper[4754]: I0105 20:07:29.586883 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 05 20:07:29 crc kubenswrapper[4754]: I0105 20:07:29.587000 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 05 20:07:29 crc kubenswrapper[4754]: I0105 20:07:29.587012 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 05 20:07:29 crc kubenswrapper[4754]: I0105 20:07:29.587023 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 05 20:07:29 crc kubenswrapper[4754]: I0105 20:07:29.587032 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 05 20:07:29 crc kubenswrapper[4754]: I0105 20:07:29.587039 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 05 20:07:29 crc kubenswrapper[4754]: I0105 20:07:29.600006 4754 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" 
podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 05 20:07:29 crc kubenswrapper[4754]: I0105 20:07:29.628333 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 05 20:07:29 crc kubenswrapper[4754]: I0105 20:07:29.706702 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 20:07:29 crc kubenswrapper[4754]: I0105 20:07:29.706776 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 20:07:29 crc kubenswrapper[4754]: I0105 20:07:29.706817 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 20:07:29 crc kubenswrapper[4754]: I0105 20:07:29.706854 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 20:07:29 crc kubenswrapper[4754]: I0105 20:07:29.706878 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 20:07:29 crc kubenswrapper[4754]: I0105 20:07:29.706898 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 20:07:29 crc kubenswrapper[4754]: I0105 20:07:29.706923 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 20:07:29 crc kubenswrapper[4754]: I0105 20:07:29.707052 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 20:07:29 crc kubenswrapper[4754]: I0105 20:07:29.809124 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 20:07:29 crc kubenswrapper[4754]: I0105 20:07:29.809220 4754 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 20:07:29 crc kubenswrapper[4754]: I0105 20:07:29.809257 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 20:07:29 crc kubenswrapper[4754]: I0105 20:07:29.809279 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 20:07:29 crc kubenswrapper[4754]: I0105 20:07:29.809316 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 20:07:29 crc kubenswrapper[4754]: I0105 20:07:29.809341 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 20:07:29 crc kubenswrapper[4754]: I0105 20:07:29.809347 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 20:07:29 crc kubenswrapper[4754]: I0105 20:07:29.809379 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 20:07:29 crc kubenswrapper[4754]: I0105 20:07:29.809364 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 20:07:29 crc kubenswrapper[4754]: I0105 20:07:29.809427 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 20:07:29 crc kubenswrapper[4754]: I0105 20:07:29.809413 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 20:07:29 crc kubenswrapper[4754]: I0105 20:07:29.809405 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" 
(UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 20:07:29 crc kubenswrapper[4754]: I0105 20:07:29.809454 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 20:07:29 crc kubenswrapper[4754]: I0105 20:07:29.809362 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 20:07:29 crc kubenswrapper[4754]: I0105 20:07:29.809523 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 20:07:29 crc kubenswrapper[4754]: I0105 20:07:29.809617 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 20:07:29 crc kubenswrapper[4754]: I0105 20:07:29.927390 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 20:07:29 crc kubenswrapper[4754]: W0105 20:07:29.958883 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-5b2debd90724e861466dff5c34cdb75ff37c3487074ce09276d9e508129a37f4 WatchSource:0}: Error finding container 5b2debd90724e861466dff5c34cdb75ff37c3487074ce09276d9e508129a37f4: Status 404 returned error can't find the container with id 5b2debd90724e861466dff5c34cdb75ff37c3487074ce09276d9e508129a37f4 Jan 05 20:07:29 crc kubenswrapper[4754]: E0105 20:07:29.964984 4754 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.201:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1887ee88c58d5a8c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-05 20:07:29.963547276 +0000 UTC m=+136.672731150,LastTimestamp:2026-01-05 20:07:29.963547276 +0000 UTC m=+136.672731150,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 05 20:07:30 crc kubenswrapper[4754]: I0105 20:07:30.407856 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"5b2debd90724e861466dff5c34cdb75ff37c3487074ce09276d9e508129a37f4"} Jan 05 20:07:31 crc kubenswrapper[4754]: I0105 20:07:31.142348 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qh2hz" Jan 05 20:07:31 crc kubenswrapper[4754]: I0105 20:07:31.142451 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qh2hz" Jan 05 20:07:31 crc kubenswrapper[4754]: I0105 20:07:31.230749 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qh2hz" Jan 05 20:07:31 crc kubenswrapper[4754]: I0105 20:07:31.231614 4754 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:31 crc kubenswrapper[4754]: I0105 20:07:31.232366 4754 status_manager.go:851] "Failed to get status for pod" podUID="20730210-a087-46e2-a311-ffa8a3bc370d" pod="openshift-marketplace/community-operators-qh2hz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qh2hz\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:31 crc kubenswrapper[4754]: I0105 20:07:31.363403 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pg5fc" Jan 05 20:07:31 crc kubenswrapper[4754]: I0105 20:07:31.363496 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pg5fc" Jan 05 20:07:31 crc 
kubenswrapper[4754]: I0105 20:07:31.426097 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pg5fc" Jan 05 20:07:31 crc kubenswrapper[4754]: I0105 20:07:31.427536 4754 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:31 crc kubenswrapper[4754]: I0105 20:07:31.428056 4754 status_manager.go:851] "Failed to get status for pod" podUID="20730210-a087-46e2-a311-ffa8a3bc370d" pod="openshift-marketplace/community-operators-qh2hz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qh2hz\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:31 crc kubenswrapper[4754]: I0105 20:07:31.428789 4754 status_manager.go:851] "Failed to get status for pod" podUID="08caaa2f-775e-45c6-b097-b0550f593ff3" pod="openshift-marketplace/certified-operators-pg5fc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-pg5fc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:31 crc kubenswrapper[4754]: I0105 20:07:31.471117 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qh2hz" Jan 05 20:07:31 crc kubenswrapper[4754]: I0105 20:07:31.472047 4754 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 
05 20:07:31 crc kubenswrapper[4754]: I0105 20:07:31.472718 4754 status_manager.go:851] "Failed to get status for pod" podUID="20730210-a087-46e2-a311-ffa8a3bc370d" pod="openshift-marketplace/community-operators-qh2hz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qh2hz\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:31 crc kubenswrapper[4754]: I0105 20:07:31.473234 4754 status_manager.go:851] "Failed to get status for pod" podUID="08caaa2f-775e-45c6-b097-b0550f593ff3" pod="openshift-marketplace/certified-operators-pg5fc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-pg5fc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:31 crc kubenswrapper[4754]: I0105 20:07:31.494086 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pg5fc" Jan 05 20:07:31 crc kubenswrapper[4754]: I0105 20:07:31.494795 4754 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:31 crc kubenswrapper[4754]: I0105 20:07:31.495405 4754 status_manager.go:851] "Failed to get status for pod" podUID="20730210-a087-46e2-a311-ffa8a3bc370d" pod="openshift-marketplace/community-operators-qh2hz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qh2hz\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:31 crc kubenswrapper[4754]: I0105 20:07:31.495853 4754 status_manager.go:851] "Failed to get status for pod" podUID="08caaa2f-775e-45c6-b097-b0550f593ff3" 
pod="openshift-marketplace/certified-operators-pg5fc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-pg5fc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:31 crc kubenswrapper[4754]: I0105 20:07:31.776581 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4zc9h" Jan 05 20:07:31 crc kubenswrapper[4754]: I0105 20:07:31.776884 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4zc9h" Jan 05 20:07:31 crc kubenswrapper[4754]: I0105 20:07:31.838251 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4zc9h" Jan 05 20:07:31 crc kubenswrapper[4754]: I0105 20:07:31.838714 4754 status_manager.go:851] "Failed to get status for pod" podUID="08caaa2f-775e-45c6-b097-b0550f593ff3" pod="openshift-marketplace/certified-operators-pg5fc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-pg5fc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:31 crc kubenswrapper[4754]: I0105 20:07:31.838974 4754 status_manager.go:851] "Failed to get status for pod" podUID="9653b33c-3e18-4d7f-81ed-febff4a00a35" pod="openshift-marketplace/certified-operators-4zc9h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4zc9h\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:31 crc kubenswrapper[4754]: I0105 20:07:31.839279 4754 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: 
connection refused" Jan 05 20:07:31 crc kubenswrapper[4754]: I0105 20:07:31.839537 4754 status_manager.go:851] "Failed to get status for pod" podUID="20730210-a087-46e2-a311-ffa8a3bc370d" pod="openshift-marketplace/community-operators-qh2hz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qh2hz\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:31 crc kubenswrapper[4754]: I0105 20:07:31.847450 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xm8jv" Jan 05 20:07:31 crc kubenswrapper[4754]: I0105 20:07:31.847515 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xm8jv" Jan 05 20:07:31 crc kubenswrapper[4754]: I0105 20:07:31.892873 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xm8jv" Jan 05 20:07:31 crc kubenswrapper[4754]: I0105 20:07:31.893452 4754 status_manager.go:851] "Failed to get status for pod" podUID="ba326117-1dcc-4468-80bb-a54b9cc83c01" pod="openshift-marketplace/community-operators-xm8jv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xm8jv\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:31 crc kubenswrapper[4754]: I0105 20:07:31.893858 4754 status_manager.go:851] "Failed to get status for pod" podUID="9653b33c-3e18-4d7f-81ed-febff4a00a35" pod="openshift-marketplace/certified-operators-4zc9h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4zc9h\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:31 crc kubenswrapper[4754]: I0105 20:07:31.894181 4754 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:31 crc kubenswrapper[4754]: I0105 20:07:31.894581 4754 status_manager.go:851] "Failed to get status for pod" podUID="20730210-a087-46e2-a311-ffa8a3bc370d" pod="openshift-marketplace/community-operators-qh2hz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qh2hz\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:31 crc kubenswrapper[4754]: I0105 20:07:31.894840 4754 status_manager.go:851] "Failed to get status for pod" podUID="08caaa2f-775e-45c6-b097-b0550f593ff3" pod="openshift-marketplace/certified-operators-pg5fc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-pg5fc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:32 crc kubenswrapper[4754]: I0105 20:07:32.426664 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"854c952394bbd16261d2abf80b9b7d3425ae9285c4702f8ce095496946efc608"} Jan 05 20:07:32 crc kubenswrapper[4754]: I0105 20:07:32.429439 4754 status_manager.go:851] "Failed to get status for pod" podUID="08caaa2f-775e-45c6-b097-b0550f593ff3" pod="openshift-marketplace/certified-operators-pg5fc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-pg5fc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:32 crc kubenswrapper[4754]: I0105 20:07:32.429771 4754 status_manager.go:851] "Failed to get status for pod" podUID="ba326117-1dcc-4468-80bb-a54b9cc83c01" 
pod="openshift-marketplace/community-operators-xm8jv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xm8jv\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:32 crc kubenswrapper[4754]: I0105 20:07:32.430139 4754 status_manager.go:851] "Failed to get status for pod" podUID="9653b33c-3e18-4d7f-81ed-febff4a00a35" pod="openshift-marketplace/certified-operators-4zc9h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4zc9h\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:32 crc kubenswrapper[4754]: I0105 20:07:32.430459 4754 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:32 crc kubenswrapper[4754]: I0105 20:07:32.430782 4754 status_manager.go:851] "Failed to get status for pod" podUID="20730210-a087-46e2-a311-ffa8a3bc370d" pod="openshift-marketplace/community-operators-qh2hz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qh2hz\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:32 crc kubenswrapper[4754]: I0105 20:07:32.431873 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 05 20:07:32 crc kubenswrapper[4754]: I0105 20:07:32.432686 4754 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c7292a301bdffd4a5f2a0aa5b9a6e4342a475c800e1a85b3a2587787e6fd533d" exitCode=0 Jan 05 20:07:32 crc kubenswrapper[4754]: I0105 20:07:32.432722 
4754 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="bd54cc7c38e5bba32485b4473fdd72a5a0c5b372f1d7db0cece913e8e409d215" exitCode=0 Jan 05 20:07:32 crc kubenswrapper[4754]: I0105 20:07:32.432735 4754 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="38a03ee8953af50bf2fff5b7105068d2d7d842cdf0c574391a4d54749b01ca22" exitCode=0 Jan 05 20:07:32 crc kubenswrapper[4754]: I0105 20:07:32.432748 4754 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="390bcbe50677ac4c8cb350aeafb60830c75290b73e7b53253b80159acfd961c0" exitCode=2 Jan 05 20:07:32 crc kubenswrapper[4754]: I0105 20:07:32.432758 4754 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="059ef5db0bf79ee2cf729934e418a13b36a27a29d42a390f7350213a9198f932" exitCode=0 Jan 05 20:07:32 crc kubenswrapper[4754]: I0105 20:07:32.436070 4754 generic.go:334] "Generic (PLEG): container finished" podID="850cae84-31d6-43c8-a3b8-9ebb3e5b158a" containerID="9a5fd205655231e0eeb0541b0c198f81e3637fcf4fab4f1347d21168e97d1bd1" exitCode=0 Jan 05 20:07:32 crc kubenswrapper[4754]: I0105 20:07:32.436187 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"850cae84-31d6-43c8-a3b8-9ebb3e5b158a","Type":"ContainerDied","Data":"9a5fd205655231e0eeb0541b0c198f81e3637fcf4fab4f1347d21168e97d1bd1"} Jan 05 20:07:32 crc kubenswrapper[4754]: I0105 20:07:32.437874 4754 status_manager.go:851] "Failed to get status for pod" podUID="850cae84-31d6-43c8-a3b8-9ebb3e5b158a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:32 crc kubenswrapper[4754]: I0105 20:07:32.438118 4754 status_manager.go:851] "Failed 
to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:32 crc kubenswrapper[4754]: I0105 20:07:32.438367 4754 status_manager.go:851] "Failed to get status for pod" podUID="9653b33c-3e18-4d7f-81ed-febff4a00a35" pod="openshift-marketplace/certified-operators-4zc9h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4zc9h\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:32 crc kubenswrapper[4754]: I0105 20:07:32.438645 4754 status_manager.go:851] "Failed to get status for pod" podUID="20730210-a087-46e2-a311-ffa8a3bc370d" pod="openshift-marketplace/community-operators-qh2hz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qh2hz\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:32 crc kubenswrapper[4754]: I0105 20:07:32.439012 4754 status_manager.go:851] "Failed to get status for pod" podUID="08caaa2f-775e-45c6-b097-b0550f593ff3" pod="openshift-marketplace/certified-operators-pg5fc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-pg5fc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:32 crc kubenswrapper[4754]: I0105 20:07:32.439344 4754 status_manager.go:851] "Failed to get status for pod" podUID="ba326117-1dcc-4468-80bb-a54b9cc83c01" pod="openshift-marketplace/community-operators-xm8jv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xm8jv\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:32 crc kubenswrapper[4754]: I0105 20:07:32.486830 4754 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4zc9h" Jan 05 20:07:32 crc kubenswrapper[4754]: I0105 20:07:32.487407 4754 status_manager.go:851] "Failed to get status for pod" podUID="850cae84-31d6-43c8-a3b8-9ebb3e5b158a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:32 crc kubenswrapper[4754]: I0105 20:07:32.487756 4754 status_manager.go:851] "Failed to get status for pod" podUID="9653b33c-3e18-4d7f-81ed-febff4a00a35" pod="openshift-marketplace/certified-operators-4zc9h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4zc9h\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:32 crc kubenswrapper[4754]: I0105 20:07:32.488069 4754 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:32 crc kubenswrapper[4754]: I0105 20:07:32.488387 4754 status_manager.go:851] "Failed to get status for pod" podUID="20730210-a087-46e2-a311-ffa8a3bc370d" pod="openshift-marketplace/community-operators-qh2hz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qh2hz\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:32 crc kubenswrapper[4754]: I0105 20:07:32.488690 4754 status_manager.go:851] "Failed to get status for pod" podUID="08caaa2f-775e-45c6-b097-b0550f593ff3" pod="openshift-marketplace/certified-operators-pg5fc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-pg5fc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:32 crc kubenswrapper[4754]: I0105 20:07:32.489050 4754 status_manager.go:851] "Failed to get status for pod" podUID="ba326117-1dcc-4468-80bb-a54b9cc83c01" pod="openshift-marketplace/community-operators-xm8jv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xm8jv\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:32 crc kubenswrapper[4754]: I0105 20:07:32.510623 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xm8jv" Jan 05 20:07:32 crc kubenswrapper[4754]: I0105 20:07:32.511001 4754 status_manager.go:851] "Failed to get status for pod" podUID="08caaa2f-775e-45c6-b097-b0550f593ff3" pod="openshift-marketplace/certified-operators-pg5fc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-pg5fc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:32 crc kubenswrapper[4754]: I0105 20:07:32.511312 4754 status_manager.go:851] "Failed to get status for pod" podUID="ba326117-1dcc-4468-80bb-a54b9cc83c01" pod="openshift-marketplace/community-operators-xm8jv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xm8jv\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:32 crc kubenswrapper[4754]: I0105 20:07:32.511551 4754 status_manager.go:851] "Failed to get status for pod" podUID="850cae84-31d6-43c8-a3b8-9ebb3e5b158a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:32 crc kubenswrapper[4754]: I0105 20:07:32.511769 
4754 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:32 crc kubenswrapper[4754]: I0105 20:07:32.511996 4754 status_manager.go:851] "Failed to get status for pod" podUID="9653b33c-3e18-4d7f-81ed-febff4a00a35" pod="openshift-marketplace/certified-operators-4zc9h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4zc9h\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:32 crc kubenswrapper[4754]: I0105 20:07:32.512209 4754 status_manager.go:851] "Failed to get status for pod" podUID="20730210-a087-46e2-a311-ffa8a3bc370d" pod="openshift-marketplace/community-operators-qh2hz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qh2hz\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:32 crc kubenswrapper[4754]: I0105 20:07:32.784679 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 05 20:07:32 crc kubenswrapper[4754]: I0105 20:07:32.785458 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 20:07:32 crc kubenswrapper[4754]: I0105 20:07:32.785931 4754 status_manager.go:851] "Failed to get status for pod" podUID="ba326117-1dcc-4468-80bb-a54b9cc83c01" pod="openshift-marketplace/community-operators-xm8jv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xm8jv\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:32 crc kubenswrapper[4754]: I0105 20:07:32.786518 4754 status_manager.go:851] "Failed to get status for pod" podUID="850cae84-31d6-43c8-a3b8-9ebb3e5b158a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:32 crc kubenswrapper[4754]: I0105 20:07:32.786813 4754 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:32 crc kubenswrapper[4754]: I0105 20:07:32.787096 4754 status_manager.go:851] "Failed to get status for pod" podUID="9653b33c-3e18-4d7f-81ed-febff4a00a35" pod="openshift-marketplace/certified-operators-4zc9h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4zc9h\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:32 crc kubenswrapper[4754]: I0105 20:07:32.787333 4754 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:32 crc kubenswrapper[4754]: I0105 20:07:32.787542 4754 status_manager.go:851] "Failed to get status for pod" podUID="20730210-a087-46e2-a311-ffa8a3bc370d" pod="openshift-marketplace/community-operators-qh2hz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qh2hz\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:32 crc kubenswrapper[4754]: I0105 20:07:32.787922 4754 status_manager.go:851] "Failed to get status for pod" podUID="08caaa2f-775e-45c6-b097-b0550f593ff3" pod="openshift-marketplace/certified-operators-pg5fc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-pg5fc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:32 crc kubenswrapper[4754]: I0105 20:07:32.858145 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 05 20:07:32 crc kubenswrapper[4754]: I0105 20:07:32.858208 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 05 20:07:32 crc kubenswrapper[4754]: I0105 20:07:32.858245 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 05 20:07:32 crc kubenswrapper[4754]: 
I0105 20:07:32.858435 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 20:07:32 crc kubenswrapper[4754]: I0105 20:07:32.858497 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 20:07:32 crc kubenswrapper[4754]: I0105 20:07:32.858519 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 20:07:32 crc kubenswrapper[4754]: I0105 20:07:32.858577 4754 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 05 20:07:32 crc kubenswrapper[4754]: I0105 20:07:32.858594 4754 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 05 20:07:32 crc kubenswrapper[4754]: I0105 20:07:32.858603 4754 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.389679 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-f6htm" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.389738 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-f6htm" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.428965 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-f6htm" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.429948 4754 status_manager.go:851] "Failed to get status for pod" podUID="ba326117-1dcc-4468-80bb-a54b9cc83c01" pod="openshift-marketplace/community-operators-xm8jv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xm8jv\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.430435 4754 status_manager.go:851] "Failed to get status for pod" podUID="e16df7cb-3531-4b4e-ad51-275d4ff495d0" 
pod="openshift-marketplace/redhat-marketplace-f6htm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-f6htm\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.430925 4754 status_manager.go:851] "Failed to get status for pod" podUID="850cae84-31d6-43c8-a3b8-9ebb3e5b158a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.431250 4754 status_manager.go:851] "Failed to get status for pod" podUID="9653b33c-3e18-4d7f-81ed-febff4a00a35" pod="openshift-marketplace/certified-operators-4zc9h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4zc9h\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.431534 4754 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.431811 4754 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.432054 4754 status_manager.go:851] "Failed to get status for pod" podUID="20730210-a087-46e2-a311-ffa8a3bc370d" 
pod="openshift-marketplace/community-operators-qh2hz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qh2hz\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.432428 4754 status_manager.go:851] "Failed to get status for pod" podUID="08caaa2f-775e-45c6-b097-b0550f593ff3" pod="openshift-marketplace/certified-operators-pg5fc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-pg5fc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.445309 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.446468 4754 scope.go:117] "RemoveContainer" containerID="c7292a301bdffd4a5f2a0aa5b9a6e4342a475c800e1a85b3a2587787e6fd533d" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.446617 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.463562 4754 scope.go:117] "RemoveContainer" containerID="bd54cc7c38e5bba32485b4473fdd72a5a0c5b372f1d7db0cece913e8e409d215" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.465514 4754 status_manager.go:851] "Failed to get status for pod" podUID="08caaa2f-775e-45c6-b097-b0550f593ff3" pod="openshift-marketplace/certified-operators-pg5fc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-pg5fc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.466056 4754 status_manager.go:851] "Failed to get status for pod" podUID="ba326117-1dcc-4468-80bb-a54b9cc83c01" pod="openshift-marketplace/community-operators-xm8jv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xm8jv\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.466353 4754 status_manager.go:851] "Failed to get status for pod" podUID="e16df7cb-3531-4b4e-ad51-275d4ff495d0" pod="openshift-marketplace/redhat-marketplace-f6htm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-f6htm\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.467038 4754 status_manager.go:851] "Failed to get status for pod" podUID="850cae84-31d6-43c8-a3b8-9ebb3e5b158a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.467483 4754 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.467751 4754 status_manager.go:851] "Failed to get status for pod" podUID="9653b33c-3e18-4d7f-81ed-febff4a00a35" pod="openshift-marketplace/certified-operators-4zc9h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4zc9h\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.468055 4754 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.468377 4754 status_manager.go:851] "Failed to get status for pod" podUID="20730210-a087-46e2-a311-ffa8a3bc370d" pod="openshift-marketplace/community-operators-qh2hz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qh2hz\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.483730 4754 scope.go:117] "RemoveContainer" containerID="38a03ee8953af50bf2fff5b7105068d2d7d842cdf0c574391a4d54749b01ca22" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.500210 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-f6htm" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.500678 4754 status_manager.go:851] "Failed to get status for pod" podUID="e16df7cb-3531-4b4e-ad51-275d4ff495d0" 
pod="openshift-marketplace/redhat-marketplace-f6htm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-f6htm\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.500843 4754 status_manager.go:851] "Failed to get status for pod" podUID="ba326117-1dcc-4468-80bb-a54b9cc83c01" pod="openshift-marketplace/community-operators-xm8jv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xm8jv\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.501094 4754 status_manager.go:851] "Failed to get status for pod" podUID="850cae84-31d6-43c8-a3b8-9ebb3e5b158a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.501443 4754 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.501789 4754 status_manager.go:851] "Failed to get status for pod" podUID="9653b33c-3e18-4d7f-81ed-febff4a00a35" pod="openshift-marketplace/certified-operators-4zc9h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4zc9h\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.502487 4754 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.502778 4754 status_manager.go:851] "Failed to get status for pod" podUID="20730210-a087-46e2-a311-ffa8a3bc370d" pod="openshift-marketplace/community-operators-qh2hz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qh2hz\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.503030 4754 status_manager.go:851] "Failed to get status for pod" podUID="08caaa2f-775e-45c6-b097-b0550f593ff3" pod="openshift-marketplace/certified-operators-pg5fc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-pg5fc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.506253 4754 scope.go:117] "RemoveContainer" containerID="390bcbe50677ac4c8cb350aeafb60830c75290b73e7b53253b80159acfd961c0" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.520207 4754 scope.go:117] "RemoveContainer" containerID="059ef5db0bf79ee2cf729934e418a13b36a27a29d42a390f7350213a9198f932" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.537869 4754 scope.go:117] "RemoveContainer" containerID="c23530b438be3aca6b0fe35d3db82053f6657a6dc82dce88febba5f987835908" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.591439 4754 status_manager.go:851] "Failed to get status for pod" podUID="e16df7cb-3531-4b4e-ad51-275d4ff495d0" pod="openshift-marketplace/redhat-marketplace-f6htm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-f6htm\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 
20:07:33.591914 4754 status_manager.go:851] "Failed to get status for pod" podUID="ba326117-1dcc-4468-80bb-a54b9cc83c01" pod="openshift-marketplace/community-operators-xm8jv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xm8jv\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.592138 4754 status_manager.go:851] "Failed to get status for pod" podUID="850cae84-31d6-43c8-a3b8-9ebb3e5b158a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.592341 4754 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.592545 4754 status_manager.go:851] "Failed to get status for pod" podUID="9653b33c-3e18-4d7f-81ed-febff4a00a35" pod="openshift-marketplace/certified-operators-4zc9h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4zc9h\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.592784 4754 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.593505 
4754 status_manager.go:851] "Failed to get status for pod" podUID="20730210-a087-46e2-a311-ffa8a3bc370d" pod="openshift-marketplace/community-operators-qh2hz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qh2hz\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.593770 4754 status_manager.go:851] "Failed to get status for pod" podUID="08caaa2f-775e-45c6-b097-b0550f593ff3" pod="openshift-marketplace/certified-operators-pg5fc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-pg5fc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.595284 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.702694 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.703525 4754 status_manager.go:851] "Failed to get status for pod" podUID="08caaa2f-775e-45c6-b097-b0550f593ff3" pod="openshift-marketplace/certified-operators-pg5fc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-pg5fc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.703981 4754 status_manager.go:851] "Failed to get status for pod" podUID="e16df7cb-3531-4b4e-ad51-275d4ff495d0" pod="openshift-marketplace/redhat-marketplace-f6htm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-f6htm\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.704380 4754 status_manager.go:851] "Failed to get status for pod" podUID="ba326117-1dcc-4468-80bb-a54b9cc83c01" pod="openshift-marketplace/community-operators-xm8jv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xm8jv\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.704724 4754 status_manager.go:851] "Failed to get status for pod" podUID="850cae84-31d6-43c8-a3b8-9ebb3e5b158a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.705141 4754 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.705492 4754 status_manager.go:851] "Failed to get status for pod" podUID="9653b33c-3e18-4d7f-81ed-febff4a00a35" pod="openshift-marketplace/certified-operators-4zc9h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4zc9h\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.705905 4754 status_manager.go:851] "Failed to get status for pod" podUID="20730210-a087-46e2-a311-ffa8a3bc370d" pod="openshift-marketplace/community-operators-qh2hz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qh2hz\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.747345 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mq9f8" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.747415 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mq9f8" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.771157 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/850cae84-31d6-43c8-a3b8-9ebb3e5b158a-kube-api-access\") pod \"850cae84-31d6-43c8-a3b8-9ebb3e5b158a\" (UID: \"850cae84-31d6-43c8-a3b8-9ebb3e5b158a\") " Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.771259 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/850cae84-31d6-43c8-a3b8-9ebb3e5b158a-kubelet-dir\") pod 
\"850cae84-31d6-43c8-a3b8-9ebb3e5b158a\" (UID: \"850cae84-31d6-43c8-a3b8-9ebb3e5b158a\") " Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.771331 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/850cae84-31d6-43c8-a3b8-9ebb3e5b158a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "850cae84-31d6-43c8-a3b8-9ebb3e5b158a" (UID: "850cae84-31d6-43c8-a3b8-9ebb3e5b158a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.771387 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/850cae84-31d6-43c8-a3b8-9ebb3e5b158a-var-lock\") pod \"850cae84-31d6-43c8-a3b8-9ebb3e5b158a\" (UID: \"850cae84-31d6-43c8-a3b8-9ebb3e5b158a\") " Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.771475 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/850cae84-31d6-43c8-a3b8-9ebb3e5b158a-var-lock" (OuterVolumeSpecName: "var-lock") pod "850cae84-31d6-43c8-a3b8-9ebb3e5b158a" (UID: "850cae84-31d6-43c8-a3b8-9ebb3e5b158a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.771879 4754 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/850cae84-31d6-43c8-a3b8-9ebb3e5b158a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.771907 4754 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/850cae84-31d6-43c8-a3b8-9ebb3e5b158a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.779479 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/850cae84-31d6-43c8-a3b8-9ebb3e5b158a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "850cae84-31d6-43c8-a3b8-9ebb3e5b158a" (UID: "850cae84-31d6-43c8-a3b8-9ebb3e5b158a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.809973 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mq9f8" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.810480 4754 status_manager.go:851] "Failed to get status for pod" podUID="20730210-a087-46e2-a311-ffa8a3bc370d" pod="openshift-marketplace/community-operators-qh2hz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qh2hz\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.810764 4754 status_manager.go:851] "Failed to get status for pod" podUID="08caaa2f-775e-45c6-b097-b0550f593ff3" pod="openshift-marketplace/certified-operators-pg5fc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-pg5fc\": dial tcp 38.102.83.201:6443: connect: connection 
refused" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.811102 4754 status_manager.go:851] "Failed to get status for pod" podUID="e75e6f92-c2e3-4a65-be91-5921e2426aaf" pod="openshift-marketplace/redhat-marketplace-mq9f8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mq9f8\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.811491 4754 status_manager.go:851] "Failed to get status for pod" podUID="ba326117-1dcc-4468-80bb-a54b9cc83c01" pod="openshift-marketplace/community-operators-xm8jv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xm8jv\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.811907 4754 status_manager.go:851] "Failed to get status for pod" podUID="e16df7cb-3531-4b4e-ad51-275d4ff495d0" pod="openshift-marketplace/redhat-marketplace-f6htm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-f6htm\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.812275 4754 status_manager.go:851] "Failed to get status for pod" podUID="850cae84-31d6-43c8-a3b8-9ebb3e5b158a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.812628 4754 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection 
refused" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.812934 4754 status_manager.go:851] "Failed to get status for pod" podUID="9653b33c-3e18-4d7f-81ed-febff4a00a35" pod="openshift-marketplace/certified-operators-4zc9h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4zc9h\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:33 crc kubenswrapper[4754]: I0105 20:07:33.873528 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/850cae84-31d6-43c8-a3b8-9ebb3e5b158a-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 05 20:07:34 crc kubenswrapper[4754]: I0105 20:07:34.390425 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-z52n8" Jan 05 20:07:34 crc kubenswrapper[4754]: I0105 20:07:34.390507 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z52n8" Jan 05 20:07:34 crc kubenswrapper[4754]: I0105 20:07:34.450941 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z52n8" Jan 05 20:07:34 crc kubenswrapper[4754]: I0105 20:07:34.451717 4754 status_manager.go:851] "Failed to get status for pod" podUID="ba326117-1dcc-4468-80bb-a54b9cc83c01" pod="openshift-marketplace/community-operators-xm8jv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xm8jv\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:34 crc kubenswrapper[4754]: I0105 20:07:34.452203 4754 status_manager.go:851] "Failed to get status for pod" podUID="e16df7cb-3531-4b4e-ad51-275d4ff495d0" pod="openshift-marketplace/redhat-marketplace-f6htm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-f6htm\": dial tcp 
38.102.83.201:6443: connect: connection refused" Jan 05 20:07:34 crc kubenswrapper[4754]: I0105 20:07:34.452939 4754 status_manager.go:851] "Failed to get status for pod" podUID="850cae84-31d6-43c8-a3b8-9ebb3e5b158a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:34 crc kubenswrapper[4754]: I0105 20:07:34.453927 4754 status_manager.go:851] "Failed to get status for pod" podUID="9653b33c-3e18-4d7f-81ed-febff4a00a35" pod="openshift-marketplace/certified-operators-4zc9h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4zc9h\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:34 crc kubenswrapper[4754]: I0105 20:07:34.454252 4754 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:34 crc kubenswrapper[4754]: I0105 20:07:34.454612 4754 status_manager.go:851] "Failed to get status for pod" podUID="20730210-a087-46e2-a311-ffa8a3bc370d" pod="openshift-marketplace/community-operators-qh2hz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qh2hz\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:34 crc kubenswrapper[4754]: I0105 20:07:34.454881 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 05 20:07:34 crc kubenswrapper[4754]: I0105 20:07:34.454939 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"850cae84-31d6-43c8-a3b8-9ebb3e5b158a","Type":"ContainerDied","Data":"caa1cd12e7437c8cc96d5512d1cc4df22c276e15e09f891e8a0b2994675d065e"} Jan 05 20:07:34 crc kubenswrapper[4754]: I0105 20:07:34.454997 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="caa1cd12e7437c8cc96d5512d1cc4df22c276e15e09f891e8a0b2994675d065e" Jan 05 20:07:34 crc kubenswrapper[4754]: I0105 20:07:34.455012 4754 status_manager.go:851] "Failed to get status for pod" podUID="31687145-8349-44b0-9e77-de73e4738916" pod="openshift-marketplace/redhat-operators-z52n8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-z52n8\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:34 crc kubenswrapper[4754]: I0105 20:07:34.455502 4754 status_manager.go:851] "Failed to get status for pod" podUID="08caaa2f-775e-45c6-b097-b0550f593ff3" pod="openshift-marketplace/certified-operators-pg5fc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-pg5fc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:34 crc kubenswrapper[4754]: I0105 20:07:34.456041 4754 status_manager.go:851] "Failed to get status for pod" podUID="e75e6f92-c2e3-4a65-be91-5921e2426aaf" pod="openshift-marketplace/redhat-marketplace-mq9f8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mq9f8\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:34 crc kubenswrapper[4754]: I0105 20:07:34.474905 4754 status_manager.go:851] "Failed to get status for pod" podUID="e16df7cb-3531-4b4e-ad51-275d4ff495d0" 
pod="openshift-marketplace/redhat-marketplace-f6htm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-f6htm\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:34 crc kubenswrapper[4754]: I0105 20:07:34.476124 4754 status_manager.go:851] "Failed to get status for pod" podUID="ba326117-1dcc-4468-80bb-a54b9cc83c01" pod="openshift-marketplace/community-operators-xm8jv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xm8jv\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:34 crc kubenswrapper[4754]: I0105 20:07:34.477158 4754 status_manager.go:851] "Failed to get status for pod" podUID="850cae84-31d6-43c8-a3b8-9ebb3e5b158a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:34 crc kubenswrapper[4754]: I0105 20:07:34.477834 4754 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:34 crc kubenswrapper[4754]: I0105 20:07:34.478487 4754 status_manager.go:851] "Failed to get status for pod" podUID="9653b33c-3e18-4d7f-81ed-febff4a00a35" pod="openshift-marketplace/certified-operators-4zc9h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4zc9h\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:34 crc kubenswrapper[4754]: I0105 20:07:34.479099 4754 status_manager.go:851] "Failed to get status for pod" podUID="20730210-a087-46e2-a311-ffa8a3bc370d" 
pod="openshift-marketplace/community-operators-qh2hz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qh2hz\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:34 crc kubenswrapper[4754]: I0105 20:07:34.479720 4754 status_manager.go:851] "Failed to get status for pod" podUID="31687145-8349-44b0-9e77-de73e4738916" pod="openshift-marketplace/redhat-operators-z52n8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-z52n8\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:34 crc kubenswrapper[4754]: I0105 20:07:34.480140 4754 status_manager.go:851] "Failed to get status for pod" podUID="08caaa2f-775e-45c6-b097-b0550f593ff3" pod="openshift-marketplace/certified-operators-pg5fc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-pg5fc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:34 crc kubenswrapper[4754]: I0105 20:07:34.480543 4754 status_manager.go:851] "Failed to get status for pod" podUID="e75e6f92-c2e3-4a65-be91-5921e2426aaf" pod="openshift-marketplace/redhat-marketplace-mq9f8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mq9f8\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:34 crc kubenswrapper[4754]: I0105 20:07:34.529217 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mq9f8" Jan 05 20:07:34 crc kubenswrapper[4754]: I0105 20:07:34.529638 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z52n8" Jan 05 20:07:34 crc kubenswrapper[4754]: I0105 20:07:34.529944 4754 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:34 crc kubenswrapper[4754]: I0105 20:07:34.530502 4754 status_manager.go:851] "Failed to get status for pod" podUID="9653b33c-3e18-4d7f-81ed-febff4a00a35" pod="openshift-marketplace/certified-operators-4zc9h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4zc9h\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:34 crc kubenswrapper[4754]: I0105 20:07:34.531063 4754 status_manager.go:851] "Failed to get status for pod" podUID="20730210-a087-46e2-a311-ffa8a3bc370d" pod="openshift-marketplace/community-operators-qh2hz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qh2hz\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:34 crc kubenswrapper[4754]: I0105 20:07:34.531786 4754 status_manager.go:851] "Failed to get status for pod" podUID="31687145-8349-44b0-9e77-de73e4738916" pod="openshift-marketplace/redhat-operators-z52n8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-z52n8\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:34 crc kubenswrapper[4754]: I0105 20:07:34.532137 4754 status_manager.go:851] "Failed to get status for pod" podUID="08caaa2f-775e-45c6-b097-b0550f593ff3" pod="openshift-marketplace/certified-operators-pg5fc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-pg5fc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:34 crc kubenswrapper[4754]: I0105 20:07:34.532555 4754 status_manager.go:851] "Failed to get status for pod" 
podUID="e75e6f92-c2e3-4a65-be91-5921e2426aaf" pod="openshift-marketplace/redhat-marketplace-mq9f8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mq9f8\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:34 crc kubenswrapper[4754]: I0105 20:07:34.533042 4754 status_manager.go:851] "Failed to get status for pod" podUID="ba326117-1dcc-4468-80bb-a54b9cc83c01" pod="openshift-marketplace/community-operators-xm8jv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xm8jv\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:34 crc kubenswrapper[4754]: I0105 20:07:34.533394 4754 status_manager.go:851] "Failed to get status for pod" podUID="e16df7cb-3531-4b4e-ad51-275d4ff495d0" pod="openshift-marketplace/redhat-marketplace-f6htm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-f6htm\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:34 crc kubenswrapper[4754]: I0105 20:07:34.533755 4754 status_manager.go:851] "Failed to get status for pod" podUID="850cae84-31d6-43c8-a3b8-9ebb3e5b158a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:34 crc kubenswrapper[4754]: I0105 20:07:34.534171 4754 status_manager.go:851] "Failed to get status for pod" podUID="ba326117-1dcc-4468-80bb-a54b9cc83c01" pod="openshift-marketplace/community-operators-xm8jv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xm8jv\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:34 crc kubenswrapper[4754]: I0105 20:07:34.534612 4754 status_manager.go:851] "Failed to get status for pod" 
podUID="e16df7cb-3531-4b4e-ad51-275d4ff495d0" pod="openshift-marketplace/redhat-marketplace-f6htm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-f6htm\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:34 crc kubenswrapper[4754]: I0105 20:07:34.534939 4754 status_manager.go:851] "Failed to get status for pod" podUID="850cae84-31d6-43c8-a3b8-9ebb3e5b158a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:34 crc kubenswrapper[4754]: I0105 20:07:34.535276 4754 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:34 crc kubenswrapper[4754]: I0105 20:07:34.535784 4754 status_manager.go:851] "Failed to get status for pod" podUID="9653b33c-3e18-4d7f-81ed-febff4a00a35" pod="openshift-marketplace/certified-operators-4zc9h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4zc9h\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:34 crc kubenswrapper[4754]: I0105 20:07:34.536286 4754 status_manager.go:851] "Failed to get status for pod" podUID="20730210-a087-46e2-a311-ffa8a3bc370d" pod="openshift-marketplace/community-operators-qh2hz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qh2hz\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:34 crc kubenswrapper[4754]: I0105 20:07:34.536743 4754 status_manager.go:851] "Failed to get status for pod" 
podUID="31687145-8349-44b0-9e77-de73e4738916" pod="openshift-marketplace/redhat-operators-z52n8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-z52n8\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:34 crc kubenswrapper[4754]: I0105 20:07:34.537086 4754 status_manager.go:851] "Failed to get status for pod" podUID="08caaa2f-775e-45c6-b097-b0550f593ff3" pod="openshift-marketplace/certified-operators-pg5fc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-pg5fc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:34 crc kubenswrapper[4754]: I0105 20:07:34.537484 4754 status_manager.go:851] "Failed to get status for pod" podUID="e75e6f92-c2e3-4a65-be91-5921e2426aaf" pod="openshift-marketplace/redhat-marketplace-mq9f8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mq9f8\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:35 crc kubenswrapper[4754]: E0105 20:07:35.608732 4754 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:35 crc kubenswrapper[4754]: E0105 20:07:35.609671 4754 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:35 crc kubenswrapper[4754]: E0105 20:07:35.610262 4754 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:35 crc kubenswrapper[4754]: E0105 
20:07:35.610689 4754 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:35 crc kubenswrapper[4754]: E0105 20:07:35.611106 4754 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:35 crc kubenswrapper[4754]: I0105 20:07:35.611345 4754 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 05 20:07:35 crc kubenswrapper[4754]: E0105 20:07:35.612020 4754 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="200ms" Jan 05 20:07:35 crc kubenswrapper[4754]: E0105 20:07:35.813442 4754 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="400ms" Jan 05 20:07:36 crc kubenswrapper[4754]: E0105 20:07:36.214555 4754 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="800ms" Jan 05 20:07:37 crc kubenswrapper[4754]: E0105 20:07:37.016334 4754 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 
38.102.83.201:6443: connect: connection refused" interval="1.6s" Jan 05 20:07:38 crc kubenswrapper[4754]: E0105 20:07:38.618267 4754 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="3.2s" Jan 05 20:07:39 crc kubenswrapper[4754]: E0105 20:07:39.019270 4754 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.201:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1887ee88c58d5a8c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-05 20:07:29.963547276 +0000 UTC m=+136.672731150,LastTimestamp:2026-01-05 20:07:29.963547276 +0000 UTC m=+136.672731150,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 05 20:07:39 crc kubenswrapper[4754]: E0105 20:07:39.099236 4754 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T20:07:39Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T20:07:39Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T20:07:39Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T20:07:39Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:aa84cbf6dc79a70b52ba45e02e7b7088c82ff7b1ecb7bb5450f90689046e1e38\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:b74fc97d6f275246717f2960ed13442b8e46b9ef406b2855482ecb7ac875bab0\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1655623012},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[],\\\"sizeBytes\\\":1231022428},{\\\"names\\\":[],\\\"sizeBytes\\\":1203987286},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:6b3b97e17390b5ee568393f2501a5fc412865074b8f6c5355ea48ab7c3983b7a\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:8bb7ea6c489e90cb357c7f50fe8266a6a6c6e23e493
1a5eaa0fd33a409db20e8\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1175127379},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f
372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\
\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e
1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d9813
3c\\\"],\\\"sizeBytes\\\":460774792}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:39 crc kubenswrapper[4754]: E0105 20:07:39.100333 4754 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:39 crc kubenswrapper[4754]: E0105 20:07:39.100906 4754 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:39 crc kubenswrapper[4754]: E0105 20:07:39.101529 4754 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:39 crc kubenswrapper[4754]: E0105 20:07:39.101984 4754 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:39 crc kubenswrapper[4754]: E0105 20:07:39.102036 4754 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 05 20:07:41 crc kubenswrapper[4754]: E0105 20:07:41.820989 4754 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="6.4s" Jan 05 20:07:42 crc kubenswrapper[4754]: I0105 20:07:42.588365 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 20:07:42 crc kubenswrapper[4754]: I0105 20:07:42.590470 4754 status_manager.go:851] "Failed to get status for pod" podUID="ba326117-1dcc-4468-80bb-a54b9cc83c01" pod="openshift-marketplace/community-operators-xm8jv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xm8jv\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:42 crc kubenswrapper[4754]: I0105 20:07:42.590991 4754 status_manager.go:851] "Failed to get status for pod" podUID="e16df7cb-3531-4b4e-ad51-275d4ff495d0" pod="openshift-marketplace/redhat-marketplace-f6htm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-f6htm\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:42 crc kubenswrapper[4754]: I0105 20:07:42.591751 4754 status_manager.go:851] "Failed to get status for pod" podUID="850cae84-31d6-43c8-a3b8-9ebb3e5b158a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:42 crc kubenswrapper[4754]: I0105 20:07:42.592319 4754 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 
38.102.83.201:6443: connect: connection refused" Jan 05 20:07:42 crc kubenswrapper[4754]: I0105 20:07:42.592846 4754 status_manager.go:851] "Failed to get status for pod" podUID="9653b33c-3e18-4d7f-81ed-febff4a00a35" pod="openshift-marketplace/certified-operators-4zc9h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4zc9h\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:42 crc kubenswrapper[4754]: I0105 20:07:42.593280 4754 status_manager.go:851] "Failed to get status for pod" podUID="20730210-a087-46e2-a311-ffa8a3bc370d" pod="openshift-marketplace/community-operators-qh2hz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qh2hz\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:42 crc kubenswrapper[4754]: I0105 20:07:42.593862 4754 status_manager.go:851] "Failed to get status for pod" podUID="31687145-8349-44b0-9e77-de73e4738916" pod="openshift-marketplace/redhat-operators-z52n8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-z52n8\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:42 crc kubenswrapper[4754]: I0105 20:07:42.594167 4754 status_manager.go:851] "Failed to get status for pod" podUID="08caaa2f-775e-45c6-b097-b0550f593ff3" pod="openshift-marketplace/certified-operators-pg5fc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-pg5fc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:42 crc kubenswrapper[4754]: I0105 20:07:42.595262 4754 status_manager.go:851] "Failed to get status for pod" podUID="e75e6f92-c2e3-4a65-be91-5921e2426aaf" pod="openshift-marketplace/redhat-marketplace-mq9f8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mq9f8\": dial tcp 
38.102.83.201:6443: connect: connection refused" Jan 05 20:07:42 crc kubenswrapper[4754]: I0105 20:07:42.610818 4754 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a69bbae-22d1-4837-a7d0-d1f6ee5f8659" Jan 05 20:07:42 crc kubenswrapper[4754]: I0105 20:07:42.610879 4754 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a69bbae-22d1-4837-a7d0-d1f6ee5f8659" Jan 05 20:07:42 crc kubenswrapper[4754]: E0105 20:07:42.611710 4754 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 20:07:42 crc kubenswrapper[4754]: I0105 20:07:42.612564 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 20:07:43 crc kubenswrapper[4754]: I0105 20:07:43.515422 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"cc81d276d8f0f1a338d8b9542a5e6bcc2149bdf7783d14f73e8d0960102c4e26"} Jan 05 20:07:43 crc kubenswrapper[4754]: I0105 20:07:43.593866 4754 status_manager.go:851] "Failed to get status for pod" podUID="ba326117-1dcc-4468-80bb-a54b9cc83c01" pod="openshift-marketplace/community-operators-xm8jv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xm8jv\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:43 crc kubenswrapper[4754]: I0105 20:07:43.594380 4754 status_manager.go:851] "Failed to get status for pod" podUID="e16df7cb-3531-4b4e-ad51-275d4ff495d0" pod="openshift-marketplace/redhat-marketplace-f6htm" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-f6htm\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:43 crc kubenswrapper[4754]: I0105 20:07:43.594959 4754 status_manager.go:851] "Failed to get status for pod" podUID="850cae84-31d6-43c8-a3b8-9ebb3e5b158a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:43 crc kubenswrapper[4754]: I0105 20:07:43.595228 4754 status_manager.go:851] "Failed to get status for pod" podUID="9653b33c-3e18-4d7f-81ed-febff4a00a35" pod="openshift-marketplace/certified-operators-4zc9h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4zc9h\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:43 crc kubenswrapper[4754]: I0105 20:07:43.595591 4754 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:43 crc kubenswrapper[4754]: I0105 20:07:43.595937 4754 status_manager.go:851] "Failed to get status for pod" podUID="20730210-a087-46e2-a311-ffa8a3bc370d" pod="openshift-marketplace/community-operators-qh2hz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qh2hz\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:43 crc kubenswrapper[4754]: I0105 20:07:43.596242 4754 status_manager.go:851] "Failed to get status for pod" podUID="31687145-8349-44b0-9e77-de73e4738916" pod="openshift-marketplace/redhat-operators-z52n8" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-z52n8\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:43 crc kubenswrapper[4754]: I0105 20:07:43.596639 4754 status_manager.go:851] "Failed to get status for pod" podUID="08caaa2f-775e-45c6-b097-b0550f593ff3" pod="openshift-marketplace/certified-operators-pg5fc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-pg5fc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:43 crc kubenswrapper[4754]: I0105 20:07:43.596969 4754 status_manager.go:851] "Failed to get status for pod" podUID="e75e6f92-c2e3-4a65-be91-5921e2426aaf" pod="openshift-marketplace/redhat-marketplace-mq9f8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mq9f8\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:43 crc kubenswrapper[4754]: I0105 20:07:43.597191 4754 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:45 crc kubenswrapper[4754]: I0105 20:07:45.533353 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 05 20:07:45 crc kubenswrapper[4754]: I0105 20:07:45.533770 4754 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="6c242d9981c62ac494864a1509489965653e71656102883010a94a74b34d0360" exitCode=1 Jan 05 20:07:45 crc kubenswrapper[4754]: I0105 20:07:45.533870 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"6c242d9981c62ac494864a1509489965653e71656102883010a94a74b34d0360"} Jan 05 20:07:45 crc kubenswrapper[4754]: I0105 20:07:45.534685 4754 scope.go:117] "RemoveContainer" containerID="6c242d9981c62ac494864a1509489965653e71656102883010a94a74b34d0360" Jan 05 20:07:45 crc kubenswrapper[4754]: I0105 20:07:45.535083 4754 status_manager.go:851] "Failed to get status for pod" podUID="20730210-a087-46e2-a311-ffa8a3bc370d" pod="openshift-marketplace/community-operators-qh2hz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qh2hz\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:45 crc kubenswrapper[4754]: I0105 20:07:45.536052 4754 status_manager.go:851] "Failed to get status for pod" podUID="31687145-8349-44b0-9e77-de73e4738916" pod="openshift-marketplace/redhat-operators-z52n8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-z52n8\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:45 crc kubenswrapper[4754]: I0105 20:07:45.536689 4754 status_manager.go:851] "Failed to get status for pod" podUID="08caaa2f-775e-45c6-b097-b0550f593ff3" pod="openshift-marketplace/certified-operators-pg5fc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-pg5fc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:45 crc kubenswrapper[4754]: I0105 20:07:45.536893 4754 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="a1e15710c3972443869c79b6d7115bd6a293955df2b2cd7810b9a1bdeb3d02e9" exitCode=0 Jan 05 20:07:45 crc kubenswrapper[4754]: I0105 20:07:45.536977 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"a1e15710c3972443869c79b6d7115bd6a293955df2b2cd7810b9a1bdeb3d02e9"} Jan 05 20:07:45 crc kubenswrapper[4754]: I0105 20:07:45.537108 4754 status_manager.go:851] "Failed to get status for pod" podUID="e75e6f92-c2e3-4a65-be91-5921e2426aaf" pod="openshift-marketplace/redhat-marketplace-mq9f8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mq9f8\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:45 crc kubenswrapper[4754]: I0105 20:07:45.537422 4754 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a69bbae-22d1-4837-a7d0-d1f6ee5f8659" Jan 05 20:07:45 crc kubenswrapper[4754]: I0105 20:07:45.537448 4754 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a69bbae-22d1-4837-a7d0-d1f6ee5f8659" Jan 05 20:07:45 crc kubenswrapper[4754]: I0105 20:07:45.537555 4754 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:45 crc kubenswrapper[4754]: I0105 20:07:45.537911 4754 status_manager.go:851] "Failed to get status for pod" podUID="ba326117-1dcc-4468-80bb-a54b9cc83c01" pod="openshift-marketplace/community-operators-xm8jv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xm8jv\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:45 crc kubenswrapper[4754]: E0105 20:07:45.538002 4754 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 
38.102.83.201:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 20:07:45 crc kubenswrapper[4754]: I0105 20:07:45.539384 4754 status_manager.go:851] "Failed to get status for pod" podUID="e16df7cb-3531-4b4e-ad51-275d4ff495d0" pod="openshift-marketplace/redhat-marketplace-f6htm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-f6htm\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:45 crc kubenswrapper[4754]: I0105 20:07:45.540090 4754 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:45 crc kubenswrapper[4754]: I0105 20:07:45.540987 4754 status_manager.go:851] "Failed to get status for pod" podUID="850cae84-31d6-43c8-a3b8-9ebb3e5b158a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:45 crc kubenswrapper[4754]: I0105 20:07:45.541832 4754 status_manager.go:851] "Failed to get status for pod" podUID="9653b33c-3e18-4d7f-81ed-febff4a00a35" pod="openshift-marketplace/certified-operators-4zc9h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4zc9h\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:45 crc kubenswrapper[4754]: I0105 20:07:45.542396 4754 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:45 crc kubenswrapper[4754]: I0105 20:07:45.543142 4754 status_manager.go:851] "Failed to get status for pod" podUID="20730210-a087-46e2-a311-ffa8a3bc370d" pod="openshift-marketplace/community-operators-qh2hz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qh2hz\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:45 crc kubenswrapper[4754]: I0105 20:07:45.543803 4754 status_manager.go:851] "Failed to get status for pod" podUID="31687145-8349-44b0-9e77-de73e4738916" pod="openshift-marketplace/redhat-operators-z52n8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-z52n8\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:45 crc kubenswrapper[4754]: I0105 20:07:45.544169 4754 status_manager.go:851] "Failed to get status for pod" podUID="08caaa2f-775e-45c6-b097-b0550f593ff3" pod="openshift-marketplace/certified-operators-pg5fc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-pg5fc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:45 crc kubenswrapper[4754]: I0105 20:07:45.544708 4754 status_manager.go:851] "Failed to get status for pod" podUID="e75e6f92-c2e3-4a65-be91-5921e2426aaf" pod="openshift-marketplace/redhat-marketplace-mq9f8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mq9f8\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:45 crc kubenswrapper[4754]: I0105 20:07:45.545280 4754 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:45 crc kubenswrapper[4754]: I0105 20:07:45.545822 4754 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:45 crc kubenswrapper[4754]: I0105 20:07:45.546533 4754 status_manager.go:851] "Failed to get status for pod" podUID="ba326117-1dcc-4468-80bb-a54b9cc83c01" pod="openshift-marketplace/community-operators-xm8jv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xm8jv\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:45 crc kubenswrapper[4754]: I0105 20:07:45.546957 4754 status_manager.go:851] "Failed to get status for pod" podUID="e16df7cb-3531-4b4e-ad51-275d4ff495d0" pod="openshift-marketplace/redhat-marketplace-f6htm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-f6htm\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:45 crc kubenswrapper[4754]: I0105 20:07:45.547397 4754 status_manager.go:851] "Failed to get status for pod" podUID="850cae84-31d6-43c8-a3b8-9ebb3e5b158a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:45 crc kubenswrapper[4754]: I0105 20:07:45.547820 4754 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:45 crc kubenswrapper[4754]: I0105 20:07:45.548222 4754 status_manager.go:851] "Failed to get status for pod" podUID="9653b33c-3e18-4d7f-81ed-febff4a00a35" pod="openshift-marketplace/certified-operators-4zc9h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4zc9h\": dial tcp 38.102.83.201:6443: connect: connection refused" Jan 05 20:07:45 crc kubenswrapper[4754]: I0105 20:07:45.582837 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 20:07:46 crc kubenswrapper[4754]: I0105 20:07:46.544893 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 05 20:07:46 crc kubenswrapper[4754]: I0105 20:07:46.545015 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"16c003911fd82847b695b228c89924fa27dc474cc73f3e4467a06b6f94d59934"} Jan 05 20:07:46 crc kubenswrapper[4754]: I0105 20:07:46.547956 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3a96885994317ee903e2cd6e3b6a28e495903186ac353f3005e9f5b46a9400cc"} Jan 05 20:07:46 crc kubenswrapper[4754]: I0105 20:07:46.547991 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9a1f832d86bd41be45b6d8f74eeb7456a9f354a284ea025688b581f95125243d"} Jan 05 20:07:46 crc kubenswrapper[4754]: I0105 20:07:46.548000 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"abff2504658e727b22f1bd817bc128e5bbc5ef9301cfe14c76b5e8637dbf1504"} Jan 05 20:07:47 crc kubenswrapper[4754]: I0105 20:07:47.286448 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 20:07:47 crc kubenswrapper[4754]: I0105 20:07:47.556267 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e7947f76a738dc72f8352ad318b8fb8518cdf71f51002485483f0af79bee6a81"} Jan 05 20:07:47 crc kubenswrapper[4754]: I0105 20:07:47.556610 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c943e7e9d4079c171d07e5462d174256fa511cfaa803fffc9b09b2eaa8f0ef67"} Jan 05 20:07:47 crc kubenswrapper[4754]: I0105 20:07:47.556620 4754 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a69bbae-22d1-4837-a7d0-d1f6ee5f8659" Jan 05 20:07:47 crc kubenswrapper[4754]: I0105 20:07:47.556638 4754 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a69bbae-22d1-4837-a7d0-d1f6ee5f8659" Jan 05 20:07:47 crc kubenswrapper[4754]: I0105 20:07:47.612752 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 20:07:47 crc kubenswrapper[4754]: I0105 20:07:47.612797 4754 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 20:07:47 crc kubenswrapper[4754]: I0105 20:07:47.618661 4754 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 05 20:07:47 crc kubenswrapper[4754]: [+]log ok Jan 05 20:07:47 crc kubenswrapper[4754]: [+]etcd ok Jan 05 20:07:47 crc kubenswrapper[4754]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Jan 05 20:07:47 crc kubenswrapper[4754]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Jan 05 20:07:47 crc kubenswrapper[4754]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 05 20:07:47 crc kubenswrapper[4754]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 05 20:07:47 crc kubenswrapper[4754]: [+]poststarthook/openshift.io-api-request-count-filter ok Jan 05 20:07:47 crc kubenswrapper[4754]: [+]poststarthook/openshift.io-startkubeinformers ok Jan 05 20:07:47 crc kubenswrapper[4754]: [+]poststarthook/generic-apiserver-start-informers ok Jan 05 20:07:47 crc kubenswrapper[4754]: [+]poststarthook/priority-and-fairness-config-consumer ok Jan 05 20:07:47 crc kubenswrapper[4754]: [+]poststarthook/priority-and-fairness-filter ok Jan 05 20:07:47 crc kubenswrapper[4754]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 05 20:07:47 crc kubenswrapper[4754]: [+]poststarthook/start-apiextensions-informers ok Jan 05 20:07:47 crc kubenswrapper[4754]: [+]poststarthook/start-apiextensions-controllers ok Jan 05 20:07:47 crc kubenswrapper[4754]: [+]poststarthook/crd-informer-synced ok Jan 05 20:07:47 crc kubenswrapper[4754]: [+]poststarthook/start-system-namespaces-controller ok Jan 05 20:07:47 crc kubenswrapper[4754]: [+]poststarthook/start-cluster-authentication-info-controller ok Jan 05 20:07:47 crc kubenswrapper[4754]: 
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok Jan 05 20:07:47 crc kubenswrapper[4754]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Jan 05 20:07:47 crc kubenswrapper[4754]: [+]poststarthook/start-legacy-token-tracking-controller ok Jan 05 20:07:47 crc kubenswrapper[4754]: [+]poststarthook/start-service-ip-repair-controllers ok Jan 05 20:07:47 crc kubenswrapper[4754]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Jan 05 20:07:47 crc kubenswrapper[4754]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Jan 05 20:07:47 crc kubenswrapper[4754]: [+]poststarthook/priority-and-fairness-config-producer ok Jan 05 20:07:47 crc kubenswrapper[4754]: [+]poststarthook/bootstrap-controller ok Jan 05 20:07:47 crc kubenswrapper[4754]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Jan 05 20:07:47 crc kubenswrapper[4754]: [+]poststarthook/start-kube-aggregator-informers ok Jan 05 20:07:47 crc kubenswrapper[4754]: [+]poststarthook/apiservice-status-local-available-controller ok Jan 05 20:07:47 crc kubenswrapper[4754]: [+]poststarthook/apiservice-status-remote-available-controller ok Jan 05 20:07:47 crc kubenswrapper[4754]: [+]poststarthook/apiservice-registration-controller ok Jan 05 20:07:47 crc kubenswrapper[4754]: [+]poststarthook/apiservice-wait-for-first-sync ok Jan 05 20:07:47 crc kubenswrapper[4754]: [+]poststarthook/apiservice-discovery-controller ok Jan 05 20:07:47 crc kubenswrapper[4754]: [+]poststarthook/kube-apiserver-autoregistration ok Jan 05 20:07:47 crc kubenswrapper[4754]: [+]autoregister-completion ok Jan 05 20:07:47 crc kubenswrapper[4754]: [+]poststarthook/apiservice-openapi-controller ok Jan 05 20:07:47 crc kubenswrapper[4754]: [+]poststarthook/apiservice-openapiv3-controller ok Jan 05 20:07:47 crc kubenswrapper[4754]: livez check failed Jan 05 20:07:47 crc kubenswrapper[4754]: I0105 20:07:47.618708 4754 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 20:07:48 crc kubenswrapper[4754]: I0105 20:07:48.109867 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 20:07:48 crc kubenswrapper[4754]: I0105 20:07:48.109950 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 20:07:52 crc kubenswrapper[4754]: I0105 20:07:52.567487 4754 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 20:07:52 crc kubenswrapper[4754]: I0105 20:07:52.588934 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 20:07:52 crc kubenswrapper[4754]: I0105 20:07:52.588974 4754 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a69bbae-22d1-4837-a7d0-d1f6ee5f8659" Jan 05 20:07:52 crc kubenswrapper[4754]: I0105 20:07:52.589392 4754 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a69bbae-22d1-4837-a7d0-d1f6ee5f8659" Jan 05 20:07:52 crc kubenswrapper[4754]: I0105 20:07:52.620393 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 20:07:52 crc kubenswrapper[4754]: I0105 20:07:52.623138 4754 status_manager.go:861] "Pod 
was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="5827a2a2-db60-40b4-9fb6-748710118bc9" Jan 05 20:07:53 crc kubenswrapper[4754]: I0105 20:07:53.559469 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 20:07:53 crc kubenswrapper[4754]: I0105 20:07:53.565794 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 20:07:53 crc kubenswrapper[4754]: I0105 20:07:53.608567 4754 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a69bbae-22d1-4837-a7d0-d1f6ee5f8659" Jan 05 20:07:53 crc kubenswrapper[4754]: I0105 20:07:53.608612 4754 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a69bbae-22d1-4837-a7d0-d1f6ee5f8659" Jan 05 20:07:54 crc kubenswrapper[4754]: I0105 20:07:54.603509 4754 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a69bbae-22d1-4837-a7d0-d1f6ee5f8659" Jan 05 20:07:54 crc kubenswrapper[4754]: I0105 20:07:54.603551 4754 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a69bbae-22d1-4837-a7d0-d1f6ee5f8659" Jan 05 20:07:54 crc kubenswrapper[4754]: I0105 20:07:54.609947 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 20:07:55 crc kubenswrapper[4754]: I0105 20:07:55.607727 4754 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a69bbae-22d1-4837-a7d0-d1f6ee5f8659" Jan 05 20:07:55 crc kubenswrapper[4754]: I0105 20:07:55.608017 4754 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="3a69bbae-22d1-4837-a7d0-d1f6ee5f8659" Jan 05 20:07:57 crc kubenswrapper[4754]: I0105 20:07:57.292966 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 20:08:01 crc kubenswrapper[4754]: I0105 20:08:01.759706 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 05 20:08:01 crc kubenswrapper[4754]: I0105 20:08:01.834972 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 05 20:08:02 crc kubenswrapper[4754]: I0105 20:08:02.156869 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 05 20:08:02 crc kubenswrapper[4754]: I0105 20:08:02.285711 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 05 20:08:02 crc kubenswrapper[4754]: I0105 20:08:02.355322 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 05 20:08:02 crc kubenswrapper[4754]: I0105 20:08:02.619183 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 20:08:02 crc kubenswrapper[4754]: I0105 20:08:02.620099 4754 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a69bbae-22d1-4837-a7d0-d1f6ee5f8659" Jan 05 20:08:02 crc kubenswrapper[4754]: I0105 20:08:02.620222 4754 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a69bbae-22d1-4837-a7d0-d1f6ee5f8659" Jan 05 20:08:02 crc kubenswrapper[4754]: I0105 20:08:02.627388 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 05 20:08:02 crc 
kubenswrapper[4754]: I0105 20:08:02.638508 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 05 20:08:02 crc kubenswrapper[4754]: I0105 20:08:02.995052 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 05 20:08:03 crc kubenswrapper[4754]: I0105 20:08:03.143400 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 05 20:08:03 crc kubenswrapper[4754]: I0105 20:08:03.392666 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 05 20:08:03 crc kubenswrapper[4754]: I0105 20:08:03.603751 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 05 20:08:03 crc kubenswrapper[4754]: I0105 20:08:03.631009 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 05 20:08:03 crc kubenswrapper[4754]: I0105 20:08:03.638940 4754 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="5827a2a2-db60-40b4-9fb6-748710118bc9" Jan 05 20:08:03 crc kubenswrapper[4754]: I0105 20:08:03.726695 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 05 20:08:03 crc kubenswrapper[4754]: I0105 20:08:03.929998 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 05 20:08:04 crc kubenswrapper[4754]: I0105 20:08:04.083741 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 05 20:08:04 crc kubenswrapper[4754]: 
I0105 20:08:04.086037 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 05 20:08:04 crc kubenswrapper[4754]: I0105 20:08:04.107684 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 05 20:08:04 crc kubenswrapper[4754]: I0105 20:08:04.341978 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 05 20:08:04 crc kubenswrapper[4754]: I0105 20:08:04.493496 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 05 20:08:04 crc kubenswrapper[4754]: I0105 20:08:04.722653 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 05 20:08:04 crc kubenswrapper[4754]: I0105 20:08:04.751793 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 05 20:08:04 crc kubenswrapper[4754]: I0105 20:08:04.768891 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 05 20:08:04 crc kubenswrapper[4754]: I0105 20:08:04.973746 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 05 20:08:05 crc kubenswrapper[4754]: I0105 20:08:05.024711 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 05 20:08:05 crc kubenswrapper[4754]: I0105 20:08:05.034609 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 05 20:08:05 crc kubenswrapper[4754]: I0105 20:08:05.172759 4754 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 05 20:08:05 crc kubenswrapper[4754]: I0105 20:08:05.286836 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 05 20:08:05 crc kubenswrapper[4754]: I0105 20:08:05.376671 4754 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 05 20:08:05 crc kubenswrapper[4754]: I0105 20:08:05.824405 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 05 20:08:05 crc kubenswrapper[4754]: I0105 20:08:05.883286 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 05 20:08:06 crc kubenswrapper[4754]: I0105 20:08:06.021460 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 05 20:08:06 crc kubenswrapper[4754]: I0105 20:08:06.021568 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 05 20:08:06 crc kubenswrapper[4754]: I0105 20:08:06.027201 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 05 20:08:06 crc kubenswrapper[4754]: I0105 20:08:06.195897 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 05 20:08:06 crc kubenswrapper[4754]: I0105 20:08:06.214132 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 05 20:08:06 crc kubenswrapper[4754]: I0105 20:08:06.277769 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 05 20:08:06 crc kubenswrapper[4754]: I0105 20:08:06.280248 4754 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 05 20:08:06 crc kubenswrapper[4754]: I0105 20:08:06.307765 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 05 20:08:06 crc kubenswrapper[4754]: I0105 20:08:06.392489 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 05 20:08:06 crc kubenswrapper[4754]: I0105 20:08:06.466431 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 05 20:08:06 crc kubenswrapper[4754]: I0105 20:08:06.494752 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 05 20:08:06 crc kubenswrapper[4754]: I0105 20:08:06.657947 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 05 20:08:06 crc kubenswrapper[4754]: I0105 20:08:06.781989 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 05 20:08:07 crc kubenswrapper[4754]: I0105 20:08:07.019168 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 05 20:08:07 crc kubenswrapper[4754]: I0105 20:08:07.141274 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 05 20:08:07 crc kubenswrapper[4754]: I0105 20:08:07.312926 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 05 20:08:07 crc kubenswrapper[4754]: I0105 20:08:07.313142 4754 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 05 20:08:07 crc kubenswrapper[4754]: I0105 20:08:07.409108 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 05 20:08:07 crc kubenswrapper[4754]: I0105 20:08:07.432641 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 05 20:08:07 crc kubenswrapper[4754]: I0105 20:08:07.602571 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 05 20:08:07 crc kubenswrapper[4754]: I0105 20:08:07.777434 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 05 20:08:07 crc kubenswrapper[4754]: I0105 20:08:07.778286 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 05 20:08:07 crc kubenswrapper[4754]: I0105 20:08:07.806536 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 05 20:08:07 crc kubenswrapper[4754]: I0105 20:08:07.836607 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 05 20:08:07 crc kubenswrapper[4754]: I0105 20:08:07.843657 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 05 20:08:07 crc kubenswrapper[4754]: I0105 20:08:07.867205 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 05 20:08:07 crc kubenswrapper[4754]: I0105 20:08:07.869927 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 05 20:08:07 crc kubenswrapper[4754]: 
I0105 20:08:07.880400 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 05 20:08:07 crc kubenswrapper[4754]: I0105 20:08:07.921087 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 05 20:08:07 crc kubenswrapper[4754]: I0105 20:08:07.963190 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 05 20:08:08 crc kubenswrapper[4754]: I0105 20:08:08.019058 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 05 20:08:08 crc kubenswrapper[4754]: I0105 20:08:08.056553 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 05 20:08:08 crc kubenswrapper[4754]: I0105 20:08:08.106930 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 05 20:08:08 crc kubenswrapper[4754]: I0105 20:08:08.111387 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 05 20:08:08 crc kubenswrapper[4754]: I0105 20:08:08.141387 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 05 20:08:08 crc kubenswrapper[4754]: I0105 20:08:08.141546 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 05 20:08:08 crc kubenswrapper[4754]: I0105 20:08:08.149018 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 05 20:08:08 crc kubenswrapper[4754]: I0105 20:08:08.164105 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 05 20:08:08 crc kubenswrapper[4754]: 
I0105 20:08:08.449115 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 05 20:08:08 crc kubenswrapper[4754]: I0105 20:08:08.464882 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 05 20:08:08 crc kubenswrapper[4754]: I0105 20:08:08.604187 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 05 20:08:08 crc kubenswrapper[4754]: I0105 20:08:08.643232 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 05 20:08:08 crc kubenswrapper[4754]: I0105 20:08:08.686477 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 05 20:08:08 crc kubenswrapper[4754]: I0105 20:08:08.723313 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 05 20:08:08 crc kubenswrapper[4754]: I0105 20:08:08.776435 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 05 20:08:08 crc kubenswrapper[4754]: I0105 20:08:08.799255 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 05 20:08:08 crc kubenswrapper[4754]: I0105 20:08:08.885427 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 05 20:08:08 crc kubenswrapper[4754]: I0105 20:08:08.886706 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 05 20:08:08 crc kubenswrapper[4754]: I0105 20:08:08.904763 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 
05 20:08:08 crc kubenswrapper[4754]: I0105 20:08:08.926214 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 05 20:08:09 crc kubenswrapper[4754]: I0105 20:08:09.057905 4754 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 05 20:08:09 crc kubenswrapper[4754]: I0105 20:08:09.204410 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 05 20:08:09 crc kubenswrapper[4754]: I0105 20:08:09.301522 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 05 20:08:09 crc kubenswrapper[4754]: I0105 20:08:09.301651 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 05 20:08:09 crc kubenswrapper[4754]: I0105 20:08:09.373607 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 05 20:08:09 crc kubenswrapper[4754]: I0105 20:08:09.397926 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 05 20:08:09 crc kubenswrapper[4754]: I0105 20:08:09.464061 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 05 20:08:09 crc kubenswrapper[4754]: I0105 20:08:09.488323 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 05 20:08:09 crc kubenswrapper[4754]: I0105 20:08:09.493985 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 05 20:08:09 crc kubenswrapper[4754]: I0105 20:08:09.511606 4754 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 05 20:08:09 crc kubenswrapper[4754]: I0105 20:08:09.522148 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 05 20:08:09 crc kubenswrapper[4754]: I0105 20:08:09.565567 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 05 20:08:09 crc kubenswrapper[4754]: I0105 20:08:09.620785 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 05 20:08:09 crc kubenswrapper[4754]: I0105 20:08:09.692584 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 05 20:08:09 crc kubenswrapper[4754]: I0105 20:08:09.707865 4754 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 05 20:08:09 crc kubenswrapper[4754]: I0105 20:08:09.766771 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 05 20:08:09 crc kubenswrapper[4754]: I0105 20:08:09.846380 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 05 20:08:09 crc kubenswrapper[4754]: I0105 20:08:09.924631 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 05 20:08:09 crc kubenswrapper[4754]: I0105 20:08:09.972632 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 05 20:08:10 crc kubenswrapper[4754]: I0105 20:08:10.004095 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 05 20:08:10 crc kubenswrapper[4754]: I0105 
20:08:10.044766 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 05 20:08:10 crc kubenswrapper[4754]: I0105 20:08:10.074959 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 05 20:08:10 crc kubenswrapper[4754]: I0105 20:08:10.103573 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 05 20:08:10 crc kubenswrapper[4754]: I0105 20:08:10.120619 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 05 20:08:10 crc kubenswrapper[4754]: I0105 20:08:10.136784 4754 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 05 20:08:10 crc kubenswrapper[4754]: I0105 20:08:10.142881 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=41.142855299 podStartE2EDuration="41.142855299s" podCreationTimestamp="2026-01-05 20:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:07:52.446907841 +0000 UTC m=+159.156091715" watchObservedRunningTime="2026-01-05 20:08:10.142855299 +0000 UTC m=+176.852039183" Jan 05 20:08:10 crc kubenswrapper[4754]: I0105 20:08:10.144159 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 05 20:08:10 crc kubenswrapper[4754]: I0105 20:08:10.144214 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 05 20:08:10 crc kubenswrapper[4754]: I0105 20:08:10.169010 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=18.168976178 
podStartE2EDuration="18.168976178s" podCreationTimestamp="2026-01-05 20:07:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:08:10.168426413 +0000 UTC m=+176.877610317" watchObservedRunningTime="2026-01-05 20:08:10.168976178 +0000 UTC m=+176.878160102" Jan 05 20:08:10 crc kubenswrapper[4754]: I0105 20:08:10.199722 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 05 20:08:10 crc kubenswrapper[4754]: I0105 20:08:10.237709 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 05 20:08:10 crc kubenswrapper[4754]: I0105 20:08:10.302902 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 05 20:08:10 crc kubenswrapper[4754]: I0105 20:08:10.341178 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 05 20:08:10 crc kubenswrapper[4754]: I0105 20:08:10.354546 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 05 20:08:10 crc kubenswrapper[4754]: I0105 20:08:10.434497 4754 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 05 20:08:10 crc kubenswrapper[4754]: I0105 20:08:10.459182 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 05 20:08:10 crc kubenswrapper[4754]: I0105 20:08:10.470333 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 05 20:08:10 crc kubenswrapper[4754]: I0105 20:08:10.674354 4754 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 05 20:08:10 crc kubenswrapper[4754]: I0105 20:08:10.676048 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 05 20:08:10 crc kubenswrapper[4754]: I0105 20:08:10.699796 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 05 20:08:10 crc kubenswrapper[4754]: I0105 20:08:10.835022 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 05 20:08:10 crc kubenswrapper[4754]: I0105 20:08:10.909201 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 05 20:08:11 crc kubenswrapper[4754]: I0105 20:08:11.006735 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 05 20:08:11 crc kubenswrapper[4754]: I0105 20:08:11.048993 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 05 20:08:11 crc kubenswrapper[4754]: I0105 20:08:11.089156 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 05 20:08:11 crc kubenswrapper[4754]: I0105 20:08:11.143202 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 05 20:08:11 crc kubenswrapper[4754]: I0105 20:08:11.170463 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 05 20:08:11 crc kubenswrapper[4754]: I0105 20:08:11.178738 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 05 20:08:11 crc kubenswrapper[4754]: I0105 20:08:11.193149 4754 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 05 20:08:11 crc kubenswrapper[4754]: I0105 20:08:11.251271 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 05 20:08:11 crc kubenswrapper[4754]: I0105 20:08:11.351664 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 05 20:08:11 crc kubenswrapper[4754]: I0105 20:08:11.439583 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 05 20:08:11 crc kubenswrapper[4754]: I0105 20:08:11.569349 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 05 20:08:11 crc kubenswrapper[4754]: I0105 20:08:11.644520 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 05 20:08:11 crc kubenswrapper[4754]: I0105 20:08:11.657079 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 05 20:08:11 crc kubenswrapper[4754]: I0105 20:08:11.669322 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 05 20:08:11 crc kubenswrapper[4754]: I0105 20:08:11.943611 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 05 20:08:11 crc kubenswrapper[4754]: I0105 20:08:11.954734 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 05 20:08:12 crc kubenswrapper[4754]: I0105 20:08:12.024442 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 05 20:08:12 crc 
kubenswrapper[4754]: I0105 20:08:12.042636 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 05 20:08:12 crc kubenswrapper[4754]: I0105 20:08:12.063071 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 05 20:08:12 crc kubenswrapper[4754]: I0105 20:08:12.063468 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 05 20:08:12 crc kubenswrapper[4754]: I0105 20:08:12.074187 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 05 20:08:12 crc kubenswrapper[4754]: I0105 20:08:12.180724 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 05 20:08:12 crc kubenswrapper[4754]: I0105 20:08:12.247852 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 05 20:08:12 crc kubenswrapper[4754]: I0105 20:08:12.266142 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 05 20:08:12 crc kubenswrapper[4754]: I0105 20:08:12.272281 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 05 20:08:12 crc kubenswrapper[4754]: I0105 20:08:12.274260 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 05 20:08:12 crc kubenswrapper[4754]: I0105 20:08:12.289575 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 05 20:08:12 crc kubenswrapper[4754]: I0105 20:08:12.301279 4754 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 05 20:08:12 crc kubenswrapper[4754]: I0105 20:08:12.306807 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 05 20:08:12 crc kubenswrapper[4754]: I0105 20:08:12.308793 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 05 20:08:12 crc kubenswrapper[4754]: I0105 20:08:12.321938 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 05 20:08:12 crc kubenswrapper[4754]: I0105 20:08:12.375128 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 05 20:08:12 crc kubenswrapper[4754]: I0105 20:08:12.500188 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 05 20:08:12 crc kubenswrapper[4754]: I0105 20:08:12.553334 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 05 20:08:12 crc kubenswrapper[4754]: I0105 20:08:12.680090 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 05 20:08:12 crc kubenswrapper[4754]: I0105 20:08:12.705568 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 05 20:08:12 crc kubenswrapper[4754]: I0105 20:08:12.709170 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 05 20:08:12 crc kubenswrapper[4754]: I0105 20:08:12.717535 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 05 20:08:12 crc kubenswrapper[4754]: I0105 
20:08:12.742251 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 05 20:08:12 crc kubenswrapper[4754]: I0105 20:08:12.747859 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 05 20:08:12 crc kubenswrapper[4754]: I0105 20:08:12.880122 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 05 20:08:12 crc kubenswrapper[4754]: I0105 20:08:12.951128 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 05 20:08:13 crc kubenswrapper[4754]: I0105 20:08:13.092235 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 05 20:08:13 crc kubenswrapper[4754]: I0105 20:08:13.104713 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 05 20:08:13 crc kubenswrapper[4754]: I0105 20:08:13.171345 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 05 20:08:13 crc kubenswrapper[4754]: I0105 20:08:13.197171 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 05 20:08:13 crc kubenswrapper[4754]: I0105 20:08:13.208863 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 05 20:08:13 crc kubenswrapper[4754]: I0105 20:08:13.212302 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 05 20:08:13 crc kubenswrapper[4754]: I0105 20:08:13.214422 4754 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console"/"trusted-ca-bundle" Jan 05 20:08:13 crc kubenswrapper[4754]: I0105 20:08:13.220266 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 05 20:08:13 crc kubenswrapper[4754]: I0105 20:08:13.349158 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 05 20:08:13 crc kubenswrapper[4754]: I0105 20:08:13.375655 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 05 20:08:13 crc kubenswrapper[4754]: I0105 20:08:13.503154 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 05 20:08:13 crc kubenswrapper[4754]: I0105 20:08:13.550477 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 05 20:08:13 crc kubenswrapper[4754]: I0105 20:08:13.608519 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 05 20:08:13 crc kubenswrapper[4754]: I0105 20:08:13.629118 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 05 20:08:13 crc kubenswrapper[4754]: I0105 20:08:13.635340 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 05 20:08:13 crc kubenswrapper[4754]: I0105 20:08:13.660787 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 05 20:08:13 crc kubenswrapper[4754]: I0105 20:08:13.670626 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 05 20:08:13 crc kubenswrapper[4754]: I0105 20:08:13.785811 4754 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 05 20:08:13 crc kubenswrapper[4754]: I0105 20:08:13.808075 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 05 20:08:13 crc kubenswrapper[4754]: I0105 20:08:13.836183 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 05 20:08:13 crc kubenswrapper[4754]: I0105 20:08:13.845075 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 05 20:08:13 crc kubenswrapper[4754]: I0105 20:08:13.859686 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 05 20:08:13 crc kubenswrapper[4754]: I0105 20:08:13.865937 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 05 20:08:13 crc kubenswrapper[4754]: I0105 20:08:13.961390 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 05 20:08:13 crc kubenswrapper[4754]: I0105 20:08:13.980390 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 05 20:08:14 crc kubenswrapper[4754]: I0105 20:08:14.018011 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 05 20:08:14 crc kubenswrapper[4754]: I0105 20:08:14.035986 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 05 20:08:14 crc kubenswrapper[4754]: I0105 20:08:14.072587 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" 
Jan 05 20:08:14 crc kubenswrapper[4754]: I0105 20:08:14.112321 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 05 20:08:14 crc kubenswrapper[4754]: I0105 20:08:14.113844 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 05 20:08:14 crc kubenswrapper[4754]: I0105 20:08:14.197878 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 05 20:08:14 crc kubenswrapper[4754]: I0105 20:08:14.302556 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 05 20:08:14 crc kubenswrapper[4754]: I0105 20:08:14.303496 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 05 20:08:14 crc kubenswrapper[4754]: I0105 20:08:14.358026 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 05 20:08:14 crc kubenswrapper[4754]: I0105 20:08:14.371469 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 05 20:08:14 crc kubenswrapper[4754]: I0105 20:08:14.406275 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 05 20:08:14 crc kubenswrapper[4754]: I0105 20:08:14.614901 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 05 20:08:14 crc kubenswrapper[4754]: I0105 20:08:14.645817 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 05 20:08:14 crc kubenswrapper[4754]: I0105 20:08:14.753577 4754 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 05 20:08:14 crc kubenswrapper[4754]: I0105 20:08:14.800579 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 05 20:08:14 crc kubenswrapper[4754]: I0105 20:08:14.881018 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 05 20:08:14 crc kubenswrapper[4754]: I0105 20:08:14.881435 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 05 20:08:14 crc kubenswrapper[4754]: I0105 20:08:14.948335 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 05 20:08:14 crc kubenswrapper[4754]: I0105 20:08:14.966002 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 05 20:08:14 crc kubenswrapper[4754]: I0105 20:08:14.969767 4754 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 05 20:08:14 crc kubenswrapper[4754]: I0105 20:08:14.970288 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://854c952394bbd16261d2abf80b9b7d3425ae9285c4702f8ce095496946efc608" gracePeriod=5 Jan 05 20:08:15 crc kubenswrapper[4754]: I0105 20:08:15.024016 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 05 20:08:15 crc kubenswrapper[4754]: I0105 20:08:15.071003 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 05 20:08:15 crc kubenswrapper[4754]: I0105 20:08:15.173490 4754 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 05 20:08:15 crc kubenswrapper[4754]: I0105 20:08:15.252237 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 05 20:08:15 crc kubenswrapper[4754]: I0105 20:08:15.269054 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 05 20:08:15 crc kubenswrapper[4754]: I0105 20:08:15.303985 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 05 20:08:15 crc kubenswrapper[4754]: I0105 20:08:15.438596 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 05 20:08:15 crc kubenswrapper[4754]: I0105 20:08:15.491418 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 05 20:08:15 crc kubenswrapper[4754]: I0105 20:08:15.643572 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 05 20:08:15 crc kubenswrapper[4754]: I0105 20:08:15.691460 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 05 20:08:15 crc kubenswrapper[4754]: I0105 20:08:15.691730 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 05 20:08:15 crc kubenswrapper[4754]: I0105 20:08:15.749652 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 05 20:08:15 crc kubenswrapper[4754]: I0105 20:08:15.795646 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 05 20:08:15 crc 
kubenswrapper[4754]: I0105 20:08:15.804079 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 05 20:08:15 crc kubenswrapper[4754]: I0105 20:08:15.843986 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 05 20:08:15 crc kubenswrapper[4754]: I0105 20:08:15.971866 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 05 20:08:16 crc kubenswrapper[4754]: I0105 20:08:16.089701 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 05 20:08:16 crc kubenswrapper[4754]: I0105 20:08:16.117695 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 05 20:08:16 crc kubenswrapper[4754]: I0105 20:08:16.286002 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 05 20:08:16 crc kubenswrapper[4754]: I0105 20:08:16.463453 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 05 20:08:16 crc kubenswrapper[4754]: I0105 20:08:16.480124 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 05 20:08:16 crc kubenswrapper[4754]: I0105 20:08:16.492956 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 05 20:08:16 crc kubenswrapper[4754]: I0105 20:08:16.516966 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 05 20:08:16 crc kubenswrapper[4754]: I0105 20:08:16.615667 4754 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 05 20:08:16 crc kubenswrapper[4754]: I0105 20:08:16.621122 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 05 20:08:16 crc kubenswrapper[4754]: I0105 20:08:16.803131 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 05 20:08:16 crc kubenswrapper[4754]: I0105 20:08:16.836817 4754 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 05 20:08:17 crc kubenswrapper[4754]: I0105 20:08:17.084568 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 05 20:08:17 crc kubenswrapper[4754]: I0105 20:08:17.184606 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 05 20:08:17 crc kubenswrapper[4754]: I0105 20:08:17.202540 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 05 20:08:17 crc kubenswrapper[4754]: I0105 20:08:17.207880 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 05 20:08:17 crc kubenswrapper[4754]: I0105 20:08:17.266422 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 05 20:08:17 crc kubenswrapper[4754]: I0105 20:08:17.310635 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 05 20:08:17 crc kubenswrapper[4754]: I0105 20:08:17.392913 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 05 20:08:17 crc kubenswrapper[4754]: 
I0105 20:08:17.404256 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 05 20:08:17 crc kubenswrapper[4754]: I0105 20:08:17.451736 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 05 20:08:17 crc kubenswrapper[4754]: I0105 20:08:17.464067 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 05 20:08:17 crc kubenswrapper[4754]: I0105 20:08:17.564583 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 05 20:08:17 crc kubenswrapper[4754]: I0105 20:08:17.707109 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 05 20:08:17 crc kubenswrapper[4754]: I0105 20:08:17.839910 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 05 20:08:18 crc kubenswrapper[4754]: I0105 20:08:18.109608 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 20:08:18 crc kubenswrapper[4754]: I0105 20:08:18.109704 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 20:08:18 crc kubenswrapper[4754]: I0105 20:08:18.186572 4754 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 05 20:08:18 crc kubenswrapper[4754]: I0105 20:08:18.514252 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 05 20:08:18 crc kubenswrapper[4754]: I0105 20:08:18.556013 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 05 20:08:18 crc kubenswrapper[4754]: I0105 20:08:18.662378 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 05 20:08:19 crc kubenswrapper[4754]: I0105 20:08:19.055717 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 05 20:08:20 crc kubenswrapper[4754]: I0105 20:08:20.581512 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 05 20:08:20 crc kubenswrapper[4754]: I0105 20:08:20.581664 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 20:08:20 crc kubenswrapper[4754]: I0105 20:08:20.692451 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 05 20:08:20 crc kubenswrapper[4754]: I0105 20:08:20.692751 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 05 20:08:20 crc kubenswrapper[4754]: I0105 20:08:20.692971 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 05 20:08:20 crc kubenswrapper[4754]: I0105 20:08:20.693264 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 05 20:08:20 crc kubenswrapper[4754]: I0105 20:08:20.693496 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 05 20:08:20 crc kubenswrapper[4754]: I0105 20:08:20.692560 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: 
"manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 20:08:20 crc kubenswrapper[4754]: I0105 20:08:20.692851 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 20:08:20 crc kubenswrapper[4754]: I0105 20:08:20.693382 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 20:08:20 crc kubenswrapper[4754]: I0105 20:08:20.693580 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 20:08:20 crc kubenswrapper[4754]: I0105 20:08:20.694404 4754 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 05 20:08:20 crc kubenswrapper[4754]: I0105 20:08:20.694573 4754 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 05 20:08:20 crc kubenswrapper[4754]: I0105 20:08:20.694708 4754 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 05 20:08:20 crc kubenswrapper[4754]: I0105 20:08:20.694834 4754 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 05 20:08:20 crc kubenswrapper[4754]: I0105 20:08:20.705904 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 20:08:20 crc kubenswrapper[4754]: I0105 20:08:20.790941 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 05 20:08:20 crc kubenswrapper[4754]: I0105 20:08:20.791029 4754 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="854c952394bbd16261d2abf80b9b7d3425ae9285c4702f8ce095496946efc608" exitCode=137 Jan 05 20:08:20 crc kubenswrapper[4754]: I0105 20:08:20.791088 4754 scope.go:117] "RemoveContainer" containerID="854c952394bbd16261d2abf80b9b7d3425ae9285c4702f8ce095496946efc608" Jan 05 20:08:20 crc kubenswrapper[4754]: I0105 20:08:20.791188 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 20:08:20 crc kubenswrapper[4754]: I0105 20:08:20.796984 4754 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 05 20:08:20 crc kubenswrapper[4754]: I0105 20:08:20.815150 4754 scope.go:117] "RemoveContainer" containerID="854c952394bbd16261d2abf80b9b7d3425ae9285c4702f8ce095496946efc608" Jan 05 20:08:20 crc kubenswrapper[4754]: E0105 20:08:20.815912 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"854c952394bbd16261d2abf80b9b7d3425ae9285c4702f8ce095496946efc608\": container with ID starting with 854c952394bbd16261d2abf80b9b7d3425ae9285c4702f8ce095496946efc608 not found: ID does not exist" containerID="854c952394bbd16261d2abf80b9b7d3425ae9285c4702f8ce095496946efc608" Jan 05 20:08:20 crc kubenswrapper[4754]: I0105 20:08:20.815986 4754 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"854c952394bbd16261d2abf80b9b7d3425ae9285c4702f8ce095496946efc608"} err="failed to get container status \"854c952394bbd16261d2abf80b9b7d3425ae9285c4702f8ce095496946efc608\": rpc error: code = NotFound desc = could not find container \"854c952394bbd16261d2abf80b9b7d3425ae9285c4702f8ce095496946efc608\": container with ID starting with 854c952394bbd16261d2abf80b9b7d3425ae9285c4702f8ce095496946efc608 not found: ID does not exist" Jan 05 20:08:21 crc kubenswrapper[4754]: I0105 20:08:21.596189 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 05 20:08:21 crc kubenswrapper[4754]: I0105 20:08:21.596702 4754 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 05 20:08:21 crc kubenswrapper[4754]: I0105 20:08:21.610624 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 05 20:08:21 crc kubenswrapper[4754]: I0105 20:08:21.610687 4754 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="82823181-83cf-43d2-86ad-7d03a5d4ca6d" Jan 05 20:08:21 crc kubenswrapper[4754]: I0105 20:08:21.615570 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 05 20:08:21 crc kubenswrapper[4754]: I0105 20:08:21.615611 4754 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="82823181-83cf-43d2-86ad-7d03a5d4ca6d" Jan 05 20:08:31 crc kubenswrapper[4754]: I0105 20:08:31.357901 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 05 20:08:32 crc 
kubenswrapper[4754]: I0105 20:08:32.886203 4754 generic.go:334] "Generic (PLEG): container finished" podID="c5e9d216-d5aa-409f-b657-259b931ceaf5" containerID="92a6ca4cb09d04d61122813085a0c01187e7196c4f8e8b79a7ba3f807431bf6a" exitCode=0 Jan 05 20:08:32 crc kubenswrapper[4754]: I0105 20:08:32.886309 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-57nsw" event={"ID":"c5e9d216-d5aa-409f-b657-259b931ceaf5","Type":"ContainerDied","Data":"92a6ca4cb09d04d61122813085a0c01187e7196c4f8e8b79a7ba3f807431bf6a"} Jan 05 20:08:32 crc kubenswrapper[4754]: I0105 20:08:32.887053 4754 scope.go:117] "RemoveContainer" containerID="92a6ca4cb09d04d61122813085a0c01187e7196c4f8e8b79a7ba3f807431bf6a" Jan 05 20:08:33 crc kubenswrapper[4754]: I0105 20:08:33.893911 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-57nsw" event={"ID":"c5e9d216-d5aa-409f-b657-259b931ceaf5","Type":"ContainerStarted","Data":"13a34190694c6477f1b1a29042c02258d63ea33c2266ce809da66651abcfa7e3"} Jan 05 20:08:33 crc kubenswrapper[4754]: I0105 20:08:33.895056 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-57nsw" Jan 05 20:08:33 crc kubenswrapper[4754]: I0105 20:08:33.898181 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-57nsw" Jan 05 20:08:42 crc kubenswrapper[4754]: I0105 20:08:42.674453 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 05 20:08:44 crc kubenswrapper[4754]: I0105 20:08:44.064950 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-srhjf"] Jan 05 20:08:44 crc kubenswrapper[4754]: I0105 20:08:44.065209 4754 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-srhjf" podUID="fd7afa25-8cba-4be3-a6d7-1b30d7adf834" containerName="controller-manager" containerID="cri-o://ddc76f810a461fb7bb1f305823c70324616de7ba87cb9fe6aaaa3063ef785c32" gracePeriod=30 Jan 05 20:08:44 crc kubenswrapper[4754]: I0105 20:08:44.190948 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8jjnh"] Jan 05 20:08:44 crc kubenswrapper[4754]: I0105 20:08:44.191763 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8jjnh" podUID="b2c91790-e7dd-4391-a68b-f5a4a052ca72" containerName="route-controller-manager" containerID="cri-o://8ac47bf136ee8278e1c380b678b84ccf158469060a858e76c06a9145c2a54ac7" gracePeriod=30 Jan 05 20:08:44 crc kubenswrapper[4754]: I0105 20:08:44.487116 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-srhjf" Jan 05 20:08:44 crc kubenswrapper[4754]: I0105 20:08:44.571282 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fd7afa25-8cba-4be3-a6d7-1b30d7adf834-proxy-ca-bundles\") pod \"fd7afa25-8cba-4be3-a6d7-1b30d7adf834\" (UID: \"fd7afa25-8cba-4be3-a6d7-1b30d7adf834\") " Jan 05 20:08:44 crc kubenswrapper[4754]: I0105 20:08:44.571339 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd7afa25-8cba-4be3-a6d7-1b30d7adf834-config\") pod \"fd7afa25-8cba-4be3-a6d7-1b30d7adf834\" (UID: \"fd7afa25-8cba-4be3-a6d7-1b30d7adf834\") " Jan 05 20:08:44 crc kubenswrapper[4754]: I0105 20:08:44.571397 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7wgc\" (UniqueName: 
\"kubernetes.io/projected/fd7afa25-8cba-4be3-a6d7-1b30d7adf834-kube-api-access-v7wgc\") pod \"fd7afa25-8cba-4be3-a6d7-1b30d7adf834\" (UID: \"fd7afa25-8cba-4be3-a6d7-1b30d7adf834\") " Jan 05 20:08:44 crc kubenswrapper[4754]: I0105 20:08:44.571606 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fd7afa25-8cba-4be3-a6d7-1b30d7adf834-client-ca\") pod \"fd7afa25-8cba-4be3-a6d7-1b30d7adf834\" (UID: \"fd7afa25-8cba-4be3-a6d7-1b30d7adf834\") " Jan 05 20:08:44 crc kubenswrapper[4754]: I0105 20:08:44.571641 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd7afa25-8cba-4be3-a6d7-1b30d7adf834-serving-cert\") pod \"fd7afa25-8cba-4be3-a6d7-1b30d7adf834\" (UID: \"fd7afa25-8cba-4be3-a6d7-1b30d7adf834\") " Jan 05 20:08:44 crc kubenswrapper[4754]: I0105 20:08:44.572197 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd7afa25-8cba-4be3-a6d7-1b30d7adf834-client-ca" (OuterVolumeSpecName: "client-ca") pod "fd7afa25-8cba-4be3-a6d7-1b30d7adf834" (UID: "fd7afa25-8cba-4be3-a6d7-1b30d7adf834"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:08:44 crc kubenswrapper[4754]: I0105 20:08:44.572245 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd7afa25-8cba-4be3-a6d7-1b30d7adf834-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "fd7afa25-8cba-4be3-a6d7-1b30d7adf834" (UID: "fd7afa25-8cba-4be3-a6d7-1b30d7adf834"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:08:44 crc kubenswrapper[4754]: I0105 20:08:44.572270 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd7afa25-8cba-4be3-a6d7-1b30d7adf834-config" (OuterVolumeSpecName: "config") pod "fd7afa25-8cba-4be3-a6d7-1b30d7adf834" (UID: "fd7afa25-8cba-4be3-a6d7-1b30d7adf834"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:08:44 crc kubenswrapper[4754]: I0105 20:08:44.576546 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd7afa25-8cba-4be3-a6d7-1b30d7adf834-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "fd7afa25-8cba-4be3-a6d7-1b30d7adf834" (UID: "fd7afa25-8cba-4be3-a6d7-1b30d7adf834"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:08:44 crc kubenswrapper[4754]: I0105 20:08:44.583457 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd7afa25-8cba-4be3-a6d7-1b30d7adf834-kube-api-access-v7wgc" (OuterVolumeSpecName: "kube-api-access-v7wgc") pod "fd7afa25-8cba-4be3-a6d7-1b30d7adf834" (UID: "fd7afa25-8cba-4be3-a6d7-1b30d7adf834"). InnerVolumeSpecName "kube-api-access-v7wgc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:08:44 crc kubenswrapper[4754]: I0105 20:08:44.674550 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7wgc\" (UniqueName: \"kubernetes.io/projected/fd7afa25-8cba-4be3-a6d7-1b30d7adf834-kube-api-access-v7wgc\") on node \"crc\" DevicePath \"\"" Jan 05 20:08:44 crc kubenswrapper[4754]: I0105 20:08:44.674858 4754 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fd7afa25-8cba-4be3-a6d7-1b30d7adf834-client-ca\") on node \"crc\" DevicePath \"\"" Jan 05 20:08:44 crc kubenswrapper[4754]: I0105 20:08:44.674870 4754 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd7afa25-8cba-4be3-a6d7-1b30d7adf834-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 20:08:44 crc kubenswrapper[4754]: I0105 20:08:44.674882 4754 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fd7afa25-8cba-4be3-a6d7-1b30d7adf834-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 05 20:08:44 crc kubenswrapper[4754]: I0105 20:08:44.674893 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd7afa25-8cba-4be3-a6d7-1b30d7adf834-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:08:44 crc kubenswrapper[4754]: I0105 20:08:44.887002 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8jjnh" Jan 05 20:08:44 crc kubenswrapper[4754]: I0105 20:08:44.979253 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2c91790-e7dd-4391-a68b-f5a4a052ca72-config\") pod \"b2c91790-e7dd-4391-a68b-f5a4a052ca72\" (UID: \"b2c91790-e7dd-4391-a68b-f5a4a052ca72\") " Jan 05 20:08:44 crc kubenswrapper[4754]: I0105 20:08:44.979485 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mh9j4\" (UniqueName: \"kubernetes.io/projected/b2c91790-e7dd-4391-a68b-f5a4a052ca72-kube-api-access-mh9j4\") pod \"b2c91790-e7dd-4391-a68b-f5a4a052ca72\" (UID: \"b2c91790-e7dd-4391-a68b-f5a4a052ca72\") " Jan 05 20:08:44 crc kubenswrapper[4754]: I0105 20:08:44.979590 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b2c91790-e7dd-4391-a68b-f5a4a052ca72-client-ca\") pod \"b2c91790-e7dd-4391-a68b-f5a4a052ca72\" (UID: \"b2c91790-e7dd-4391-a68b-f5a4a052ca72\") " Jan 05 20:08:44 crc kubenswrapper[4754]: I0105 20:08:44.979684 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2c91790-e7dd-4391-a68b-f5a4a052ca72-serving-cert\") pod \"b2c91790-e7dd-4391-a68b-f5a4a052ca72\" (UID: \"b2c91790-e7dd-4391-a68b-f5a4a052ca72\") " Jan 05 20:08:44 crc kubenswrapper[4754]: I0105 20:08:44.980617 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2c91790-e7dd-4391-a68b-f5a4a052ca72-config" (OuterVolumeSpecName: "config") pod "b2c91790-e7dd-4391-a68b-f5a4a052ca72" (UID: "b2c91790-e7dd-4391-a68b-f5a4a052ca72"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:08:44 crc kubenswrapper[4754]: I0105 20:08:44.980777 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2c91790-e7dd-4391-a68b-f5a4a052ca72-client-ca" (OuterVolumeSpecName: "client-ca") pod "b2c91790-e7dd-4391-a68b-f5a4a052ca72" (UID: "b2c91790-e7dd-4391-a68b-f5a4a052ca72"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:08:44 crc kubenswrapper[4754]: I0105 20:08:44.985941 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2c91790-e7dd-4391-a68b-f5a4a052ca72-kube-api-access-mh9j4" (OuterVolumeSpecName: "kube-api-access-mh9j4") pod "b2c91790-e7dd-4391-a68b-f5a4a052ca72" (UID: "b2c91790-e7dd-4391-a68b-f5a4a052ca72"). InnerVolumeSpecName "kube-api-access-mh9j4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:08:44 crc kubenswrapper[4754]: I0105 20:08:44.986175 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2c91790-e7dd-4391-a68b-f5a4a052ca72-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b2c91790-e7dd-4391-a68b-f5a4a052ca72" (UID: "b2c91790-e7dd-4391-a68b-f5a4a052ca72"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:08:44 crc kubenswrapper[4754]: I0105 20:08:44.986954 4754 generic.go:334] "Generic (PLEG): container finished" podID="fd7afa25-8cba-4be3-a6d7-1b30d7adf834" containerID="ddc76f810a461fb7bb1f305823c70324616de7ba87cb9fe6aaaa3063ef785c32" exitCode=0 Jan 05 20:08:44 crc kubenswrapper[4754]: I0105 20:08:44.987014 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-srhjf" event={"ID":"fd7afa25-8cba-4be3-a6d7-1b30d7adf834","Type":"ContainerDied","Data":"ddc76f810a461fb7bb1f305823c70324616de7ba87cb9fe6aaaa3063ef785c32"} Jan 05 20:08:44 crc kubenswrapper[4754]: I0105 20:08:44.987082 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-srhjf" Jan 05 20:08:44 crc kubenswrapper[4754]: I0105 20:08:44.987112 4754 scope.go:117] "RemoveContainer" containerID="ddc76f810a461fb7bb1f305823c70324616de7ba87cb9fe6aaaa3063ef785c32" Jan 05 20:08:44 crc kubenswrapper[4754]: I0105 20:08:44.987091 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-srhjf" event={"ID":"fd7afa25-8cba-4be3-a6d7-1b30d7adf834","Type":"ContainerDied","Data":"60d8b3d590f3e087162549fea5b822a6a97fd2e7735ea7da1b5c3dbb533e7534"} Jan 05 20:08:44 crc kubenswrapper[4754]: I0105 20:08:44.992585 4754 generic.go:334] "Generic (PLEG): container finished" podID="b2c91790-e7dd-4391-a68b-f5a4a052ca72" containerID="8ac47bf136ee8278e1c380b678b84ccf158469060a858e76c06a9145c2a54ac7" exitCode=0 Jan 05 20:08:44 crc kubenswrapper[4754]: I0105 20:08:44.992663 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8jjnh" event={"ID":"b2c91790-e7dd-4391-a68b-f5a4a052ca72","Type":"ContainerDied","Data":"8ac47bf136ee8278e1c380b678b84ccf158469060a858e76c06a9145c2a54ac7"} Jan 05 20:08:44 
crc kubenswrapper[4754]: I0105 20:08:44.992719 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8jjnh" event={"ID":"b2c91790-e7dd-4391-a68b-f5a4a052ca72","Type":"ContainerDied","Data":"ee33f65ea8883ec29f42ab26a27863a837bfff15ef579edd659d588d6958f1ee"} Jan 05 20:08:44 crc kubenswrapper[4754]: I0105 20:08:44.992956 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8jjnh" Jan 05 20:08:45 crc kubenswrapper[4754]: I0105 20:08:45.017217 4754 scope.go:117] "RemoveContainer" containerID="ddc76f810a461fb7bb1f305823c70324616de7ba87cb9fe6aaaa3063ef785c32" Jan 05 20:08:45 crc kubenswrapper[4754]: E0105 20:08:45.017868 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddc76f810a461fb7bb1f305823c70324616de7ba87cb9fe6aaaa3063ef785c32\": container with ID starting with ddc76f810a461fb7bb1f305823c70324616de7ba87cb9fe6aaaa3063ef785c32 not found: ID does not exist" containerID="ddc76f810a461fb7bb1f305823c70324616de7ba87cb9fe6aaaa3063ef785c32" Jan 05 20:08:45 crc kubenswrapper[4754]: I0105 20:08:45.017925 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddc76f810a461fb7bb1f305823c70324616de7ba87cb9fe6aaaa3063ef785c32"} err="failed to get container status \"ddc76f810a461fb7bb1f305823c70324616de7ba87cb9fe6aaaa3063ef785c32\": rpc error: code = NotFound desc = could not find container \"ddc76f810a461fb7bb1f305823c70324616de7ba87cb9fe6aaaa3063ef785c32\": container with ID starting with ddc76f810a461fb7bb1f305823c70324616de7ba87cb9fe6aaaa3063ef785c32 not found: ID does not exist" Jan 05 20:08:45 crc kubenswrapper[4754]: I0105 20:08:45.017960 4754 scope.go:117] "RemoveContainer" containerID="8ac47bf136ee8278e1c380b678b84ccf158469060a858e76c06a9145c2a54ac7" Jan 05 20:08:45 crc 
kubenswrapper[4754]: I0105 20:08:45.053651 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-srhjf"] Jan 05 20:08:45 crc kubenswrapper[4754]: I0105 20:08:45.058448 4754 scope.go:117] "RemoveContainer" containerID="8ac47bf136ee8278e1c380b678b84ccf158469060a858e76c06a9145c2a54ac7" Jan 05 20:08:45 crc kubenswrapper[4754]: E0105 20:08:45.059007 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ac47bf136ee8278e1c380b678b84ccf158469060a858e76c06a9145c2a54ac7\": container with ID starting with 8ac47bf136ee8278e1c380b678b84ccf158469060a858e76c06a9145c2a54ac7 not found: ID does not exist" containerID="8ac47bf136ee8278e1c380b678b84ccf158469060a858e76c06a9145c2a54ac7" Jan 05 20:08:45 crc kubenswrapper[4754]: I0105 20:08:45.059076 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ac47bf136ee8278e1c380b678b84ccf158469060a858e76c06a9145c2a54ac7"} err="failed to get container status \"8ac47bf136ee8278e1c380b678b84ccf158469060a858e76c06a9145c2a54ac7\": rpc error: code = NotFound desc = could not find container \"8ac47bf136ee8278e1c380b678b84ccf158469060a858e76c06a9145c2a54ac7\": container with ID starting with 8ac47bf136ee8278e1c380b678b84ccf158469060a858e76c06a9145c2a54ac7 not found: ID does not exist" Jan 05 20:08:45 crc kubenswrapper[4754]: I0105 20:08:45.063924 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-srhjf"] Jan 05 20:08:45 crc kubenswrapper[4754]: I0105 20:08:45.068644 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8jjnh"] Jan 05 20:08:45 crc kubenswrapper[4754]: I0105 20:08:45.077081 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8jjnh"] 
Jan 05 20:08:45 crc kubenswrapper[4754]: I0105 20:08:45.082263 4754 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b2c91790-e7dd-4391-a68b-f5a4a052ca72-client-ca\") on node \"crc\" DevicePath \"\"" Jan 05 20:08:45 crc kubenswrapper[4754]: I0105 20:08:45.082353 4754 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2c91790-e7dd-4391-a68b-f5a4a052ca72-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 20:08:45 crc kubenswrapper[4754]: I0105 20:08:45.082413 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2c91790-e7dd-4391-a68b-f5a4a052ca72-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:08:45 crc kubenswrapper[4754]: I0105 20:08:45.082437 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mh9j4\" (UniqueName: \"kubernetes.io/projected/b2c91790-e7dd-4391-a68b-f5a4a052ca72-kube-api-access-mh9j4\") on node \"crc\" DevicePath \"\"" Jan 05 20:08:45 crc kubenswrapper[4754]: I0105 20:08:45.603691 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2c91790-e7dd-4391-a68b-f5a4a052ca72" path="/var/lib/kubelet/pods/b2c91790-e7dd-4391-a68b-f5a4a052ca72/volumes" Jan 05 20:08:45 crc kubenswrapper[4754]: I0105 20:08:45.604771 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd7afa25-8cba-4be3-a6d7-1b30d7adf834" path="/var/lib/kubelet/pods/fd7afa25-8cba-4be3-a6d7-1b30d7adf834/volumes" Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 20:08:46.262722 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5947857f89-7clr6"] Jan 05 20:08:46 crc kubenswrapper[4754]: E0105 20:08:46.263477 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 
20:08:46.263504 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 05 20:08:46 crc kubenswrapper[4754]: E0105 20:08:46.263537 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="850cae84-31d6-43c8-a3b8-9ebb3e5b158a" containerName="installer" Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 20:08:46.263551 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="850cae84-31d6-43c8-a3b8-9ebb3e5b158a" containerName="installer" Jan 05 20:08:46 crc kubenswrapper[4754]: E0105 20:08:46.263573 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2c91790-e7dd-4391-a68b-f5a4a052ca72" containerName="route-controller-manager" Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 20:08:46.263586 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2c91790-e7dd-4391-a68b-f5a4a052ca72" containerName="route-controller-manager" Jan 05 20:08:46 crc kubenswrapper[4754]: E0105 20:08:46.263606 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd7afa25-8cba-4be3-a6d7-1b30d7adf834" containerName="controller-manager" Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 20:08:46.263619 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd7afa25-8cba-4be3-a6d7-1b30d7adf834" containerName="controller-manager" Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 20:08:46.263802 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2c91790-e7dd-4391-a68b-f5a4a052ca72" containerName="route-controller-manager" Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 20:08:46.263821 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 20:08:46.263841 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="850cae84-31d6-43c8-a3b8-9ebb3e5b158a" containerName="installer" Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 
20:08:46.263861 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd7afa25-8cba-4be3-a6d7-1b30d7adf834" containerName="controller-manager" Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 20:08:46.264994 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5947857f89-7clr6" Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 20:08:46.268918 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 20:08:46.269394 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 20:08:46.269541 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 20:08:46.269572 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 20:08:46.269603 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 20:08:46.269407 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 20:08:46.274563 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59b55d474f-z77ks"] Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 20:08:46.276878 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59b55d474f-z77ks" Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 20:08:46.285672 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 20:08:46.285956 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 20:08:46.286149 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 20:08:46.286152 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 20:08:46.286631 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 20:08:46.286920 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 20:08:46.293188 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 20:08:46.294848 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5947857f89-7clr6"] Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 20:08:46.331568 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59b55d474f-z77ks"] Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 20:08:46.403997 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-rzz8t\" (UniqueName: \"kubernetes.io/projected/5e3066e3-5277-4bc9-b96b-360fdd4f696f-kube-api-access-rzz8t\") pod \"route-controller-manager-59b55d474f-z77ks\" (UID: \"5e3066e3-5277-4bc9-b96b-360fdd4f696f\") " pod="openshift-route-controller-manager/route-controller-manager-59b55d474f-z77ks" Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 20:08:46.404160 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e3066e3-5277-4bc9-b96b-360fdd4f696f-config\") pod \"route-controller-manager-59b55d474f-z77ks\" (UID: \"5e3066e3-5277-4bc9-b96b-360fdd4f696f\") " pod="openshift-route-controller-manager/route-controller-manager-59b55d474f-z77ks" Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 20:08:46.404191 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7vdv\" (UniqueName: \"kubernetes.io/projected/f6729d35-8afb-467a-8b7e-f0c3a5f51648-kube-api-access-b7vdv\") pod \"controller-manager-5947857f89-7clr6\" (UID: \"f6729d35-8afb-467a-8b7e-f0c3a5f51648\") " pod="openshift-controller-manager/controller-manager-5947857f89-7clr6" Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 20:08:46.404355 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6729d35-8afb-467a-8b7e-f0c3a5f51648-serving-cert\") pod \"controller-manager-5947857f89-7clr6\" (UID: \"f6729d35-8afb-467a-8b7e-f0c3a5f51648\") " pod="openshift-controller-manager/controller-manager-5947857f89-7clr6" Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 20:08:46.404389 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6729d35-8afb-467a-8b7e-f0c3a5f51648-client-ca\") pod \"controller-manager-5947857f89-7clr6\" (UID: \"f6729d35-8afb-467a-8b7e-f0c3a5f51648\") 
" pod="openshift-controller-manager/controller-manager-5947857f89-7clr6" Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 20:08:46.404569 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6729d35-8afb-467a-8b7e-f0c3a5f51648-config\") pod \"controller-manager-5947857f89-7clr6\" (UID: \"f6729d35-8afb-467a-8b7e-f0c3a5f51648\") " pod="openshift-controller-manager/controller-manager-5947857f89-7clr6" Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 20:08:46.404706 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6729d35-8afb-467a-8b7e-f0c3a5f51648-proxy-ca-bundles\") pod \"controller-manager-5947857f89-7clr6\" (UID: \"f6729d35-8afb-467a-8b7e-f0c3a5f51648\") " pod="openshift-controller-manager/controller-manager-5947857f89-7clr6" Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 20:08:46.404927 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e3066e3-5277-4bc9-b96b-360fdd4f696f-serving-cert\") pod \"route-controller-manager-59b55d474f-z77ks\" (UID: \"5e3066e3-5277-4bc9-b96b-360fdd4f696f\") " pod="openshift-route-controller-manager/route-controller-manager-59b55d474f-z77ks" Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 20:08:46.405009 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e3066e3-5277-4bc9-b96b-360fdd4f696f-client-ca\") pod \"route-controller-manager-59b55d474f-z77ks\" (UID: \"5e3066e3-5277-4bc9-b96b-360fdd4f696f\") " pod="openshift-route-controller-manager/route-controller-manager-59b55d474f-z77ks" Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 20:08:46.506233 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/5e3066e3-5277-4bc9-b96b-360fdd4f696f-config\") pod \"route-controller-manager-59b55d474f-z77ks\" (UID: \"5e3066e3-5277-4bc9-b96b-360fdd4f696f\") " pod="openshift-route-controller-manager/route-controller-manager-59b55d474f-z77ks" Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 20:08:46.506344 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7vdv\" (UniqueName: \"kubernetes.io/projected/f6729d35-8afb-467a-8b7e-f0c3a5f51648-kube-api-access-b7vdv\") pod \"controller-manager-5947857f89-7clr6\" (UID: \"f6729d35-8afb-467a-8b7e-f0c3a5f51648\") " pod="openshift-controller-manager/controller-manager-5947857f89-7clr6" Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 20:08:46.506389 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6729d35-8afb-467a-8b7e-f0c3a5f51648-serving-cert\") pod \"controller-manager-5947857f89-7clr6\" (UID: \"f6729d35-8afb-467a-8b7e-f0c3a5f51648\") " pod="openshift-controller-manager/controller-manager-5947857f89-7clr6" Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 20:08:46.506424 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6729d35-8afb-467a-8b7e-f0c3a5f51648-client-ca\") pod \"controller-manager-5947857f89-7clr6\" (UID: \"f6729d35-8afb-467a-8b7e-f0c3a5f51648\") " pod="openshift-controller-manager/controller-manager-5947857f89-7clr6" Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 20:08:46.506472 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6729d35-8afb-467a-8b7e-f0c3a5f51648-config\") pod \"controller-manager-5947857f89-7clr6\" (UID: \"f6729d35-8afb-467a-8b7e-f0c3a5f51648\") " pod="openshift-controller-manager/controller-manager-5947857f89-7clr6" Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 
20:08:46.506539 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6729d35-8afb-467a-8b7e-f0c3a5f51648-proxy-ca-bundles\") pod \"controller-manager-5947857f89-7clr6\" (UID: \"f6729d35-8afb-467a-8b7e-f0c3a5f51648\") " pod="openshift-controller-manager/controller-manager-5947857f89-7clr6" Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 20:08:46.506603 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e3066e3-5277-4bc9-b96b-360fdd4f696f-serving-cert\") pod \"route-controller-manager-59b55d474f-z77ks\" (UID: \"5e3066e3-5277-4bc9-b96b-360fdd4f696f\") " pod="openshift-route-controller-manager/route-controller-manager-59b55d474f-z77ks" Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 20:08:46.506644 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e3066e3-5277-4bc9-b96b-360fdd4f696f-client-ca\") pod \"route-controller-manager-59b55d474f-z77ks\" (UID: \"5e3066e3-5277-4bc9-b96b-360fdd4f696f\") " pod="openshift-route-controller-manager/route-controller-manager-59b55d474f-z77ks" Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 20:08:46.506703 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzz8t\" (UniqueName: \"kubernetes.io/projected/5e3066e3-5277-4bc9-b96b-360fdd4f696f-kube-api-access-rzz8t\") pod \"route-controller-manager-59b55d474f-z77ks\" (UID: \"5e3066e3-5277-4bc9-b96b-360fdd4f696f\") " pod="openshift-route-controller-manager/route-controller-manager-59b55d474f-z77ks" Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 20:08:46.507953 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e3066e3-5277-4bc9-b96b-360fdd4f696f-client-ca\") pod \"route-controller-manager-59b55d474f-z77ks\" (UID: 
\"5e3066e3-5277-4bc9-b96b-360fdd4f696f\") " pod="openshift-route-controller-manager/route-controller-manager-59b55d474f-z77ks" Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 20:08:46.508671 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6729d35-8afb-467a-8b7e-f0c3a5f51648-client-ca\") pod \"controller-manager-5947857f89-7clr6\" (UID: \"f6729d35-8afb-467a-8b7e-f0c3a5f51648\") " pod="openshift-controller-manager/controller-manager-5947857f89-7clr6" Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 20:08:46.509515 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e3066e3-5277-4bc9-b96b-360fdd4f696f-config\") pod \"route-controller-manager-59b55d474f-z77ks\" (UID: \"5e3066e3-5277-4bc9-b96b-360fdd4f696f\") " pod="openshift-route-controller-manager/route-controller-manager-59b55d474f-z77ks" Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 20:08:46.510092 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6729d35-8afb-467a-8b7e-f0c3a5f51648-proxy-ca-bundles\") pod \"controller-manager-5947857f89-7clr6\" (UID: \"f6729d35-8afb-467a-8b7e-f0c3a5f51648\") " pod="openshift-controller-manager/controller-manager-5947857f89-7clr6" Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 20:08:46.510265 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6729d35-8afb-467a-8b7e-f0c3a5f51648-config\") pod \"controller-manager-5947857f89-7clr6\" (UID: \"f6729d35-8afb-467a-8b7e-f0c3a5f51648\") " pod="openshift-controller-manager/controller-manager-5947857f89-7clr6" Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 20:08:46.522772 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5e3066e3-5277-4bc9-b96b-360fdd4f696f-serving-cert\") pod \"route-controller-manager-59b55d474f-z77ks\" (UID: \"5e3066e3-5277-4bc9-b96b-360fdd4f696f\") " pod="openshift-route-controller-manager/route-controller-manager-59b55d474f-z77ks" Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 20:08:46.522872 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6729d35-8afb-467a-8b7e-f0c3a5f51648-serving-cert\") pod \"controller-manager-5947857f89-7clr6\" (UID: \"f6729d35-8afb-467a-8b7e-f0c3a5f51648\") " pod="openshift-controller-manager/controller-manager-5947857f89-7clr6" Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 20:08:46.535729 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzz8t\" (UniqueName: \"kubernetes.io/projected/5e3066e3-5277-4bc9-b96b-360fdd4f696f-kube-api-access-rzz8t\") pod \"route-controller-manager-59b55d474f-z77ks\" (UID: \"5e3066e3-5277-4bc9-b96b-360fdd4f696f\") " pod="openshift-route-controller-manager/route-controller-manager-59b55d474f-z77ks" Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 20:08:46.536573 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7vdv\" (UniqueName: \"kubernetes.io/projected/f6729d35-8afb-467a-8b7e-f0c3a5f51648-kube-api-access-b7vdv\") pod \"controller-manager-5947857f89-7clr6\" (UID: \"f6729d35-8afb-467a-8b7e-f0c3a5f51648\") " pod="openshift-controller-manager/controller-manager-5947857f89-7clr6" Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 20:08:46.622687 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5947857f89-7clr6" Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 20:08:46.636463 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59b55d474f-z77ks" Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 20:08:46.855606 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59b55d474f-z77ks"] Jan 05 20:08:46 crc kubenswrapper[4754]: I0105 20:08:46.898848 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5947857f89-7clr6"] Jan 05 20:08:47 crc kubenswrapper[4754]: I0105 20:08:47.008675 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59b55d474f-z77ks" event={"ID":"5e3066e3-5277-4bc9-b96b-360fdd4f696f","Type":"ContainerStarted","Data":"1cab8c42f2612c63ea1efa3d1967edaa6f7d85d852b27923e978796154b74504"} Jan 05 20:08:47 crc kubenswrapper[4754]: I0105 20:08:47.008744 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59b55d474f-z77ks" event={"ID":"5e3066e3-5277-4bc9-b96b-360fdd4f696f","Type":"ContainerStarted","Data":"9055bbb2a3f4c6fdfcfb3f2778b29413e349b7fd776816c02f73fa61a5b9f9f2"} Jan 05 20:08:47 crc kubenswrapper[4754]: I0105 20:08:47.008930 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-59b55d474f-z77ks" Jan 05 20:08:47 crc kubenswrapper[4754]: I0105 20:08:47.011447 4754 patch_prober.go:28] interesting pod/route-controller-manager-59b55d474f-z77ks container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused" start-of-body= Jan 05 20:08:47 crc kubenswrapper[4754]: I0105 20:08:47.011480 4754 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-route-controller-manager/route-controller-manager-59b55d474f-z77ks" podUID="5e3066e3-5277-4bc9-b96b-360fdd4f696f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused" Jan 05 20:08:47 crc kubenswrapper[4754]: I0105 20:08:47.028703 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5947857f89-7clr6" event={"ID":"f6729d35-8afb-467a-8b7e-f0c3a5f51648","Type":"ContainerStarted","Data":"1f9c168696e650a7ebf775e8e54a4e52037ed266c02b8351bba343dba4cf1c29"} Jan 05 20:08:47 crc kubenswrapper[4754]: I0105 20:08:47.212955 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 05 20:08:48 crc kubenswrapper[4754]: I0105 20:08:48.040309 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5947857f89-7clr6" event={"ID":"f6729d35-8afb-467a-8b7e-f0c3a5f51648","Type":"ContainerStarted","Data":"5828f48ab42e6e3e0de4996affab80454fb28118e5ae035d66b5b71cca7c3a8f"} Jan 05 20:08:48 crc kubenswrapper[4754]: I0105 20:08:48.040777 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5947857f89-7clr6" Jan 05 20:08:48 crc kubenswrapper[4754]: I0105 20:08:48.045872 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-59b55d474f-z77ks" Jan 05 20:08:48 crc kubenswrapper[4754]: I0105 20:08:48.045950 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5947857f89-7clr6" Jan 05 20:08:48 crc kubenswrapper[4754]: I0105 20:08:48.061901 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5947857f89-7clr6" 
podStartSLOduration=4.061884604 podStartE2EDuration="4.061884604s" podCreationTimestamp="2026-01-05 20:08:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:08:48.060307879 +0000 UTC m=+214.769491763" watchObservedRunningTime="2026-01-05 20:08:48.061884604 +0000 UTC m=+214.771068498" Jan 05 20:08:48 crc kubenswrapper[4754]: I0105 20:08:48.064173 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-59b55d474f-z77ks" podStartSLOduration=4.06415994 podStartE2EDuration="4.06415994s" podCreationTimestamp="2026-01-05 20:08:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:08:47.037880341 +0000 UTC m=+213.747064215" watchObservedRunningTime="2026-01-05 20:08:48.06415994 +0000 UTC m=+214.773343854" Jan 05 20:08:48 crc kubenswrapper[4754]: I0105 20:08:48.109328 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 20:08:48 crc kubenswrapper[4754]: I0105 20:08:48.109392 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 20:08:48 crc kubenswrapper[4754]: I0105 20:08:48.109440 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" Jan 05 20:08:48 crc kubenswrapper[4754]: I0105 
20:08:48.110034 4754 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"56b7d7218a5605f87f91e22b9cc79e416eedb63b257d6326203830990e6ddc5c"} pod="openshift-machine-config-operator/machine-config-daemon-pkzls" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 05 20:08:48 crc kubenswrapper[4754]: I0105 20:08:48.110100 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" containerID="cri-o://56b7d7218a5605f87f91e22b9cc79e416eedb63b257d6326203830990e6ddc5c" gracePeriod=600 Jan 05 20:08:49 crc kubenswrapper[4754]: I0105 20:08:49.052604 4754 generic.go:334] "Generic (PLEG): container finished" podID="1ce145f2-f010-4086-963c-23e68ff9e280" containerID="56b7d7218a5605f87f91e22b9cc79e416eedb63b257d6326203830990e6ddc5c" exitCode=0 Jan 05 20:08:49 crc kubenswrapper[4754]: I0105 20:08:49.052738 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" event={"ID":"1ce145f2-f010-4086-963c-23e68ff9e280","Type":"ContainerDied","Data":"56b7d7218a5605f87f91e22b9cc79e416eedb63b257d6326203830990e6ddc5c"} Jan 05 20:08:49 crc kubenswrapper[4754]: I0105 20:08:49.053952 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" event={"ID":"1ce145f2-f010-4086-963c-23e68ff9e280","Type":"ContainerStarted","Data":"c1f64966f183ebad7c2a6c9fd69efae0e7933296b4a18be79d5bf6fe79950c91"} Jan 05 20:08:53 crc kubenswrapper[4754]: I0105 20:08:53.669343 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4zc9h"] Jan 05 20:08:53 crc kubenswrapper[4754]: I0105 20:08:53.670187 4754 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openshift-marketplace/certified-operators-4zc9h" podUID="9653b33c-3e18-4d7f-81ed-febff4a00a35" containerName="registry-server" containerID="cri-o://cf581f438214eb5f36800db1bd5ea29d362b643bfdacc5977eaf2617d7d4bd38" gracePeriod=2 Jan 05 20:08:53 crc kubenswrapper[4754]: I0105 20:08:53.866615 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xm8jv"] Jan 05 20:08:53 crc kubenswrapper[4754]: I0105 20:08:53.867121 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xm8jv" podUID="ba326117-1dcc-4468-80bb-a54b9cc83c01" containerName="registry-server" containerID="cri-o://c529683b78117396fd1b24fe5d7425331d40f28b4cc8473a906634c62eabacaf" gracePeriod=2 Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.082686 4754 generic.go:334] "Generic (PLEG): container finished" podID="9653b33c-3e18-4d7f-81ed-febff4a00a35" containerID="cf581f438214eb5f36800db1bd5ea29d362b643bfdacc5977eaf2617d7d4bd38" exitCode=0 Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.082772 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4zc9h" event={"ID":"9653b33c-3e18-4d7f-81ed-febff4a00a35","Type":"ContainerDied","Data":"cf581f438214eb5f36800db1bd5ea29d362b643bfdacc5977eaf2617d7d4bd38"} Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.084083 4754 generic.go:334] "Generic (PLEG): container finished" podID="ba326117-1dcc-4468-80bb-a54b9cc83c01" containerID="c529683b78117396fd1b24fe5d7425331d40f28b4cc8473a906634c62eabacaf" exitCode=0 Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.084111 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xm8jv" event={"ID":"ba326117-1dcc-4468-80bb-a54b9cc83c01","Type":"ContainerDied","Data":"c529683b78117396fd1b24fe5d7425331d40f28b4cc8473a906634c62eabacaf"} Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 
20:08:54.153975 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4zc9h" Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.306169 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xm8jv" Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.324348 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vkp5\" (UniqueName: \"kubernetes.io/projected/9653b33c-3e18-4d7f-81ed-febff4a00a35-kube-api-access-4vkp5\") pod \"9653b33c-3e18-4d7f-81ed-febff4a00a35\" (UID: \"9653b33c-3e18-4d7f-81ed-febff4a00a35\") " Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.324473 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9653b33c-3e18-4d7f-81ed-febff4a00a35-utilities\") pod \"9653b33c-3e18-4d7f-81ed-febff4a00a35\" (UID: \"9653b33c-3e18-4d7f-81ed-febff4a00a35\") " Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.324509 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9653b33c-3e18-4d7f-81ed-febff4a00a35-catalog-content\") pod \"9653b33c-3e18-4d7f-81ed-febff4a00a35\" (UID: \"9653b33c-3e18-4d7f-81ed-febff4a00a35\") " Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.326758 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9653b33c-3e18-4d7f-81ed-febff4a00a35-utilities" (OuterVolumeSpecName: "utilities") pod "9653b33c-3e18-4d7f-81ed-febff4a00a35" (UID: "9653b33c-3e18-4d7f-81ed-febff4a00a35"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.333420 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9653b33c-3e18-4d7f-81ed-febff4a00a35-kube-api-access-4vkp5" (OuterVolumeSpecName: "kube-api-access-4vkp5") pod "9653b33c-3e18-4d7f-81ed-febff4a00a35" (UID: "9653b33c-3e18-4d7f-81ed-febff4a00a35"). InnerVolumeSpecName "kube-api-access-4vkp5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.383666 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9653b33c-3e18-4d7f-81ed-febff4a00a35-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9653b33c-3e18-4d7f-81ed-febff4a00a35" (UID: "9653b33c-3e18-4d7f-81ed-febff4a00a35"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.427714 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba326117-1dcc-4468-80bb-a54b9cc83c01-catalog-content\") pod \"ba326117-1dcc-4468-80bb-a54b9cc83c01\" (UID: \"ba326117-1dcc-4468-80bb-a54b9cc83c01\") " Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.428040 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwg4j\" (UniqueName: \"kubernetes.io/projected/ba326117-1dcc-4468-80bb-a54b9cc83c01-kube-api-access-jwg4j\") pod \"ba326117-1dcc-4468-80bb-a54b9cc83c01\" (UID: \"ba326117-1dcc-4468-80bb-a54b9cc83c01\") " Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.428164 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba326117-1dcc-4468-80bb-a54b9cc83c01-utilities\") pod \"ba326117-1dcc-4468-80bb-a54b9cc83c01\" (UID: 
\"ba326117-1dcc-4468-80bb-a54b9cc83c01\") " Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.428674 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vkp5\" (UniqueName: \"kubernetes.io/projected/9653b33c-3e18-4d7f-81ed-febff4a00a35-kube-api-access-4vkp5\") on node \"crc\" DevicePath \"\"" Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.428795 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9653b33c-3e18-4d7f-81ed-febff4a00a35-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.428985 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9653b33c-3e18-4d7f-81ed-febff4a00a35-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.429414 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba326117-1dcc-4468-80bb-a54b9cc83c01-utilities" (OuterVolumeSpecName: "utilities") pod "ba326117-1dcc-4468-80bb-a54b9cc83c01" (UID: "ba326117-1dcc-4468-80bb-a54b9cc83c01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.435367 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba326117-1dcc-4468-80bb-a54b9cc83c01-kube-api-access-jwg4j" (OuterVolumeSpecName: "kube-api-access-jwg4j") pod "ba326117-1dcc-4468-80bb-a54b9cc83c01" (UID: "ba326117-1dcc-4468-80bb-a54b9cc83c01"). InnerVolumeSpecName "kube-api-access-jwg4j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.496568 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba326117-1dcc-4468-80bb-a54b9cc83c01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ba326117-1dcc-4468-80bb-a54b9cc83c01" (UID: "ba326117-1dcc-4468-80bb-a54b9cc83c01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.501766 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-cv74d"] Jan 05 20:08:54 crc kubenswrapper[4754]: E0105 20:08:54.501982 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba326117-1dcc-4468-80bb-a54b9cc83c01" containerName="extract-content" Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.502010 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba326117-1dcc-4468-80bb-a54b9cc83c01" containerName="extract-content" Jan 05 20:08:54 crc kubenswrapper[4754]: E0105 20:08:54.502025 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9653b33c-3e18-4d7f-81ed-febff4a00a35" containerName="extract-utilities" Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.502034 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="9653b33c-3e18-4d7f-81ed-febff4a00a35" containerName="extract-utilities" Jan 05 20:08:54 crc kubenswrapper[4754]: E0105 20:08:54.502049 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9653b33c-3e18-4d7f-81ed-febff4a00a35" containerName="registry-server" Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.502056 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="9653b33c-3e18-4d7f-81ed-febff4a00a35" containerName="registry-server" Jan 05 20:08:54 crc kubenswrapper[4754]: E0105 20:08:54.502071 4754 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="ba326117-1dcc-4468-80bb-a54b9cc83c01" containerName="registry-server" Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.502076 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba326117-1dcc-4468-80bb-a54b9cc83c01" containerName="registry-server" Jan 05 20:08:54 crc kubenswrapper[4754]: E0105 20:08:54.502088 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba326117-1dcc-4468-80bb-a54b9cc83c01" containerName="extract-utilities" Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.502095 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba326117-1dcc-4468-80bb-a54b9cc83c01" containerName="extract-utilities" Jan 05 20:08:54 crc kubenswrapper[4754]: E0105 20:08:54.502107 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9653b33c-3e18-4d7f-81ed-febff4a00a35" containerName="extract-content" Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.502114 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="9653b33c-3e18-4d7f-81ed-febff4a00a35" containerName="extract-content" Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.502223 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="9653b33c-3e18-4d7f-81ed-febff4a00a35" containerName="registry-server" Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.502235 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba326117-1dcc-4468-80bb-a54b9cc83c01" containerName="registry-server" Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.502583 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-cv74d" Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.514271 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-cv74d"] Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.529957 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba326117-1dcc-4468-80bb-a54b9cc83c01-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.529986 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwg4j\" (UniqueName: \"kubernetes.io/projected/ba326117-1dcc-4468-80bb-a54b9cc83c01-kube-api-access-jwg4j\") on node \"crc\" DevicePath \"\"" Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.529997 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba326117-1dcc-4468-80bb-a54b9cc83c01-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.630692 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-cv74d\" (UID: \"3db77f8c-170e-42a1-a6a5-ae5188f0c8d2\") " pod="openshift-image-registry/image-registry-66df7c8f76-cv74d" Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.630752 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3db77f8c-170e-42a1-a6a5-ae5188f0c8d2-ca-trust-extracted\") pod \"image-registry-66df7c8f76-cv74d\" (UID: \"3db77f8c-170e-42a1-a6a5-ae5188f0c8d2\") " pod="openshift-image-registry/image-registry-66df7c8f76-cv74d" Jan 05 20:08:54 crc 
kubenswrapper[4754]: I0105 20:08:54.630781 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3db77f8c-170e-42a1-a6a5-ae5188f0c8d2-bound-sa-token\") pod \"image-registry-66df7c8f76-cv74d\" (UID: \"3db77f8c-170e-42a1-a6a5-ae5188f0c8d2\") " pod="openshift-image-registry/image-registry-66df7c8f76-cv74d" Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.630823 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3db77f8c-170e-42a1-a6a5-ae5188f0c8d2-registry-certificates\") pod \"image-registry-66df7c8f76-cv74d\" (UID: \"3db77f8c-170e-42a1-a6a5-ae5188f0c8d2\") " pod="openshift-image-registry/image-registry-66df7c8f76-cv74d" Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.630849 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3db77f8c-170e-42a1-a6a5-ae5188f0c8d2-trusted-ca\") pod \"image-registry-66df7c8f76-cv74d\" (UID: \"3db77f8c-170e-42a1-a6a5-ae5188f0c8d2\") " pod="openshift-image-registry/image-registry-66df7c8f76-cv74d" Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.630875 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3db77f8c-170e-42a1-a6a5-ae5188f0c8d2-installation-pull-secrets\") pod \"image-registry-66df7c8f76-cv74d\" (UID: \"3db77f8c-170e-42a1-a6a5-ae5188f0c8d2\") " pod="openshift-image-registry/image-registry-66df7c8f76-cv74d" Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.630893 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdtqg\" (UniqueName: 
\"kubernetes.io/projected/3db77f8c-170e-42a1-a6a5-ae5188f0c8d2-kube-api-access-qdtqg\") pod \"image-registry-66df7c8f76-cv74d\" (UID: \"3db77f8c-170e-42a1-a6a5-ae5188f0c8d2\") " pod="openshift-image-registry/image-registry-66df7c8f76-cv74d" Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.631068 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3db77f8c-170e-42a1-a6a5-ae5188f0c8d2-registry-tls\") pod \"image-registry-66df7c8f76-cv74d\" (UID: \"3db77f8c-170e-42a1-a6a5-ae5188f0c8d2\") " pod="openshift-image-registry/image-registry-66df7c8f76-cv74d" Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.656908 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-cv74d\" (UID: \"3db77f8c-170e-42a1-a6a5-ae5188f0c8d2\") " pod="openshift-image-registry/image-registry-66df7c8f76-cv74d" Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.732320 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3db77f8c-170e-42a1-a6a5-ae5188f0c8d2-trusted-ca\") pod \"image-registry-66df7c8f76-cv74d\" (UID: \"3db77f8c-170e-42a1-a6a5-ae5188f0c8d2\") " pod="openshift-image-registry/image-registry-66df7c8f76-cv74d" Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.732383 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3db77f8c-170e-42a1-a6a5-ae5188f0c8d2-installation-pull-secrets\") pod \"image-registry-66df7c8f76-cv74d\" (UID: \"3db77f8c-170e-42a1-a6a5-ae5188f0c8d2\") " pod="openshift-image-registry/image-registry-66df7c8f76-cv74d" Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 
20:08:54.732402 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdtqg\" (UniqueName: \"kubernetes.io/projected/3db77f8c-170e-42a1-a6a5-ae5188f0c8d2-kube-api-access-qdtqg\") pod \"image-registry-66df7c8f76-cv74d\" (UID: \"3db77f8c-170e-42a1-a6a5-ae5188f0c8d2\") " pod="openshift-image-registry/image-registry-66df7c8f76-cv74d" Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.732427 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3db77f8c-170e-42a1-a6a5-ae5188f0c8d2-registry-tls\") pod \"image-registry-66df7c8f76-cv74d\" (UID: \"3db77f8c-170e-42a1-a6a5-ae5188f0c8d2\") " pod="openshift-image-registry/image-registry-66df7c8f76-cv74d" Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.732474 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3db77f8c-170e-42a1-a6a5-ae5188f0c8d2-ca-trust-extracted\") pod \"image-registry-66df7c8f76-cv74d\" (UID: \"3db77f8c-170e-42a1-a6a5-ae5188f0c8d2\") " pod="openshift-image-registry/image-registry-66df7c8f76-cv74d" Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.732492 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3db77f8c-170e-42a1-a6a5-ae5188f0c8d2-bound-sa-token\") pod \"image-registry-66df7c8f76-cv74d\" (UID: \"3db77f8c-170e-42a1-a6a5-ae5188f0c8d2\") " pod="openshift-image-registry/image-registry-66df7c8f76-cv74d" Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.732552 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3db77f8c-170e-42a1-a6a5-ae5188f0c8d2-registry-certificates\") pod \"image-registry-66df7c8f76-cv74d\" (UID: \"3db77f8c-170e-42a1-a6a5-ae5188f0c8d2\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-cv74d" Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.733075 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3db77f8c-170e-42a1-a6a5-ae5188f0c8d2-ca-trust-extracted\") pod \"image-registry-66df7c8f76-cv74d\" (UID: \"3db77f8c-170e-42a1-a6a5-ae5188f0c8d2\") " pod="openshift-image-registry/image-registry-66df7c8f76-cv74d" Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.733942 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3db77f8c-170e-42a1-a6a5-ae5188f0c8d2-trusted-ca\") pod \"image-registry-66df7c8f76-cv74d\" (UID: \"3db77f8c-170e-42a1-a6a5-ae5188f0c8d2\") " pod="openshift-image-registry/image-registry-66df7c8f76-cv74d" Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.733949 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3db77f8c-170e-42a1-a6a5-ae5188f0c8d2-registry-certificates\") pod \"image-registry-66df7c8f76-cv74d\" (UID: \"3db77f8c-170e-42a1-a6a5-ae5188f0c8d2\") " pod="openshift-image-registry/image-registry-66df7c8f76-cv74d" Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.736268 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3db77f8c-170e-42a1-a6a5-ae5188f0c8d2-installation-pull-secrets\") pod \"image-registry-66df7c8f76-cv74d\" (UID: \"3db77f8c-170e-42a1-a6a5-ae5188f0c8d2\") " pod="openshift-image-registry/image-registry-66df7c8f76-cv74d" Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.736643 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3db77f8c-170e-42a1-a6a5-ae5188f0c8d2-registry-tls\") pod \"image-registry-66df7c8f76-cv74d\" (UID: 
\"3db77f8c-170e-42a1-a6a5-ae5188f0c8d2\") " pod="openshift-image-registry/image-registry-66df7c8f76-cv74d" Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.750872 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3db77f8c-170e-42a1-a6a5-ae5188f0c8d2-bound-sa-token\") pod \"image-registry-66df7c8f76-cv74d\" (UID: \"3db77f8c-170e-42a1-a6a5-ae5188f0c8d2\") " pod="openshift-image-registry/image-registry-66df7c8f76-cv74d" Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.759075 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdtqg\" (UniqueName: \"kubernetes.io/projected/3db77f8c-170e-42a1-a6a5-ae5188f0c8d2-kube-api-access-qdtqg\") pod \"image-registry-66df7c8f76-cv74d\" (UID: \"3db77f8c-170e-42a1-a6a5-ae5188f0c8d2\") " pod="openshift-image-registry/image-registry-66df7c8f76-cv74d" Jan 05 20:08:54 crc kubenswrapper[4754]: I0105 20:08:54.823775 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-cv74d" Jan 05 20:08:55 crc kubenswrapper[4754]: I0105 20:08:55.091250 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xm8jv" event={"ID":"ba326117-1dcc-4468-80bb-a54b9cc83c01","Type":"ContainerDied","Data":"a2b600ad7e902a5a62d1c4b2f3e50b801a2eb1021d957a73a2759953ef249690"} Jan 05 20:08:55 crc kubenswrapper[4754]: I0105 20:08:55.091514 4754 scope.go:117] "RemoveContainer" containerID="c529683b78117396fd1b24fe5d7425331d40f28b4cc8473a906634c62eabacaf" Jan 05 20:08:55 crc kubenswrapper[4754]: I0105 20:08:55.091328 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xm8jv" Jan 05 20:08:55 crc kubenswrapper[4754]: I0105 20:08:55.093701 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4zc9h" event={"ID":"9653b33c-3e18-4d7f-81ed-febff4a00a35","Type":"ContainerDied","Data":"283542987a1d067b8ae6231b4401d6384a4a94c1df884164e15729007c4f6dae"} Jan 05 20:08:55 crc kubenswrapper[4754]: I0105 20:08:55.093776 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4zc9h" Jan 05 20:08:55 crc kubenswrapper[4754]: I0105 20:08:55.107740 4754 scope.go:117] "RemoveContainer" containerID="68754d24d7aa8791d398b0a33e3fafa095e1bf742e62dae8f66b427ba4b6955c" Jan 05 20:08:55 crc kubenswrapper[4754]: I0105 20:08:55.121591 4754 scope.go:117] "RemoveContainer" containerID="666799136ac8acdd79d8df3a956e3b44b69494a9c01f1c44097dc5a582ca0d36" Jan 05 20:08:55 crc kubenswrapper[4754]: I0105 20:08:55.132670 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xm8jv"] Jan 05 20:08:55 crc kubenswrapper[4754]: I0105 20:08:55.135955 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xm8jv"] Jan 05 20:08:55 crc kubenswrapper[4754]: I0105 20:08:55.146928 4754 scope.go:117] "RemoveContainer" containerID="cf581f438214eb5f36800db1bd5ea29d362b643bfdacc5977eaf2617d7d4bd38" Jan 05 20:08:55 crc kubenswrapper[4754]: I0105 20:08:55.147004 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4zc9h"] Jan 05 20:08:55 crc kubenswrapper[4754]: I0105 20:08:55.153277 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4zc9h"] Jan 05 20:08:55 crc kubenswrapper[4754]: I0105 20:08:55.163452 4754 scope.go:117] "RemoveContainer" 
containerID="dddcac1861aa8081d01dae8f264d8effd866804a0c12d14f499918ec510f8c8c" Jan 05 20:08:55 crc kubenswrapper[4754]: I0105 20:08:55.183257 4754 scope.go:117] "RemoveContainer" containerID="b3b58ba5579014bf4b7259076c99c989cc2a7a5af25685ffac3c218e68c0a2cc" Jan 05 20:08:55 crc kubenswrapper[4754]: I0105 20:08:55.243948 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-cv74d"] Jan 05 20:08:55 crc kubenswrapper[4754]: W0105 20:08:55.248321 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3db77f8c_170e_42a1_a6a5_ae5188f0c8d2.slice/crio-d978ffaf6d078239a9f9a5f5dcf5a58c272a2a475749e475a5c3b8dbbea07258 WatchSource:0}: Error finding container d978ffaf6d078239a9f9a5f5dcf5a58c272a2a475749e475a5c3b8dbbea07258: Status 404 returned error can't find the container with id d978ffaf6d078239a9f9a5f5dcf5a58c272a2a475749e475a5c3b8dbbea07258 Jan 05 20:08:55 crc kubenswrapper[4754]: I0105 20:08:55.597946 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9653b33c-3e18-4d7f-81ed-febff4a00a35" path="/var/lib/kubelet/pods/9653b33c-3e18-4d7f-81ed-febff4a00a35/volumes" Jan 05 20:08:55 crc kubenswrapper[4754]: I0105 20:08:55.598994 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba326117-1dcc-4468-80bb-a54b9cc83c01" path="/var/lib/kubelet/pods/ba326117-1dcc-4468-80bb-a54b9cc83c01/volumes" Jan 05 20:08:56 crc kubenswrapper[4754]: I0105 20:08:56.071861 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mq9f8"] Jan 05 20:08:56 crc kubenswrapper[4754]: I0105 20:08:56.072584 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mq9f8" podUID="e75e6f92-c2e3-4a65-be91-5921e2426aaf" containerName="registry-server" 
containerID="cri-o://8df57d2449b1a550a8b332137789310e5a8f018c0c798a6388984d04a7a494f4" gracePeriod=2 Jan 05 20:08:56 crc kubenswrapper[4754]: I0105 20:08:56.102701 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-cv74d" event={"ID":"3db77f8c-170e-42a1-a6a5-ae5188f0c8d2","Type":"ContainerStarted","Data":"0f6f8f79a88c0ab2a495b45709a67b9860460fa0241351b86c20200cbbe46988"} Jan 05 20:08:56 crc kubenswrapper[4754]: I0105 20:08:56.102771 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-cv74d" event={"ID":"3db77f8c-170e-42a1-a6a5-ae5188f0c8d2","Type":"ContainerStarted","Data":"d978ffaf6d078239a9f9a5f5dcf5a58c272a2a475749e475a5c3b8dbbea07258"} Jan 05 20:08:56 crc kubenswrapper[4754]: I0105 20:08:56.104065 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-cv74d" Jan 05 20:08:56 crc kubenswrapper[4754]: I0105 20:08:56.127912 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-cv74d" podStartSLOduration=2.127897237 podStartE2EDuration="2.127897237s" podCreationTimestamp="2026-01-05 20:08:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:08:56.126080785 +0000 UTC m=+222.835264669" watchObservedRunningTime="2026-01-05 20:08:56.127897237 +0000 UTC m=+222.837081111" Jan 05 20:08:56 crc kubenswrapper[4754]: I0105 20:08:56.519015 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mq9f8" Jan 05 20:08:56 crc kubenswrapper[4754]: I0105 20:08:56.659892 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e75e6f92-c2e3-4a65-be91-5921e2426aaf-catalog-content\") pod \"e75e6f92-c2e3-4a65-be91-5921e2426aaf\" (UID: \"e75e6f92-c2e3-4a65-be91-5921e2426aaf\") " Jan 05 20:08:56 crc kubenswrapper[4754]: I0105 20:08:56.660463 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svd9d\" (UniqueName: \"kubernetes.io/projected/e75e6f92-c2e3-4a65-be91-5921e2426aaf-kube-api-access-svd9d\") pod \"e75e6f92-c2e3-4a65-be91-5921e2426aaf\" (UID: \"e75e6f92-c2e3-4a65-be91-5921e2426aaf\") " Jan 05 20:08:56 crc kubenswrapper[4754]: I0105 20:08:56.660610 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e75e6f92-c2e3-4a65-be91-5921e2426aaf-utilities\") pod \"e75e6f92-c2e3-4a65-be91-5921e2426aaf\" (UID: \"e75e6f92-c2e3-4a65-be91-5921e2426aaf\") " Jan 05 20:08:56 crc kubenswrapper[4754]: I0105 20:08:56.662544 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e75e6f92-c2e3-4a65-be91-5921e2426aaf-utilities" (OuterVolumeSpecName: "utilities") pod "e75e6f92-c2e3-4a65-be91-5921e2426aaf" (UID: "e75e6f92-c2e3-4a65-be91-5921e2426aaf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:08:56 crc kubenswrapper[4754]: I0105 20:08:56.666667 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e75e6f92-c2e3-4a65-be91-5921e2426aaf-kube-api-access-svd9d" (OuterVolumeSpecName: "kube-api-access-svd9d") pod "e75e6f92-c2e3-4a65-be91-5921e2426aaf" (UID: "e75e6f92-c2e3-4a65-be91-5921e2426aaf"). InnerVolumeSpecName "kube-api-access-svd9d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:08:56 crc kubenswrapper[4754]: I0105 20:08:56.690819 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e75e6f92-c2e3-4a65-be91-5921e2426aaf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e75e6f92-c2e3-4a65-be91-5921e2426aaf" (UID: "e75e6f92-c2e3-4a65-be91-5921e2426aaf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:08:56 crc kubenswrapper[4754]: I0105 20:08:56.771679 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e75e6f92-c2e3-4a65-be91-5921e2426aaf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 20:08:56 crc kubenswrapper[4754]: I0105 20:08:56.771720 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svd9d\" (UniqueName: \"kubernetes.io/projected/e75e6f92-c2e3-4a65-be91-5921e2426aaf-kube-api-access-svd9d\") on node \"crc\" DevicePath \"\"" Jan 05 20:08:56 crc kubenswrapper[4754]: I0105 20:08:56.771736 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e75e6f92-c2e3-4a65-be91-5921e2426aaf-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 20:08:57 crc kubenswrapper[4754]: I0105 20:08:57.116894 4754 generic.go:334] "Generic (PLEG): container finished" podID="e75e6f92-c2e3-4a65-be91-5921e2426aaf" containerID="8df57d2449b1a550a8b332137789310e5a8f018c0c798a6388984d04a7a494f4" exitCode=0 Jan 05 20:08:57 crc kubenswrapper[4754]: I0105 20:08:57.117049 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mq9f8" Jan 05 20:08:57 crc kubenswrapper[4754]: I0105 20:08:57.117837 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mq9f8" event={"ID":"e75e6f92-c2e3-4a65-be91-5921e2426aaf","Type":"ContainerDied","Data":"8df57d2449b1a550a8b332137789310e5a8f018c0c798a6388984d04a7a494f4"} Jan 05 20:08:57 crc kubenswrapper[4754]: I0105 20:08:57.117876 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mq9f8" event={"ID":"e75e6f92-c2e3-4a65-be91-5921e2426aaf","Type":"ContainerDied","Data":"e5e796d041916bcdc21e3cb99927a2147b77b19312ea5ef4844916151ca97bb7"} Jan 05 20:08:57 crc kubenswrapper[4754]: I0105 20:08:57.117897 4754 scope.go:117] "RemoveContainer" containerID="8df57d2449b1a550a8b332137789310e5a8f018c0c798a6388984d04a7a494f4" Jan 05 20:08:57 crc kubenswrapper[4754]: I0105 20:08:57.144678 4754 scope.go:117] "RemoveContainer" containerID="8ecab5308ffec3a62da57a0e91cb0b6e0f8fcff6b8a553782880adafc877167f" Jan 05 20:08:57 crc kubenswrapper[4754]: I0105 20:08:57.165509 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mq9f8"] Jan 05 20:08:57 crc kubenswrapper[4754]: I0105 20:08:57.171498 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mq9f8"] Jan 05 20:08:57 crc kubenswrapper[4754]: I0105 20:08:57.181673 4754 scope.go:117] "RemoveContainer" containerID="81caa9c596183b8beb1bf447d0bf9682ac8cd9be6238e76170f8e8a4f80e9a86" Jan 05 20:08:57 crc kubenswrapper[4754]: I0105 20:08:57.198633 4754 scope.go:117] "RemoveContainer" containerID="8df57d2449b1a550a8b332137789310e5a8f018c0c798a6388984d04a7a494f4" Jan 05 20:08:57 crc kubenswrapper[4754]: E0105 20:08:57.198990 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"8df57d2449b1a550a8b332137789310e5a8f018c0c798a6388984d04a7a494f4\": container with ID starting with 8df57d2449b1a550a8b332137789310e5a8f018c0c798a6388984d04a7a494f4 not found: ID does not exist" containerID="8df57d2449b1a550a8b332137789310e5a8f018c0c798a6388984d04a7a494f4" Jan 05 20:08:57 crc kubenswrapper[4754]: I0105 20:08:57.199025 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8df57d2449b1a550a8b332137789310e5a8f018c0c798a6388984d04a7a494f4"} err="failed to get container status \"8df57d2449b1a550a8b332137789310e5a8f018c0c798a6388984d04a7a494f4\": rpc error: code = NotFound desc = could not find container \"8df57d2449b1a550a8b332137789310e5a8f018c0c798a6388984d04a7a494f4\": container with ID starting with 8df57d2449b1a550a8b332137789310e5a8f018c0c798a6388984d04a7a494f4 not found: ID does not exist" Jan 05 20:08:57 crc kubenswrapper[4754]: I0105 20:08:57.199051 4754 scope.go:117] "RemoveContainer" containerID="8ecab5308ffec3a62da57a0e91cb0b6e0f8fcff6b8a553782880adafc877167f" Jan 05 20:08:57 crc kubenswrapper[4754]: E0105 20:08:57.199377 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ecab5308ffec3a62da57a0e91cb0b6e0f8fcff6b8a553782880adafc877167f\": container with ID starting with 8ecab5308ffec3a62da57a0e91cb0b6e0f8fcff6b8a553782880adafc877167f not found: ID does not exist" containerID="8ecab5308ffec3a62da57a0e91cb0b6e0f8fcff6b8a553782880adafc877167f" Jan 05 20:08:57 crc kubenswrapper[4754]: I0105 20:08:57.199406 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ecab5308ffec3a62da57a0e91cb0b6e0f8fcff6b8a553782880adafc877167f"} err="failed to get container status \"8ecab5308ffec3a62da57a0e91cb0b6e0f8fcff6b8a553782880adafc877167f\": rpc error: code = NotFound desc = could not find container \"8ecab5308ffec3a62da57a0e91cb0b6e0f8fcff6b8a553782880adafc877167f\": container with ID 
starting with 8ecab5308ffec3a62da57a0e91cb0b6e0f8fcff6b8a553782880adafc877167f not found: ID does not exist" Jan 05 20:08:57 crc kubenswrapper[4754]: I0105 20:08:57.199424 4754 scope.go:117] "RemoveContainer" containerID="81caa9c596183b8beb1bf447d0bf9682ac8cd9be6238e76170f8e8a4f80e9a86" Jan 05 20:08:57 crc kubenswrapper[4754]: E0105 20:08:57.199676 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81caa9c596183b8beb1bf447d0bf9682ac8cd9be6238e76170f8e8a4f80e9a86\": container with ID starting with 81caa9c596183b8beb1bf447d0bf9682ac8cd9be6238e76170f8e8a4f80e9a86 not found: ID does not exist" containerID="81caa9c596183b8beb1bf447d0bf9682ac8cd9be6238e76170f8e8a4f80e9a86" Jan 05 20:08:57 crc kubenswrapper[4754]: I0105 20:08:57.199704 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81caa9c596183b8beb1bf447d0bf9682ac8cd9be6238e76170f8e8a4f80e9a86"} err="failed to get container status \"81caa9c596183b8beb1bf447d0bf9682ac8cd9be6238e76170f8e8a4f80e9a86\": rpc error: code = NotFound desc = could not find container \"81caa9c596183b8beb1bf447d0bf9682ac8cd9be6238e76170f8e8a4f80e9a86\": container with ID starting with 81caa9c596183b8beb1bf447d0bf9682ac8cd9be6238e76170f8e8a4f80e9a86 not found: ID does not exist" Jan 05 20:08:57 crc kubenswrapper[4754]: I0105 20:08:57.600209 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e75e6f92-c2e3-4a65-be91-5921e2426aaf" path="/var/lib/kubelet/pods/e75e6f92-c2e3-4a65-be91-5921e2426aaf/volumes" Jan 05 20:09:14 crc kubenswrapper[4754]: I0105 20:09:14.831730 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-cv74d" Jan 05 20:09:14 crc kubenswrapper[4754]: I0105 20:09:14.912256 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-q7tmr"] Jan 05 20:09:39 crc 
kubenswrapper[4754]: I0105 20:09:39.968026 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" podUID="cde779ab-81b0-4dc7-a6a4-82db63d46577" containerName="registry" containerID="cri-o://3c4dafa5e00b3ba80c0ebb7147f5f11d3af47514a922ff374fdc9f95bf8eb427" gracePeriod=30 Jan 05 20:09:40 crc kubenswrapper[4754]: I0105 20:09:40.352501 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:09:40 crc kubenswrapper[4754]: I0105 20:09:40.385969 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cde779ab-81b0-4dc7-a6a4-82db63d46577-registry-certificates\") pod \"cde779ab-81b0-4dc7-a6a4-82db63d46577\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " Jan 05 20:09:40 crc kubenswrapper[4754]: I0105 20:09:40.386245 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"cde779ab-81b0-4dc7-a6a4-82db63d46577\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " Jan 05 20:09:40 crc kubenswrapper[4754]: I0105 20:09:40.386326 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cde779ab-81b0-4dc7-a6a4-82db63d46577-registry-tls\") pod \"cde779ab-81b0-4dc7-a6a4-82db63d46577\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " Jan 05 20:09:40 crc kubenswrapper[4754]: I0105 20:09:40.386358 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cde779ab-81b0-4dc7-a6a4-82db63d46577-installation-pull-secrets\") pod \"cde779ab-81b0-4dc7-a6a4-82db63d46577\" (UID: 
\"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " Jan 05 20:09:40 crc kubenswrapper[4754]: I0105 20:09:40.386431 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cde779ab-81b0-4dc7-a6a4-82db63d46577-trusted-ca\") pod \"cde779ab-81b0-4dc7-a6a4-82db63d46577\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " Jan 05 20:09:40 crc kubenswrapper[4754]: I0105 20:09:40.386468 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cde779ab-81b0-4dc7-a6a4-82db63d46577-bound-sa-token\") pod \"cde779ab-81b0-4dc7-a6a4-82db63d46577\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " Jan 05 20:09:40 crc kubenswrapper[4754]: I0105 20:09:40.386507 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cde779ab-81b0-4dc7-a6a4-82db63d46577-ca-trust-extracted\") pod \"cde779ab-81b0-4dc7-a6a4-82db63d46577\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " Jan 05 20:09:40 crc kubenswrapper[4754]: I0105 20:09:40.386535 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brsmv\" (UniqueName: \"kubernetes.io/projected/cde779ab-81b0-4dc7-a6a4-82db63d46577-kube-api-access-brsmv\") pod \"cde779ab-81b0-4dc7-a6a4-82db63d46577\" (UID: \"cde779ab-81b0-4dc7-a6a4-82db63d46577\") " Jan 05 20:09:40 crc kubenswrapper[4754]: I0105 20:09:40.386978 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cde779ab-81b0-4dc7-a6a4-82db63d46577-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "cde779ab-81b0-4dc7-a6a4-82db63d46577" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:09:40 crc kubenswrapper[4754]: I0105 20:09:40.387119 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cde779ab-81b0-4dc7-a6a4-82db63d46577-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "cde779ab-81b0-4dc7-a6a4-82db63d46577" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:09:40 crc kubenswrapper[4754]: I0105 20:09:40.401718 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cde779ab-81b0-4dc7-a6a4-82db63d46577-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "cde779ab-81b0-4dc7-a6a4-82db63d46577" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:09:40 crc kubenswrapper[4754]: I0105 20:09:40.406671 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cde779ab-81b0-4dc7-a6a4-82db63d46577-kube-api-access-brsmv" (OuterVolumeSpecName: "kube-api-access-brsmv") pod "cde779ab-81b0-4dc7-a6a4-82db63d46577" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577"). InnerVolumeSpecName "kube-api-access-brsmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:09:40 crc kubenswrapper[4754]: I0105 20:09:40.408453 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cde779ab-81b0-4dc7-a6a4-82db63d46577-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "cde779ab-81b0-4dc7-a6a4-82db63d46577" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:09:40 crc kubenswrapper[4754]: I0105 20:09:40.410229 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cde779ab-81b0-4dc7-a6a4-82db63d46577-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "cde779ab-81b0-4dc7-a6a4-82db63d46577" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:09:40 crc kubenswrapper[4754]: I0105 20:09:40.410559 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cde779ab-81b0-4dc7-a6a4-82db63d46577-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "cde779ab-81b0-4dc7-a6a4-82db63d46577" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:09:40 crc kubenswrapper[4754]: I0105 20:09:40.410916 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "cde779ab-81b0-4dc7-a6a4-82db63d46577" (UID: "cde779ab-81b0-4dc7-a6a4-82db63d46577"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 05 20:09:40 crc kubenswrapper[4754]: I0105 20:09:40.424035 4754 generic.go:334] "Generic (PLEG): container finished" podID="cde779ab-81b0-4dc7-a6a4-82db63d46577" containerID="3c4dafa5e00b3ba80c0ebb7147f5f11d3af47514a922ff374fdc9f95bf8eb427" exitCode=0 Jan 05 20:09:40 crc kubenswrapper[4754]: I0105 20:09:40.424082 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" event={"ID":"cde779ab-81b0-4dc7-a6a4-82db63d46577","Type":"ContainerDied","Data":"3c4dafa5e00b3ba80c0ebb7147f5f11d3af47514a922ff374fdc9f95bf8eb427"} Jan 05 20:09:40 crc kubenswrapper[4754]: I0105 20:09:40.424113 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" event={"ID":"cde779ab-81b0-4dc7-a6a4-82db63d46577","Type":"ContainerDied","Data":"a821a565894be20e851a644d1d18f23b1d322c4e175d04045ce7b6e46ca1586d"} Jan 05 20:09:40 crc kubenswrapper[4754]: I0105 20:09:40.424131 4754 scope.go:117] "RemoveContainer" containerID="3c4dafa5e00b3ba80c0ebb7147f5f11d3af47514a922ff374fdc9f95bf8eb427" Jan 05 20:09:40 crc kubenswrapper[4754]: I0105 20:09:40.424243 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-q7tmr" Jan 05 20:09:40 crc kubenswrapper[4754]: I0105 20:09:40.449916 4754 scope.go:117] "RemoveContainer" containerID="3c4dafa5e00b3ba80c0ebb7147f5f11d3af47514a922ff374fdc9f95bf8eb427" Jan 05 20:09:40 crc kubenswrapper[4754]: E0105 20:09:40.450968 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c4dafa5e00b3ba80c0ebb7147f5f11d3af47514a922ff374fdc9f95bf8eb427\": container with ID starting with 3c4dafa5e00b3ba80c0ebb7147f5f11d3af47514a922ff374fdc9f95bf8eb427 not found: ID does not exist" containerID="3c4dafa5e00b3ba80c0ebb7147f5f11d3af47514a922ff374fdc9f95bf8eb427" Jan 05 20:09:40 crc kubenswrapper[4754]: I0105 20:09:40.451043 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c4dafa5e00b3ba80c0ebb7147f5f11d3af47514a922ff374fdc9f95bf8eb427"} err="failed to get container status \"3c4dafa5e00b3ba80c0ebb7147f5f11d3af47514a922ff374fdc9f95bf8eb427\": rpc error: code = NotFound desc = could not find container \"3c4dafa5e00b3ba80c0ebb7147f5f11d3af47514a922ff374fdc9f95bf8eb427\": container with ID starting with 3c4dafa5e00b3ba80c0ebb7147f5f11d3af47514a922ff374fdc9f95bf8eb427 not found: ID does not exist" Jan 05 20:09:40 crc kubenswrapper[4754]: I0105 20:09:40.460999 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-q7tmr"] Jan 05 20:09:40 crc kubenswrapper[4754]: I0105 20:09:40.465450 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-q7tmr"] Jan 05 20:09:40 crc kubenswrapper[4754]: I0105 20:09:40.488373 4754 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cde779ab-81b0-4dc7-a6a4-82db63d46577-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 05 20:09:40 crc 
kubenswrapper[4754]: I0105 20:09:40.488425 4754 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cde779ab-81b0-4dc7-a6a4-82db63d46577-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 05 20:09:40 crc kubenswrapper[4754]: I0105 20:09:40.488439 4754 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cde779ab-81b0-4dc7-a6a4-82db63d46577-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 05 20:09:40 crc kubenswrapper[4754]: I0105 20:09:40.488451 4754 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cde779ab-81b0-4dc7-a6a4-82db63d46577-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 05 20:09:40 crc kubenswrapper[4754]: I0105 20:09:40.488462 4754 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cde779ab-81b0-4dc7-a6a4-82db63d46577-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 05 20:09:40 crc kubenswrapper[4754]: I0105 20:09:40.488472 4754 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cde779ab-81b0-4dc7-a6a4-82db63d46577-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 05 20:09:40 crc kubenswrapper[4754]: I0105 20:09:40.488483 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brsmv\" (UniqueName: \"kubernetes.io/projected/cde779ab-81b0-4dc7-a6a4-82db63d46577-kube-api-access-brsmv\") on node \"crc\" DevicePath \"\"" Jan 05 20:09:41 crc kubenswrapper[4754]: I0105 20:09:41.601719 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cde779ab-81b0-4dc7-a6a4-82db63d46577" path="/var/lib/kubelet/pods/cde779ab-81b0-4dc7-a6a4-82db63d46577/volumes" Jan 05 20:09:44 crc kubenswrapper[4754]: I0105 20:09:44.034704 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-5947857f89-7clr6"] Jan 05 20:09:44 crc kubenswrapper[4754]: I0105 20:09:44.035208 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5947857f89-7clr6" podUID="f6729d35-8afb-467a-8b7e-f0c3a5f51648" containerName="controller-manager" containerID="cri-o://5828f48ab42e6e3e0de4996affab80454fb28118e5ae035d66b5b71cca7c3a8f" gracePeriod=30 Jan 05 20:09:44 crc kubenswrapper[4754]: I0105 20:09:44.075513 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59b55d474f-z77ks"] Jan 05 20:09:44 crc kubenswrapper[4754]: I0105 20:09:44.075812 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-59b55d474f-z77ks" podUID="5e3066e3-5277-4bc9-b96b-360fdd4f696f" containerName="route-controller-manager" containerID="cri-o://1cab8c42f2612c63ea1efa3d1967edaa6f7d85d852b27923e978796154b74504" gracePeriod=30 Jan 05 20:09:44 crc kubenswrapper[4754]: I0105 20:09:44.466231 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5947857f89-7clr6" Jan 05 20:09:44 crc kubenswrapper[4754]: I0105 20:09:44.467032 4754 generic.go:334] "Generic (PLEG): container finished" podID="5e3066e3-5277-4bc9-b96b-360fdd4f696f" containerID="1cab8c42f2612c63ea1efa3d1967edaa6f7d85d852b27923e978796154b74504" exitCode=0 Jan 05 20:09:44 crc kubenswrapper[4754]: I0105 20:09:44.467099 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59b55d474f-z77ks" event={"ID":"5e3066e3-5277-4bc9-b96b-360fdd4f696f","Type":"ContainerDied","Data":"1cab8c42f2612c63ea1efa3d1967edaa6f7d85d852b27923e978796154b74504"} Jan 05 20:09:44 crc kubenswrapper[4754]: I0105 20:09:44.470933 4754 generic.go:334] "Generic (PLEG): container finished" podID="f6729d35-8afb-467a-8b7e-f0c3a5f51648" containerID="5828f48ab42e6e3e0de4996affab80454fb28118e5ae035d66b5b71cca7c3a8f" exitCode=0 Jan 05 20:09:44 crc kubenswrapper[4754]: I0105 20:09:44.470975 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5947857f89-7clr6" event={"ID":"f6729d35-8afb-467a-8b7e-f0c3a5f51648","Type":"ContainerDied","Data":"5828f48ab42e6e3e0de4996affab80454fb28118e5ae035d66b5b71cca7c3a8f"} Jan 05 20:09:44 crc kubenswrapper[4754]: I0105 20:09:44.471001 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5947857f89-7clr6" event={"ID":"f6729d35-8afb-467a-8b7e-f0c3a5f51648","Type":"ContainerDied","Data":"1f9c168696e650a7ebf775e8e54a4e52037ed266c02b8351bba343dba4cf1c29"} Jan 05 20:09:44 crc kubenswrapper[4754]: I0105 20:09:44.471017 4754 scope.go:117] "RemoveContainer" containerID="5828f48ab42e6e3e0de4996affab80454fb28118e5ae035d66b5b71cca7c3a8f" Jan 05 20:09:44 crc kubenswrapper[4754]: I0105 20:09:44.470977 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5947857f89-7clr6" Jan 05 20:09:44 crc kubenswrapper[4754]: I0105 20:09:44.484904 4754 scope.go:117] "RemoveContainer" containerID="5828f48ab42e6e3e0de4996affab80454fb28118e5ae035d66b5b71cca7c3a8f" Jan 05 20:09:44 crc kubenswrapper[4754]: E0105 20:09:44.486251 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5828f48ab42e6e3e0de4996affab80454fb28118e5ae035d66b5b71cca7c3a8f\": container with ID starting with 5828f48ab42e6e3e0de4996affab80454fb28118e5ae035d66b5b71cca7c3a8f not found: ID does not exist" containerID="5828f48ab42e6e3e0de4996affab80454fb28118e5ae035d66b5b71cca7c3a8f" Jan 05 20:09:44 crc kubenswrapper[4754]: I0105 20:09:44.486280 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5828f48ab42e6e3e0de4996affab80454fb28118e5ae035d66b5b71cca7c3a8f"} err="failed to get container status \"5828f48ab42e6e3e0de4996affab80454fb28118e5ae035d66b5b71cca7c3a8f\": rpc error: code = NotFound desc = could not find container \"5828f48ab42e6e3e0de4996affab80454fb28118e5ae035d66b5b71cca7c3a8f\": container with ID starting with 5828f48ab42e6e3e0de4996affab80454fb28118e5ae035d66b5b71cca7c3a8f not found: ID does not exist" Jan 05 20:09:44 crc kubenswrapper[4754]: I0105 20:09:44.533424 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59b55d474f-z77ks" Jan 05 20:09:44 crc kubenswrapper[4754]: I0105 20:09:44.544666 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7vdv\" (UniqueName: \"kubernetes.io/projected/f6729d35-8afb-467a-8b7e-f0c3a5f51648-kube-api-access-b7vdv\") pod \"f6729d35-8afb-467a-8b7e-f0c3a5f51648\" (UID: \"f6729d35-8afb-467a-8b7e-f0c3a5f51648\") " Jan 05 20:09:44 crc kubenswrapper[4754]: I0105 20:09:44.544709 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6729d35-8afb-467a-8b7e-f0c3a5f51648-config\") pod \"f6729d35-8afb-467a-8b7e-f0c3a5f51648\" (UID: \"f6729d35-8afb-467a-8b7e-f0c3a5f51648\") " Jan 05 20:09:44 crc kubenswrapper[4754]: I0105 20:09:44.544770 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6729d35-8afb-467a-8b7e-f0c3a5f51648-serving-cert\") pod \"f6729d35-8afb-467a-8b7e-f0c3a5f51648\" (UID: \"f6729d35-8afb-467a-8b7e-f0c3a5f51648\") " Jan 05 20:09:44 crc kubenswrapper[4754]: I0105 20:09:44.544825 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6729d35-8afb-467a-8b7e-f0c3a5f51648-client-ca\") pod \"f6729d35-8afb-467a-8b7e-f0c3a5f51648\" (UID: \"f6729d35-8afb-467a-8b7e-f0c3a5f51648\") " Jan 05 20:09:44 crc kubenswrapper[4754]: I0105 20:09:44.544871 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6729d35-8afb-467a-8b7e-f0c3a5f51648-proxy-ca-bundles\") pod \"f6729d35-8afb-467a-8b7e-f0c3a5f51648\" (UID: \"f6729d35-8afb-467a-8b7e-f0c3a5f51648\") " Jan 05 20:09:44 crc kubenswrapper[4754]: I0105 20:09:44.546072 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/f6729d35-8afb-467a-8b7e-f0c3a5f51648-client-ca" (OuterVolumeSpecName: "client-ca") pod "f6729d35-8afb-467a-8b7e-f0c3a5f51648" (UID: "f6729d35-8afb-467a-8b7e-f0c3a5f51648"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:09:44 crc kubenswrapper[4754]: I0105 20:09:44.546098 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6729d35-8afb-467a-8b7e-f0c3a5f51648-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f6729d35-8afb-467a-8b7e-f0c3a5f51648" (UID: "f6729d35-8afb-467a-8b7e-f0c3a5f51648"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:09:44 crc kubenswrapper[4754]: I0105 20:09:44.546640 4754 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6729d35-8afb-467a-8b7e-f0c3a5f51648-client-ca\") on node \"crc\" DevicePath \"\"" Jan 05 20:09:44 crc kubenswrapper[4754]: I0105 20:09:44.546673 4754 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6729d35-8afb-467a-8b7e-f0c3a5f51648-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 05 20:09:44 crc kubenswrapper[4754]: I0105 20:09:44.547025 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6729d35-8afb-467a-8b7e-f0c3a5f51648-config" (OuterVolumeSpecName: "config") pod "f6729d35-8afb-467a-8b7e-f0c3a5f51648" (UID: "f6729d35-8afb-467a-8b7e-f0c3a5f51648"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:09:44 crc kubenswrapper[4754]: I0105 20:09:44.551134 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6729d35-8afb-467a-8b7e-f0c3a5f51648-kube-api-access-b7vdv" (OuterVolumeSpecName: "kube-api-access-b7vdv") pod "f6729d35-8afb-467a-8b7e-f0c3a5f51648" (UID: "f6729d35-8afb-467a-8b7e-f0c3a5f51648"). InnerVolumeSpecName "kube-api-access-b7vdv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:09:44 crc kubenswrapper[4754]: I0105 20:09:44.553582 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6729d35-8afb-467a-8b7e-f0c3a5f51648-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f6729d35-8afb-467a-8b7e-f0c3a5f51648" (UID: "f6729d35-8afb-467a-8b7e-f0c3a5f51648"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:09:44 crc kubenswrapper[4754]: I0105 20:09:44.647944 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e3066e3-5277-4bc9-b96b-360fdd4f696f-client-ca\") pod \"5e3066e3-5277-4bc9-b96b-360fdd4f696f\" (UID: \"5e3066e3-5277-4bc9-b96b-360fdd4f696f\") " Jan 05 20:09:44 crc kubenswrapper[4754]: I0105 20:09:44.648019 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e3066e3-5277-4bc9-b96b-360fdd4f696f-config\") pod \"5e3066e3-5277-4bc9-b96b-360fdd4f696f\" (UID: \"5e3066e3-5277-4bc9-b96b-360fdd4f696f\") " Jan 05 20:09:44 crc kubenswrapper[4754]: I0105 20:09:44.648069 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzz8t\" (UniqueName: \"kubernetes.io/projected/5e3066e3-5277-4bc9-b96b-360fdd4f696f-kube-api-access-rzz8t\") pod \"5e3066e3-5277-4bc9-b96b-360fdd4f696f\" (UID: \"5e3066e3-5277-4bc9-b96b-360fdd4f696f\") " 
Jan 05 20:09:44 crc kubenswrapper[4754]: I0105 20:09:44.648095 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e3066e3-5277-4bc9-b96b-360fdd4f696f-serving-cert\") pod \"5e3066e3-5277-4bc9-b96b-360fdd4f696f\" (UID: \"5e3066e3-5277-4bc9-b96b-360fdd4f696f\") " Jan 05 20:09:44 crc kubenswrapper[4754]: I0105 20:09:44.648409 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7vdv\" (UniqueName: \"kubernetes.io/projected/f6729d35-8afb-467a-8b7e-f0c3a5f51648-kube-api-access-b7vdv\") on node \"crc\" DevicePath \"\"" Jan 05 20:09:44 crc kubenswrapper[4754]: I0105 20:09:44.648428 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6729d35-8afb-467a-8b7e-f0c3a5f51648-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:09:44 crc kubenswrapper[4754]: I0105 20:09:44.648566 4754 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6729d35-8afb-467a-8b7e-f0c3a5f51648-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 20:09:44 crc kubenswrapper[4754]: I0105 20:09:44.648832 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e3066e3-5277-4bc9-b96b-360fdd4f696f-config" (OuterVolumeSpecName: "config") pod "5e3066e3-5277-4bc9-b96b-360fdd4f696f" (UID: "5e3066e3-5277-4bc9-b96b-360fdd4f696f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:09:44 crc kubenswrapper[4754]: I0105 20:09:44.649811 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e3066e3-5277-4bc9-b96b-360fdd4f696f-client-ca" (OuterVolumeSpecName: "client-ca") pod "5e3066e3-5277-4bc9-b96b-360fdd4f696f" (UID: "5e3066e3-5277-4bc9-b96b-360fdd4f696f"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:09:44 crc kubenswrapper[4754]: I0105 20:09:44.650955 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e3066e3-5277-4bc9-b96b-360fdd4f696f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5e3066e3-5277-4bc9-b96b-360fdd4f696f" (UID: "5e3066e3-5277-4bc9-b96b-360fdd4f696f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:09:44 crc kubenswrapper[4754]: I0105 20:09:44.651063 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e3066e3-5277-4bc9-b96b-360fdd4f696f-kube-api-access-rzz8t" (OuterVolumeSpecName: "kube-api-access-rzz8t") pod "5e3066e3-5277-4bc9-b96b-360fdd4f696f" (UID: "5e3066e3-5277-4bc9-b96b-360fdd4f696f"). InnerVolumeSpecName "kube-api-access-rzz8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:09:44 crc kubenswrapper[4754]: I0105 20:09:44.750040 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzz8t\" (UniqueName: \"kubernetes.io/projected/5e3066e3-5277-4bc9-b96b-360fdd4f696f-kube-api-access-rzz8t\") on node \"crc\" DevicePath \"\"" Jan 05 20:09:44 crc kubenswrapper[4754]: I0105 20:09:44.750079 4754 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e3066e3-5277-4bc9-b96b-360fdd4f696f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 20:09:44 crc kubenswrapper[4754]: I0105 20:09:44.750088 4754 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e3066e3-5277-4bc9-b96b-360fdd4f696f-client-ca\") on node \"crc\" DevicePath \"\"" Jan 05 20:09:44 crc kubenswrapper[4754]: I0105 20:09:44.750099 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e3066e3-5277-4bc9-b96b-360fdd4f696f-config\") on node \"crc\" DevicePath 
\"\"" Jan 05 20:09:44 crc kubenswrapper[4754]: I0105 20:09:44.811722 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5947857f89-7clr6"] Jan 05 20:09:44 crc kubenswrapper[4754]: I0105 20:09:44.815359 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5947857f89-7clr6"] Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.298675 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-66dd56655b-c2cjs"] Jan 05 20:09:45 crc kubenswrapper[4754]: E0105 20:09:45.299308 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e75e6f92-c2e3-4a65-be91-5921e2426aaf" containerName="extract-content" Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.299325 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="e75e6f92-c2e3-4a65-be91-5921e2426aaf" containerName="extract-content" Jan 05 20:09:45 crc kubenswrapper[4754]: E0105 20:09:45.299339 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e3066e3-5277-4bc9-b96b-360fdd4f696f" containerName="route-controller-manager" Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.299346 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e3066e3-5277-4bc9-b96b-360fdd4f696f" containerName="route-controller-manager" Jan 05 20:09:45 crc kubenswrapper[4754]: E0105 20:09:45.299367 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6729d35-8afb-467a-8b7e-f0c3a5f51648" containerName="controller-manager" Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.299376 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6729d35-8afb-467a-8b7e-f0c3a5f51648" containerName="controller-manager" Jan 05 20:09:45 crc kubenswrapper[4754]: E0105 20:09:45.299397 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e75e6f92-c2e3-4a65-be91-5921e2426aaf" containerName="registry-server" Jan 05 
20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.299407 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="e75e6f92-c2e3-4a65-be91-5921e2426aaf" containerName="registry-server" Jan 05 20:09:45 crc kubenswrapper[4754]: E0105 20:09:45.299420 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e75e6f92-c2e3-4a65-be91-5921e2426aaf" containerName="extract-utilities" Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.299427 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="e75e6f92-c2e3-4a65-be91-5921e2426aaf" containerName="extract-utilities" Jan 05 20:09:45 crc kubenswrapper[4754]: E0105 20:09:45.299437 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cde779ab-81b0-4dc7-a6a4-82db63d46577" containerName="registry" Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.299445 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="cde779ab-81b0-4dc7-a6a4-82db63d46577" containerName="registry" Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.299573 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6729d35-8afb-467a-8b7e-f0c3a5f51648" containerName="controller-manager" Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.299593 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="cde779ab-81b0-4dc7-a6a4-82db63d46577" containerName="registry" Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.299605 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e3066e3-5277-4bc9-b96b-360fdd4f696f" containerName="route-controller-manager" Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.299621 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="e75e6f92-c2e3-4a65-be91-5921e2426aaf" containerName="registry-server" Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.300101 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-66dd56655b-c2cjs" Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.304252 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.304448 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.304568 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.304684 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.306237 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.310138 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.310831 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bb885d9c5-txdbv"] Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.312329 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bb885d9c5-txdbv" Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.327054 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-66dd56655b-c2cjs"] Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.327526 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.332224 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bb885d9c5-txdbv"] Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.358116 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7261372-b88c-4c55-bd6e-48fd2dd88614-serving-cert\") pod \"route-controller-manager-7bb885d9c5-txdbv\" (UID: \"a7261372-b88c-4c55-bd6e-48fd2dd88614\") " pod="openshift-route-controller-manager/route-controller-manager-7bb885d9c5-txdbv" Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.358179 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crjfb\" (UniqueName: \"kubernetes.io/projected/26564ac3-15de-49b9-8ba9-d7075ba66cb5-kube-api-access-crjfb\") pod \"controller-manager-66dd56655b-c2cjs\" (UID: \"26564ac3-15de-49b9-8ba9-d7075ba66cb5\") " pod="openshift-controller-manager/controller-manager-66dd56655b-c2cjs" Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.358218 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/26564ac3-15de-49b9-8ba9-d7075ba66cb5-proxy-ca-bundles\") pod \"controller-manager-66dd56655b-c2cjs\" (UID: \"26564ac3-15de-49b9-8ba9-d7075ba66cb5\") " 
pod="openshift-controller-manager/controller-manager-66dd56655b-c2cjs" Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.358269 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/26564ac3-15de-49b9-8ba9-d7075ba66cb5-client-ca\") pod \"controller-manager-66dd56655b-c2cjs\" (UID: \"26564ac3-15de-49b9-8ba9-d7075ba66cb5\") " pod="openshift-controller-manager/controller-manager-66dd56655b-c2cjs" Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.358327 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7261372-b88c-4c55-bd6e-48fd2dd88614-config\") pod \"route-controller-manager-7bb885d9c5-txdbv\" (UID: \"a7261372-b88c-4c55-bd6e-48fd2dd88614\") " pod="openshift-route-controller-manager/route-controller-manager-7bb885d9c5-txdbv" Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.358351 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26564ac3-15de-49b9-8ba9-d7075ba66cb5-config\") pod \"controller-manager-66dd56655b-c2cjs\" (UID: \"26564ac3-15de-49b9-8ba9-d7075ba66cb5\") " pod="openshift-controller-manager/controller-manager-66dd56655b-c2cjs" Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.358388 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a7261372-b88c-4c55-bd6e-48fd2dd88614-client-ca\") pod \"route-controller-manager-7bb885d9c5-txdbv\" (UID: \"a7261372-b88c-4c55-bd6e-48fd2dd88614\") " pod="openshift-route-controller-manager/route-controller-manager-7bb885d9c5-txdbv" Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.358419 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/26564ac3-15de-49b9-8ba9-d7075ba66cb5-serving-cert\") pod \"controller-manager-66dd56655b-c2cjs\" (UID: \"26564ac3-15de-49b9-8ba9-d7075ba66cb5\") " pod="openshift-controller-manager/controller-manager-66dd56655b-c2cjs" Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.358474 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vpr8\" (UniqueName: \"kubernetes.io/projected/a7261372-b88c-4c55-bd6e-48fd2dd88614-kube-api-access-4vpr8\") pod \"route-controller-manager-7bb885d9c5-txdbv\" (UID: \"a7261372-b88c-4c55-bd6e-48fd2dd88614\") " pod="openshift-route-controller-manager/route-controller-manager-7bb885d9c5-txdbv" Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.459581 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/26564ac3-15de-49b9-8ba9-d7075ba66cb5-proxy-ca-bundles\") pod \"controller-manager-66dd56655b-c2cjs\" (UID: \"26564ac3-15de-49b9-8ba9-d7075ba66cb5\") " pod="openshift-controller-manager/controller-manager-66dd56655b-c2cjs" Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.459634 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/26564ac3-15de-49b9-8ba9-d7075ba66cb5-client-ca\") pod \"controller-manager-66dd56655b-c2cjs\" (UID: \"26564ac3-15de-49b9-8ba9-d7075ba66cb5\") " pod="openshift-controller-manager/controller-manager-66dd56655b-c2cjs" Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.459680 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7261372-b88c-4c55-bd6e-48fd2dd88614-config\") pod \"route-controller-manager-7bb885d9c5-txdbv\" (UID: \"a7261372-b88c-4c55-bd6e-48fd2dd88614\") " pod="openshift-route-controller-manager/route-controller-manager-7bb885d9c5-txdbv" Jan 05 
20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.459706 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26564ac3-15de-49b9-8ba9-d7075ba66cb5-config\") pod \"controller-manager-66dd56655b-c2cjs\" (UID: \"26564ac3-15de-49b9-8ba9-d7075ba66cb5\") " pod="openshift-controller-manager/controller-manager-66dd56655b-c2cjs" Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.459746 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a7261372-b88c-4c55-bd6e-48fd2dd88614-client-ca\") pod \"route-controller-manager-7bb885d9c5-txdbv\" (UID: \"a7261372-b88c-4c55-bd6e-48fd2dd88614\") " pod="openshift-route-controller-manager/route-controller-manager-7bb885d9c5-txdbv" Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.459776 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26564ac3-15de-49b9-8ba9-d7075ba66cb5-serving-cert\") pod \"controller-manager-66dd56655b-c2cjs\" (UID: \"26564ac3-15de-49b9-8ba9-d7075ba66cb5\") " pod="openshift-controller-manager/controller-manager-66dd56655b-c2cjs" Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.459803 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vpr8\" (UniqueName: \"kubernetes.io/projected/a7261372-b88c-4c55-bd6e-48fd2dd88614-kube-api-access-4vpr8\") pod \"route-controller-manager-7bb885d9c5-txdbv\" (UID: \"a7261372-b88c-4c55-bd6e-48fd2dd88614\") " pod="openshift-route-controller-manager/route-controller-manager-7bb885d9c5-txdbv" Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.459830 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7261372-b88c-4c55-bd6e-48fd2dd88614-serving-cert\") pod \"route-controller-manager-7bb885d9c5-txdbv\" 
(UID: \"a7261372-b88c-4c55-bd6e-48fd2dd88614\") " pod="openshift-route-controller-manager/route-controller-manager-7bb885d9c5-txdbv" Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.459866 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crjfb\" (UniqueName: \"kubernetes.io/projected/26564ac3-15de-49b9-8ba9-d7075ba66cb5-kube-api-access-crjfb\") pod \"controller-manager-66dd56655b-c2cjs\" (UID: \"26564ac3-15de-49b9-8ba9-d7075ba66cb5\") " pod="openshift-controller-manager/controller-manager-66dd56655b-c2cjs" Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.461221 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26564ac3-15de-49b9-8ba9-d7075ba66cb5-config\") pod \"controller-manager-66dd56655b-c2cjs\" (UID: \"26564ac3-15de-49b9-8ba9-d7075ba66cb5\") " pod="openshift-controller-manager/controller-manager-66dd56655b-c2cjs" Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.461238 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/26564ac3-15de-49b9-8ba9-d7075ba66cb5-client-ca\") pod \"controller-manager-66dd56655b-c2cjs\" (UID: \"26564ac3-15de-49b9-8ba9-d7075ba66cb5\") " pod="openshift-controller-manager/controller-manager-66dd56655b-c2cjs" Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.461566 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/26564ac3-15de-49b9-8ba9-d7075ba66cb5-proxy-ca-bundles\") pod \"controller-manager-66dd56655b-c2cjs\" (UID: \"26564ac3-15de-49b9-8ba9-d7075ba66cb5\") " pod="openshift-controller-manager/controller-manager-66dd56655b-c2cjs" Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.461570 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/a7261372-b88c-4c55-bd6e-48fd2dd88614-client-ca\") pod \"route-controller-manager-7bb885d9c5-txdbv\" (UID: \"a7261372-b88c-4c55-bd6e-48fd2dd88614\") " pod="openshift-route-controller-manager/route-controller-manager-7bb885d9c5-txdbv" Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.462282 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7261372-b88c-4c55-bd6e-48fd2dd88614-config\") pod \"route-controller-manager-7bb885d9c5-txdbv\" (UID: \"a7261372-b88c-4c55-bd6e-48fd2dd88614\") " pod="openshift-route-controller-manager/route-controller-manager-7bb885d9c5-txdbv" Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.467879 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26564ac3-15de-49b9-8ba9-d7075ba66cb5-serving-cert\") pod \"controller-manager-66dd56655b-c2cjs\" (UID: \"26564ac3-15de-49b9-8ba9-d7075ba66cb5\") " pod="openshift-controller-manager/controller-manager-66dd56655b-c2cjs" Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.467936 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7261372-b88c-4c55-bd6e-48fd2dd88614-serving-cert\") pod \"route-controller-manager-7bb885d9c5-txdbv\" (UID: \"a7261372-b88c-4c55-bd6e-48fd2dd88614\") " pod="openshift-route-controller-manager/route-controller-manager-7bb885d9c5-txdbv" Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.480886 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59b55d474f-z77ks" event={"ID":"5e3066e3-5277-4bc9-b96b-360fdd4f696f","Type":"ContainerDied","Data":"9055bbb2a3f4c6fdfcfb3f2778b29413e349b7fd776816c02f73fa61a5b9f9f2"} Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.480952 4754 scope.go:117] "RemoveContainer" 
containerID="1cab8c42f2612c63ea1efa3d1967edaa6f7d85d852b27923e978796154b74504" Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.481070 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59b55d474f-z77ks" Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.483113 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vpr8\" (UniqueName: \"kubernetes.io/projected/a7261372-b88c-4c55-bd6e-48fd2dd88614-kube-api-access-4vpr8\") pod \"route-controller-manager-7bb885d9c5-txdbv\" (UID: \"a7261372-b88c-4c55-bd6e-48fd2dd88614\") " pod="openshift-route-controller-manager/route-controller-manager-7bb885d9c5-txdbv" Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.489038 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crjfb\" (UniqueName: \"kubernetes.io/projected/26564ac3-15de-49b9-8ba9-d7075ba66cb5-kube-api-access-crjfb\") pod \"controller-manager-66dd56655b-c2cjs\" (UID: \"26564ac3-15de-49b9-8ba9-d7075ba66cb5\") " pod="openshift-controller-manager/controller-manager-66dd56655b-c2cjs" Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.560058 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59b55d474f-z77ks"] Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.569932 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59b55d474f-z77ks"] Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.595729 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e3066e3-5277-4bc9-b96b-360fdd4f696f" path="/var/lib/kubelet/pods/5e3066e3-5277-4bc9-b96b-360fdd4f696f/volumes" Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.596789 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="f6729d35-8afb-467a-8b7e-f0c3a5f51648" path="/var/lib/kubelet/pods/f6729d35-8afb-467a-8b7e-f0c3a5f51648/volumes" Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.630519 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-66dd56655b-c2cjs" Jan 05 20:09:45 crc kubenswrapper[4754]: I0105 20:09:45.639036 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bb885d9c5-txdbv" Jan 05 20:09:46 crc kubenswrapper[4754]: I0105 20:09:46.110340 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-66dd56655b-c2cjs"] Jan 05 20:09:46 crc kubenswrapper[4754]: W0105 20:09:46.111455 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26564ac3_15de_49b9_8ba9_d7075ba66cb5.slice/crio-f40d95e5838e7e97816312462a46ca31a6bd80e435256f5c591736124a7ce752 WatchSource:0}: Error finding container f40d95e5838e7e97816312462a46ca31a6bd80e435256f5c591736124a7ce752: Status 404 returned error can't find the container with id f40d95e5838e7e97816312462a46ca31a6bd80e435256f5c591736124a7ce752 Jan 05 20:09:46 crc kubenswrapper[4754]: I0105 20:09:46.131370 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bb885d9c5-txdbv"] Jan 05 20:09:46 crc kubenswrapper[4754]: I0105 20:09:46.492474 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bb885d9c5-txdbv" event={"ID":"a7261372-b88c-4c55-bd6e-48fd2dd88614","Type":"ContainerStarted","Data":"5336fc4ed91847ea3d6b6093e119fc449dcbc4e842ea047c6987496a2d77f63b"} Jan 05 20:09:46 crc kubenswrapper[4754]: I0105 20:09:46.494583 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-66dd56655b-c2cjs" event={"ID":"26564ac3-15de-49b9-8ba9-d7075ba66cb5","Type":"ContainerStarted","Data":"f40d95e5838e7e97816312462a46ca31a6bd80e435256f5c591736124a7ce752"} Jan 05 20:09:47 crc kubenswrapper[4754]: I0105 20:09:47.505283 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-66dd56655b-c2cjs" event={"ID":"26564ac3-15de-49b9-8ba9-d7075ba66cb5","Type":"ContainerStarted","Data":"9e957a13bd539c39220493f14013f8da803d8e9105d476615e85cb54e47f7efa"} Jan 05 20:09:47 crc kubenswrapper[4754]: I0105 20:09:47.508683 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bb885d9c5-txdbv" event={"ID":"a7261372-b88c-4c55-bd6e-48fd2dd88614","Type":"ContainerStarted","Data":"f57a0f78336046858cd176af9b196d5a3d35c18e608e8167e3eb618ce4c787a7"} Jan 05 20:09:47 crc kubenswrapper[4754]: I0105 20:09:47.509036 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7bb885d9c5-txdbv" Jan 05 20:09:47 crc kubenswrapper[4754]: I0105 20:09:47.515192 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7bb885d9c5-txdbv" Jan 05 20:09:47 crc kubenswrapper[4754]: I0105 20:09:47.535459 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-66dd56655b-c2cjs" podStartSLOduration=3.5354403899999998 podStartE2EDuration="3.53544039s" podCreationTimestamp="2026-01-05 20:09:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:09:47.531572176 +0000 UTC m=+274.240756060" watchObservedRunningTime="2026-01-05 20:09:47.53544039 +0000 UTC m=+274.244624274" Jan 05 20:09:47 crc kubenswrapper[4754]: I0105 
20:09:47.552337 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7bb885d9c5-txdbv" podStartSLOduration=3.552313095 podStartE2EDuration="3.552313095s" podCreationTimestamp="2026-01-05 20:09:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:09:47.549687674 +0000 UTC m=+274.258871548" watchObservedRunningTime="2026-01-05 20:09:47.552313095 +0000 UTC m=+274.261496969" Jan 05 20:09:48 crc kubenswrapper[4754]: I0105 20:09:48.515729 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-66dd56655b-c2cjs" Jan 05 20:09:48 crc kubenswrapper[4754]: I0105 20:09:48.524273 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-66dd56655b-c2cjs" Jan 05 20:09:51 crc kubenswrapper[4754]: I0105 20:09:51.403494 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pg5fc"] Jan 05 20:09:51 crc kubenswrapper[4754]: I0105 20:09:51.404957 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pg5fc" podUID="08caaa2f-775e-45c6-b097-b0550f593ff3" containerName="registry-server" containerID="cri-o://0c4236a81108e383f7f72c5616ac1e16180aab7424751ee11a95eb3c30eb5ca0" gracePeriod=30 Jan 05 20:09:51 crc kubenswrapper[4754]: I0105 20:09:51.429528 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qh2hz"] Jan 05 20:09:51 crc kubenswrapper[4754]: I0105 20:09:51.431539 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qh2hz" podUID="20730210-a087-46e2-a311-ffa8a3bc370d" containerName="registry-server" 
containerID="cri-o://39728b09f8fe465478f6f6b7144e90ec0207f904bf7a659c280b7dab0b4d1c7b" gracePeriod=30 Jan 05 20:09:51 crc kubenswrapper[4754]: I0105 20:09:51.440996 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-57nsw"] Jan 05 20:09:51 crc kubenswrapper[4754]: I0105 20:09:51.444191 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-57nsw" podUID="c5e9d216-d5aa-409f-b657-259b931ceaf5" containerName="marketplace-operator" containerID="cri-o://13a34190694c6477f1b1a29042c02258d63ea33c2266ce809da66651abcfa7e3" gracePeriod=30 Jan 05 20:09:51 crc kubenswrapper[4754]: I0105 20:09:51.452618 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f6htm"] Jan 05 20:09:51 crc kubenswrapper[4754]: I0105 20:09:51.452978 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-f6htm" podUID="e16df7cb-3531-4b4e-ad51-275d4ff495d0" containerName="registry-server" containerID="cri-o://2384d715e69bbab1550ea8532e5a4fd4bfe54d6b0ba8bf7763081f9ac61dd186" gracePeriod=30 Jan 05 20:09:51 crc kubenswrapper[4754]: I0105 20:09:51.460339 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z52n8"] Jan 05 20:09:51 crc kubenswrapper[4754]: I0105 20:09:51.460690 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-z52n8" podUID="31687145-8349-44b0-9e77-de73e4738916" containerName="registry-server" containerID="cri-o://a44121cf6eec140f3a35fb84703654954c411693aac98013b1e410fc9e99b716" gracePeriod=30 Jan 05 20:09:51 crc kubenswrapper[4754]: I0105 20:09:51.470328 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ggkzj"] Jan 05 20:09:51 crc kubenswrapper[4754]: I0105 20:09:51.471425 
4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ggkzj" Jan 05 20:09:51 crc kubenswrapper[4754]: I0105 20:09:51.478787 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ggkzj"] Jan 05 20:09:51 crc kubenswrapper[4754]: I0105 20:09:51.491437 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-pg5fc" podUID="08caaa2f-775e-45c6-b097-b0550f593ff3" containerName="registry-server" probeResult="failure" output="" Jan 05 20:09:51 crc kubenswrapper[4754]: I0105 20:09:51.500352 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-pg5fc" podUID="08caaa2f-775e-45c6-b097-b0550f593ff3" containerName="registry-server" probeResult="failure" output="" Jan 05 20:09:51 crc kubenswrapper[4754]: I0105 20:09:51.591980 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/41d97351-8dc4-42de-bf00-4e8abbf24e0b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ggkzj\" (UID: \"41d97351-8dc4-42de-bf00-4e8abbf24e0b\") " pod="openshift-marketplace/marketplace-operator-79b997595-ggkzj" Jan 05 20:09:51 crc kubenswrapper[4754]: I0105 20:09:51.592595 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdsln\" (UniqueName: \"kubernetes.io/projected/41d97351-8dc4-42de-bf00-4e8abbf24e0b-kube-api-access-cdsln\") pod \"marketplace-operator-79b997595-ggkzj\" (UID: \"41d97351-8dc4-42de-bf00-4e8abbf24e0b\") " pod="openshift-marketplace/marketplace-operator-79b997595-ggkzj" Jan 05 20:09:51 crc kubenswrapper[4754]: I0105 20:09:51.592665 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/41d97351-8dc4-42de-bf00-4e8abbf24e0b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ggkzj\" (UID: \"41d97351-8dc4-42de-bf00-4e8abbf24e0b\") " pod="openshift-marketplace/marketplace-operator-79b997595-ggkzj" Jan 05 20:09:51 crc kubenswrapper[4754]: I0105 20:09:51.694174 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdsln\" (UniqueName: \"kubernetes.io/projected/41d97351-8dc4-42de-bf00-4e8abbf24e0b-kube-api-access-cdsln\") pod \"marketplace-operator-79b997595-ggkzj\" (UID: \"41d97351-8dc4-42de-bf00-4e8abbf24e0b\") " pod="openshift-marketplace/marketplace-operator-79b997595-ggkzj" Jan 05 20:09:51 crc kubenswrapper[4754]: I0105 20:09:51.695452 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/41d97351-8dc4-42de-bf00-4e8abbf24e0b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ggkzj\" (UID: \"41d97351-8dc4-42de-bf00-4e8abbf24e0b\") " pod="openshift-marketplace/marketplace-operator-79b997595-ggkzj" Jan 05 20:09:51 crc kubenswrapper[4754]: I0105 20:09:51.695551 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/41d97351-8dc4-42de-bf00-4e8abbf24e0b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ggkzj\" (UID: \"41d97351-8dc4-42de-bf00-4e8abbf24e0b\") " pod="openshift-marketplace/marketplace-operator-79b997595-ggkzj" Jan 05 20:09:51 crc kubenswrapper[4754]: I0105 20:09:51.707508 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/41d97351-8dc4-42de-bf00-4e8abbf24e0b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ggkzj\" (UID: \"41d97351-8dc4-42de-bf00-4e8abbf24e0b\") " pod="openshift-marketplace/marketplace-operator-79b997595-ggkzj" Jan 05 
20:09:51 crc kubenswrapper[4754]: I0105 20:09:51.710895 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/41d97351-8dc4-42de-bf00-4e8abbf24e0b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ggkzj\" (UID: \"41d97351-8dc4-42de-bf00-4e8abbf24e0b\") " pod="openshift-marketplace/marketplace-operator-79b997595-ggkzj" Jan 05 20:09:51 crc kubenswrapper[4754]: I0105 20:09:51.728079 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdsln\" (UniqueName: \"kubernetes.io/projected/41d97351-8dc4-42de-bf00-4e8abbf24e0b-kube-api-access-cdsln\") pod \"marketplace-operator-79b997595-ggkzj\" (UID: \"41d97351-8dc4-42de-bf00-4e8abbf24e0b\") " pod="openshift-marketplace/marketplace-operator-79b997595-ggkzj" Jan 05 20:09:51 crc kubenswrapper[4754]: I0105 20:09:51.789007 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ggkzj" Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.285990 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ggkzj"] Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.449459 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qh2hz" Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.506452 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84vjp\" (UniqueName: \"kubernetes.io/projected/20730210-a087-46e2-a311-ffa8a3bc370d-kube-api-access-84vjp\") pod \"20730210-a087-46e2-a311-ffa8a3bc370d\" (UID: \"20730210-a087-46e2-a311-ffa8a3bc370d\") " Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.506539 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20730210-a087-46e2-a311-ffa8a3bc370d-catalog-content\") pod \"20730210-a087-46e2-a311-ffa8a3bc370d\" (UID: \"20730210-a087-46e2-a311-ffa8a3bc370d\") " Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.506575 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20730210-a087-46e2-a311-ffa8a3bc370d-utilities\") pod \"20730210-a087-46e2-a311-ffa8a3bc370d\" (UID: \"20730210-a087-46e2-a311-ffa8a3bc370d\") " Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.513776 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20730210-a087-46e2-a311-ffa8a3bc370d-utilities" (OuterVolumeSpecName: "utilities") pod "20730210-a087-46e2-a311-ffa8a3bc370d" (UID: "20730210-a087-46e2-a311-ffa8a3bc370d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.516500 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20730210-a087-46e2-a311-ffa8a3bc370d-kube-api-access-84vjp" (OuterVolumeSpecName: "kube-api-access-84vjp") pod "20730210-a087-46e2-a311-ffa8a3bc370d" (UID: "20730210-a087-46e2-a311-ffa8a3bc370d"). InnerVolumeSpecName "kube-api-access-84vjp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.570383 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20730210-a087-46e2-a311-ffa8a3bc370d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "20730210-a087-46e2-a311-ffa8a3bc370d" (UID: "20730210-a087-46e2-a311-ffa8a3bc370d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.572330 4754 generic.go:334] "Generic (PLEG): container finished" podID="e16df7cb-3531-4b4e-ad51-275d4ff495d0" containerID="2384d715e69bbab1550ea8532e5a4fd4bfe54d6b0ba8bf7763081f9ac61dd186" exitCode=0 Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.572412 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f6htm" event={"ID":"e16df7cb-3531-4b4e-ad51-275d4ff495d0","Type":"ContainerDied","Data":"2384d715e69bbab1550ea8532e5a4fd4bfe54d6b0ba8bf7763081f9ac61dd186"} Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.575547 4754 generic.go:334] "Generic (PLEG): container finished" podID="31687145-8349-44b0-9e77-de73e4738916" containerID="a44121cf6eec140f3a35fb84703654954c411693aac98013b1e410fc9e99b716" exitCode=0 Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.575776 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z52n8" event={"ID":"31687145-8349-44b0-9e77-de73e4738916","Type":"ContainerDied","Data":"a44121cf6eec140f3a35fb84703654954c411693aac98013b1e410fc9e99b716"} Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.582509 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ggkzj" event={"ID":"41d97351-8dc4-42de-bf00-4e8abbf24e0b","Type":"ContainerStarted","Data":"e40c5783dab430607792333a13d1d5d8e8befeade9ca0139cf47ae1825776ee1"} Jan 05 
20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.582717 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ggkzj" event={"ID":"41d97351-8dc4-42de-bf00-4e8abbf24e0b","Type":"ContainerStarted","Data":"250b5b891932d3579a180b2392e4a32618b8f5d99f3d080c7fa8b10a3f392f0a"} Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.582946 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-ggkzj" Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.587424 4754 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-ggkzj container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.63:8080/healthz\": dial tcp 10.217.0.63:8080: connect: connection refused" start-of-body= Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.587498 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-ggkzj" podUID="41d97351-8dc4-42de-bf00-4e8abbf24e0b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.63:8080/healthz\": dial tcp 10.217.0.63:8080: connect: connection refused" Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.596102 4754 generic.go:334] "Generic (PLEG): container finished" podID="20730210-a087-46e2-a311-ffa8a3bc370d" containerID="39728b09f8fe465478f6f6b7144e90ec0207f904bf7a659c280b7dab0b4d1c7b" exitCode=0 Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.596194 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qh2hz" event={"ID":"20730210-a087-46e2-a311-ffa8a3bc370d","Type":"ContainerDied","Data":"39728b09f8fe465478f6f6b7144e90ec0207f904bf7a659c280b7dab0b4d1c7b"} Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.596237 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-qh2hz" event={"ID":"20730210-a087-46e2-a311-ffa8a3bc370d","Type":"ContainerDied","Data":"6b9012971ef18c399c87e5cf080de12557d44cb98fc3049ed50eb38067fa1180"} Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.596267 4754 scope.go:117] "RemoveContainer" containerID="39728b09f8fe465478f6f6b7144e90ec0207f904bf7a659c280b7dab0b4d1c7b" Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.596490 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qh2hz" Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.606194 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-ggkzj" podStartSLOduration=1.606170211 podStartE2EDuration="1.606170211s" podCreationTimestamp="2026-01-05 20:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:09:52.604726002 +0000 UTC m=+279.313909876" watchObservedRunningTime="2026-01-05 20:09:52.606170211 +0000 UTC m=+279.315354085" Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.609106 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84vjp\" (UniqueName: \"kubernetes.io/projected/20730210-a087-46e2-a311-ffa8a3bc370d-kube-api-access-84vjp\") on node \"crc\" DevicePath \"\"" Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.609143 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20730210-a087-46e2-a311-ffa8a3bc370d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.609157 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20730210-a087-46e2-a311-ffa8a3bc370d-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 20:09:52 crc 
kubenswrapper[4754]: I0105 20:09:52.609720 4754 generic.go:334] "Generic (PLEG): container finished" podID="08caaa2f-775e-45c6-b097-b0550f593ff3" containerID="0c4236a81108e383f7f72c5616ac1e16180aab7424751ee11a95eb3c30eb5ca0" exitCode=0 Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.609807 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pg5fc" event={"ID":"08caaa2f-775e-45c6-b097-b0550f593ff3","Type":"ContainerDied","Data":"0c4236a81108e383f7f72c5616ac1e16180aab7424751ee11a95eb3c30eb5ca0"} Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.617523 4754 generic.go:334] "Generic (PLEG): container finished" podID="c5e9d216-d5aa-409f-b657-259b931ceaf5" containerID="13a34190694c6477f1b1a29042c02258d63ea33c2266ce809da66651abcfa7e3" exitCode=0 Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.617571 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-57nsw" event={"ID":"c5e9d216-d5aa-409f-b657-259b931ceaf5","Type":"ContainerDied","Data":"13a34190694c6477f1b1a29042c02258d63ea33c2266ce809da66651abcfa7e3"} Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.640593 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qh2hz"] Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.641706 4754 scope.go:117] "RemoveContainer" containerID="b3a9e2dfbc01cc47a4c2ff653d894c30cde3002288b2590db8485ce168383ccc" Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.644654 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qh2hz"] Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.700050 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f6htm" Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.700637 4754 scope.go:117] "RemoveContainer" containerID="4c7539f7b9956afa33a4be0fba0c0797d81694b988643ffdab3715f49b86f01b" Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.723285 4754 scope.go:117] "RemoveContainer" containerID="39728b09f8fe465478f6f6b7144e90ec0207f904bf7a659c280b7dab0b4d1c7b" Jan 05 20:09:52 crc kubenswrapper[4754]: E0105 20:09:52.723933 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39728b09f8fe465478f6f6b7144e90ec0207f904bf7a659c280b7dab0b4d1c7b\": container with ID starting with 39728b09f8fe465478f6f6b7144e90ec0207f904bf7a659c280b7dab0b4d1c7b not found: ID does not exist" containerID="39728b09f8fe465478f6f6b7144e90ec0207f904bf7a659c280b7dab0b4d1c7b" Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.723982 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39728b09f8fe465478f6f6b7144e90ec0207f904bf7a659c280b7dab0b4d1c7b"} err="failed to get container status \"39728b09f8fe465478f6f6b7144e90ec0207f904bf7a659c280b7dab0b4d1c7b\": rpc error: code = NotFound desc = could not find container \"39728b09f8fe465478f6f6b7144e90ec0207f904bf7a659c280b7dab0b4d1c7b\": container with ID starting with 39728b09f8fe465478f6f6b7144e90ec0207f904bf7a659c280b7dab0b4d1c7b not found: ID does not exist" Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.724017 4754 scope.go:117] "RemoveContainer" containerID="b3a9e2dfbc01cc47a4c2ff653d894c30cde3002288b2590db8485ce168383ccc" Jan 05 20:09:52 crc kubenswrapper[4754]: E0105 20:09:52.724389 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3a9e2dfbc01cc47a4c2ff653d894c30cde3002288b2590db8485ce168383ccc\": container with ID starting with 
b3a9e2dfbc01cc47a4c2ff653d894c30cde3002288b2590db8485ce168383ccc not found: ID does not exist" containerID="b3a9e2dfbc01cc47a4c2ff653d894c30cde3002288b2590db8485ce168383ccc" Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.724455 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3a9e2dfbc01cc47a4c2ff653d894c30cde3002288b2590db8485ce168383ccc"} err="failed to get container status \"b3a9e2dfbc01cc47a4c2ff653d894c30cde3002288b2590db8485ce168383ccc\": rpc error: code = NotFound desc = could not find container \"b3a9e2dfbc01cc47a4c2ff653d894c30cde3002288b2590db8485ce168383ccc\": container with ID starting with b3a9e2dfbc01cc47a4c2ff653d894c30cde3002288b2590db8485ce168383ccc not found: ID does not exist" Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.724488 4754 scope.go:117] "RemoveContainer" containerID="4c7539f7b9956afa33a4be0fba0c0797d81694b988643ffdab3715f49b86f01b" Jan 05 20:09:52 crc kubenswrapper[4754]: E0105 20:09:52.724851 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c7539f7b9956afa33a4be0fba0c0797d81694b988643ffdab3715f49b86f01b\": container with ID starting with 4c7539f7b9956afa33a4be0fba0c0797d81694b988643ffdab3715f49b86f01b not found: ID does not exist" containerID="4c7539f7b9956afa33a4be0fba0c0797d81694b988643ffdab3715f49b86f01b" Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.724918 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c7539f7b9956afa33a4be0fba0c0797d81694b988643ffdab3715f49b86f01b"} err="failed to get container status \"4c7539f7b9956afa33a4be0fba0c0797d81694b988643ffdab3715f49b86f01b\": rpc error: code = NotFound desc = could not find container \"4c7539f7b9956afa33a4be0fba0c0797d81694b988643ffdab3715f49b86f01b\": container with ID starting with 4c7539f7b9956afa33a4be0fba0c0797d81694b988643ffdab3715f49b86f01b not found: ID does not 
exist" Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.724958 4754 scope.go:117] "RemoveContainer" containerID="92a6ca4cb09d04d61122813085a0c01187e7196c4f8e8b79a7ba3f807431bf6a" Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.741553 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-57nsw" Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.750449 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z52n8" Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.756085 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pg5fc" Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.811712 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e16df7cb-3531-4b4e-ad51-275d4ff495d0-utilities\") pod \"e16df7cb-3531-4b4e-ad51-275d4ff495d0\" (UID: \"e16df7cb-3531-4b4e-ad51-275d4ff495d0\") " Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.811796 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08caaa2f-775e-45c6-b097-b0550f593ff3-utilities\") pod \"08caaa2f-775e-45c6-b097-b0550f593ff3\" (UID: \"08caaa2f-775e-45c6-b097-b0550f593ff3\") " Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.811852 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08caaa2f-775e-45c6-b097-b0550f593ff3-catalog-content\") pod \"08caaa2f-775e-45c6-b097-b0550f593ff3\" (UID: \"08caaa2f-775e-45c6-b097-b0550f593ff3\") " Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.811904 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5e9d216-d5aa-409f-b657-259b931ceaf5-marketplace-trusted-ca\") pod \"c5e9d216-d5aa-409f-b657-259b931ceaf5\" (UID: \"c5e9d216-d5aa-409f-b657-259b931ceaf5\") " Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.811946 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcdmf\" (UniqueName: \"kubernetes.io/projected/08caaa2f-775e-45c6-b097-b0550f593ff3-kube-api-access-wcdmf\") pod \"08caaa2f-775e-45c6-b097-b0550f593ff3\" (UID: \"08caaa2f-775e-45c6-b097-b0550f593ff3\") " Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.811968 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69llf\" (UniqueName: \"kubernetes.io/projected/c5e9d216-d5aa-409f-b657-259b931ceaf5-kube-api-access-69llf\") pod \"c5e9d216-d5aa-409f-b657-259b931ceaf5\" (UID: \"c5e9d216-d5aa-409f-b657-259b931ceaf5\") " Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.811995 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c5e9d216-d5aa-409f-b657-259b931ceaf5-marketplace-operator-metrics\") pod \"c5e9d216-d5aa-409f-b657-259b931ceaf5\" (UID: \"c5e9d216-d5aa-409f-b657-259b931ceaf5\") " Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.812013 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31687145-8349-44b0-9e77-de73e4738916-utilities\") pod \"31687145-8349-44b0-9e77-de73e4738916\" (UID: \"31687145-8349-44b0-9e77-de73e4738916\") " Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.812047 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6ncj\" (UniqueName: \"kubernetes.io/projected/31687145-8349-44b0-9e77-de73e4738916-kube-api-access-c6ncj\") pod 
\"31687145-8349-44b0-9e77-de73e4738916\" (UID: \"31687145-8349-44b0-9e77-de73e4738916\") " Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.812089 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pftt\" (UniqueName: \"kubernetes.io/projected/e16df7cb-3531-4b4e-ad51-275d4ff495d0-kube-api-access-7pftt\") pod \"e16df7cb-3531-4b4e-ad51-275d4ff495d0\" (UID: \"e16df7cb-3531-4b4e-ad51-275d4ff495d0\") " Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.812115 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e16df7cb-3531-4b4e-ad51-275d4ff495d0-catalog-content\") pod \"e16df7cb-3531-4b4e-ad51-275d4ff495d0\" (UID: \"e16df7cb-3531-4b4e-ad51-275d4ff495d0\") " Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.812161 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31687145-8349-44b0-9e77-de73e4738916-catalog-content\") pod \"31687145-8349-44b0-9e77-de73e4738916\" (UID: \"31687145-8349-44b0-9e77-de73e4738916\") " Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.815906 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5e9d216-d5aa-409f-b657-259b931ceaf5-kube-api-access-69llf" (OuterVolumeSpecName: "kube-api-access-69llf") pod "c5e9d216-d5aa-409f-b657-259b931ceaf5" (UID: "c5e9d216-d5aa-409f-b657-259b931ceaf5"). InnerVolumeSpecName "kube-api-access-69llf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.815959 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31687145-8349-44b0-9e77-de73e4738916-kube-api-access-c6ncj" (OuterVolumeSpecName: "kube-api-access-c6ncj") pod "31687145-8349-44b0-9e77-de73e4738916" (UID: "31687145-8349-44b0-9e77-de73e4738916"). 
InnerVolumeSpecName "kube-api-access-c6ncj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.816568 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5e9d216-d5aa-409f-b657-259b931ceaf5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "c5e9d216-d5aa-409f-b657-259b931ceaf5" (UID: "c5e9d216-d5aa-409f-b657-259b931ceaf5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.816714 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e16df7cb-3531-4b4e-ad51-275d4ff495d0-utilities" (OuterVolumeSpecName: "utilities") pod "e16df7cb-3531-4b4e-ad51-275d4ff495d0" (UID: "e16df7cb-3531-4b4e-ad51-275d4ff495d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.817403 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31687145-8349-44b0-9e77-de73e4738916-utilities" (OuterVolumeSpecName: "utilities") pod "31687145-8349-44b0-9e77-de73e4738916" (UID: "31687145-8349-44b0-9e77-de73e4738916"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.817797 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5e9d216-d5aa-409f-b657-259b931ceaf5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "c5e9d216-d5aa-409f-b657-259b931ceaf5" (UID: "c5e9d216-d5aa-409f-b657-259b931ceaf5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.818865 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08caaa2f-775e-45c6-b097-b0550f593ff3-kube-api-access-wcdmf" (OuterVolumeSpecName: "kube-api-access-wcdmf") pod "08caaa2f-775e-45c6-b097-b0550f593ff3" (UID: "08caaa2f-775e-45c6-b097-b0550f593ff3"). InnerVolumeSpecName "kube-api-access-wcdmf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.823123 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08caaa2f-775e-45c6-b097-b0550f593ff3-utilities" (OuterVolumeSpecName: "utilities") pod "08caaa2f-775e-45c6-b097-b0550f593ff3" (UID: "08caaa2f-775e-45c6-b097-b0550f593ff3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.836587 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e16df7cb-3531-4b4e-ad51-275d4ff495d0-kube-api-access-7pftt" (OuterVolumeSpecName: "kube-api-access-7pftt") pod "e16df7cb-3531-4b4e-ad51-275d4ff495d0" (UID: "e16df7cb-3531-4b4e-ad51-275d4ff495d0"). InnerVolumeSpecName "kube-api-access-7pftt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.847071 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e16df7cb-3531-4b4e-ad51-275d4ff495d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e16df7cb-3531-4b4e-ad51-275d4ff495d0" (UID: "e16df7cb-3531-4b4e-ad51-275d4ff495d0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.865229 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08caaa2f-775e-45c6-b097-b0550f593ff3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "08caaa2f-775e-45c6-b097-b0550f593ff3" (UID: "08caaa2f-775e-45c6-b097-b0550f593ff3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.912744 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e16df7cb-3531-4b4e-ad51-275d4ff495d0-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.912927 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08caaa2f-775e-45c6-b097-b0550f593ff3-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.912987 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08caaa2f-775e-45c6-b097-b0550f593ff3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.913045 4754 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5e9d216-d5aa-409f-b657-259b931ceaf5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.913129 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcdmf\" (UniqueName: \"kubernetes.io/projected/08caaa2f-775e-45c6-b097-b0550f593ff3-kube-api-access-wcdmf\") on node \"crc\" DevicePath \"\"" Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.913196 4754 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-69llf\" (UniqueName: \"kubernetes.io/projected/c5e9d216-d5aa-409f-b657-259b931ceaf5-kube-api-access-69llf\") on node \"crc\" DevicePath \"\"" Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.913250 4754 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c5e9d216-d5aa-409f-b657-259b931ceaf5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.913321 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31687145-8349-44b0-9e77-de73e4738916-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.913401 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6ncj\" (UniqueName: \"kubernetes.io/projected/31687145-8349-44b0-9e77-de73e4738916-kube-api-access-c6ncj\") on node \"crc\" DevicePath \"\"" Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.913465 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pftt\" (UniqueName: \"kubernetes.io/projected/e16df7cb-3531-4b4e-ad51-275d4ff495d0-kube-api-access-7pftt\") on node \"crc\" DevicePath \"\"" Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.913521 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e16df7cb-3531-4b4e-ad51-275d4ff495d0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 20:09:52 crc kubenswrapper[4754]: I0105 20:09:52.940820 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31687145-8349-44b0-9e77-de73e4738916-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31687145-8349-44b0-9e77-de73e4738916" (UID: "31687145-8349-44b0-9e77-de73e4738916"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:09:53 crc kubenswrapper[4754]: I0105 20:09:53.015221 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31687145-8349-44b0-9e77-de73e4738916-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 20:09:53 crc kubenswrapper[4754]: I0105 20:09:53.597969 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20730210-a087-46e2-a311-ffa8a3bc370d" path="/var/lib/kubelet/pods/20730210-a087-46e2-a311-ffa8a3bc370d/volumes" Jan 05 20:09:53 crc kubenswrapper[4754]: I0105 20:09:53.640585 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pg5fc" Jan 05 20:09:53 crc kubenswrapper[4754]: I0105 20:09:53.640782 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pg5fc" event={"ID":"08caaa2f-775e-45c6-b097-b0550f593ff3","Type":"ContainerDied","Data":"e30414288abc7d37c580a2a164c3ce7f28e595a063d92aacb1caf912d6f1b109"} Jan 05 20:09:53 crc kubenswrapper[4754]: I0105 20:09:53.640840 4754 scope.go:117] "RemoveContainer" containerID="0c4236a81108e383f7f72c5616ac1e16180aab7424751ee11a95eb3c30eb5ca0" Jan 05 20:09:53 crc kubenswrapper[4754]: I0105 20:09:53.646170 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-57nsw" event={"ID":"c5e9d216-d5aa-409f-b657-259b931ceaf5","Type":"ContainerDied","Data":"120d40c5b11106c3c14a6efb4fb0f565ff94324b3339fed8c291974b4fbb7bbf"} Jan 05 20:09:53 crc kubenswrapper[4754]: I0105 20:09:53.646261 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-57nsw" Jan 05 20:09:53 crc kubenswrapper[4754]: I0105 20:09:53.648755 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f6htm" event={"ID":"e16df7cb-3531-4b4e-ad51-275d4ff495d0","Type":"ContainerDied","Data":"95248699f19701edd79dde31d5bae21743c2b5602810a673ede4f48053fe7e4d"} Jan 05 20:09:53 crc kubenswrapper[4754]: I0105 20:09:53.648913 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f6htm" Jan 05 20:09:53 crc kubenswrapper[4754]: I0105 20:09:53.686802 4754 scope.go:117] "RemoveContainer" containerID="5b9c7d81022cdf37b2a976db7f2c5e256e56aab0bc98c18a4d27b299d9c9efb1" Jan 05 20:09:53 crc kubenswrapper[4754]: I0105 20:09:53.701770 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z52n8" Jan 05 20:09:53 crc kubenswrapper[4754]: I0105 20:09:53.702794 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z52n8" event={"ID":"31687145-8349-44b0-9e77-de73e4738916","Type":"ContainerDied","Data":"f877c34b0aece8098d6f34391e82a014933227cf264e424983d16ff5a9c51acb"} Jan 05 20:09:53 crc kubenswrapper[4754]: I0105 20:09:53.709910 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pg5fc"] Jan 05 20:09:53 crc kubenswrapper[4754]: I0105 20:09:53.718426 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pg5fc"] Jan 05 20:09:53 crc kubenswrapper[4754]: I0105 20:09:53.718625 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-ggkzj" Jan 05 20:09:53 crc kubenswrapper[4754]: I0105 20:09:53.720865 4754 scope.go:117] "RemoveContainer" 
containerID="0842c03556dd94b67e801ea64b09d2cd0abb1d9f83cdc4671975f1bd5bf5446d" Jan 05 20:09:53 crc kubenswrapper[4754]: I0105 20:09:53.727523 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-57nsw"] Jan 05 20:09:53 crc kubenswrapper[4754]: I0105 20:09:53.735230 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-57nsw"] Jan 05 20:09:53 crc kubenswrapper[4754]: I0105 20:09:53.743947 4754 scope.go:117] "RemoveContainer" containerID="13a34190694c6477f1b1a29042c02258d63ea33c2266ce809da66651abcfa7e3" Jan 05 20:09:53 crc kubenswrapper[4754]: I0105 20:09:53.746794 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f6htm"] Jan 05 20:09:53 crc kubenswrapper[4754]: I0105 20:09:53.753232 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-f6htm"] Jan 05 20:09:53 crc kubenswrapper[4754]: I0105 20:09:53.758183 4754 scope.go:117] "RemoveContainer" containerID="2384d715e69bbab1550ea8532e5a4fd4bfe54d6b0ba8bf7763081f9ac61dd186" Jan 05 20:09:53 crc kubenswrapper[4754]: I0105 20:09:53.760074 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z52n8"] Jan 05 20:09:53 crc kubenswrapper[4754]: I0105 20:09:53.765058 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-z52n8"] Jan 05 20:09:53 crc kubenswrapper[4754]: I0105 20:09:53.771513 4754 scope.go:117] "RemoveContainer" containerID="f68928d3218dbec09dd7609a60ac3a075fbb77c523cb3cce4a8b4a0194e0c6f0" Jan 05 20:09:53 crc kubenswrapper[4754]: I0105 20:09:53.785419 4754 scope.go:117] "RemoveContainer" containerID="1292ee55bf25c8eb9c9981d0f4a4f02a72a9ef48fc8a62f3d9e28b716d3b89b7" Jan 05 20:09:53 crc kubenswrapper[4754]: I0105 20:09:53.807809 4754 scope.go:117] "RemoveContainer" 
containerID="a44121cf6eec140f3a35fb84703654954c411693aac98013b1e410fc9e99b716" Jan 05 20:09:53 crc kubenswrapper[4754]: I0105 20:09:53.825612 4754 scope.go:117] "RemoveContainer" containerID="f09d8a916b34d02dc7be3daf692d9f61cadb83640dc604966cb5925f81129c4d" Jan 05 20:09:53 crc kubenswrapper[4754]: I0105 20:09:53.842922 4754 scope.go:117] "RemoveContainer" containerID="516fe302b48521acf7aaf85d50c60d84af871ada2a68486e6f373fe76a292f7d" Jan 05 20:09:55 crc kubenswrapper[4754]: I0105 20:09:55.602129 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08caaa2f-775e-45c6-b097-b0550f593ff3" path="/var/lib/kubelet/pods/08caaa2f-775e-45c6-b097-b0550f593ff3/volumes" Jan 05 20:09:55 crc kubenswrapper[4754]: I0105 20:09:55.603013 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31687145-8349-44b0-9e77-de73e4738916" path="/var/lib/kubelet/pods/31687145-8349-44b0-9e77-de73e4738916/volumes" Jan 05 20:09:55 crc kubenswrapper[4754]: I0105 20:09:55.603677 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5e9d216-d5aa-409f-b657-259b931ceaf5" path="/var/lib/kubelet/pods/c5e9d216-d5aa-409f-b657-259b931ceaf5/volumes" Jan 05 20:09:55 crc kubenswrapper[4754]: I0105 20:09:55.607110 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e16df7cb-3531-4b4e-ad51-275d4ff495d0" path="/var/lib/kubelet/pods/e16df7cb-3531-4b4e-ad51-275d4ff495d0/volumes" Jan 05 20:09:55 crc kubenswrapper[4754]: I0105 20:09:55.610422 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-j8r5p"] Jan 05 20:09:55 crc kubenswrapper[4754]: E0105 20:09:55.611088 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31687145-8349-44b0-9e77-de73e4738916" containerName="registry-server" Jan 05 20:09:55 crc kubenswrapper[4754]: I0105 20:09:55.611115 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="31687145-8349-44b0-9e77-de73e4738916" 
containerName="registry-server" Jan 05 20:09:55 crc kubenswrapper[4754]: E0105 20:09:55.611127 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5e9d216-d5aa-409f-b657-259b931ceaf5" containerName="marketplace-operator" Jan 05 20:09:55 crc kubenswrapper[4754]: I0105 20:09:55.611135 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5e9d216-d5aa-409f-b657-259b931ceaf5" containerName="marketplace-operator" Jan 05 20:09:55 crc kubenswrapper[4754]: E0105 20:09:55.611146 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31687145-8349-44b0-9e77-de73e4738916" containerName="extract-utilities" Jan 05 20:09:55 crc kubenswrapper[4754]: I0105 20:09:55.611154 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="31687145-8349-44b0-9e77-de73e4738916" containerName="extract-utilities" Jan 05 20:09:55 crc kubenswrapper[4754]: E0105 20:09:55.611167 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e16df7cb-3531-4b4e-ad51-275d4ff495d0" containerName="extract-content" Jan 05 20:09:55 crc kubenswrapper[4754]: I0105 20:09:55.611175 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="e16df7cb-3531-4b4e-ad51-275d4ff495d0" containerName="extract-content" Jan 05 20:09:55 crc kubenswrapper[4754]: E0105 20:09:55.611185 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20730210-a087-46e2-a311-ffa8a3bc370d" containerName="extract-content" Jan 05 20:09:55 crc kubenswrapper[4754]: I0105 20:09:55.611192 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="20730210-a087-46e2-a311-ffa8a3bc370d" containerName="extract-content" Jan 05 20:09:55 crc kubenswrapper[4754]: E0105 20:09:55.611205 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e16df7cb-3531-4b4e-ad51-275d4ff495d0" containerName="registry-server" Jan 05 20:09:55 crc kubenswrapper[4754]: I0105 20:09:55.611213 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="e16df7cb-3531-4b4e-ad51-275d4ff495d0" 
containerName="registry-server" Jan 05 20:09:55 crc kubenswrapper[4754]: E0105 20:09:55.611225 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5e9d216-d5aa-409f-b657-259b931ceaf5" containerName="marketplace-operator" Jan 05 20:09:55 crc kubenswrapper[4754]: I0105 20:09:55.611232 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5e9d216-d5aa-409f-b657-259b931ceaf5" containerName="marketplace-operator" Jan 05 20:09:55 crc kubenswrapper[4754]: E0105 20:09:55.611242 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08caaa2f-775e-45c6-b097-b0550f593ff3" containerName="extract-content" Jan 05 20:09:55 crc kubenswrapper[4754]: I0105 20:09:55.611249 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="08caaa2f-775e-45c6-b097-b0550f593ff3" containerName="extract-content" Jan 05 20:09:55 crc kubenswrapper[4754]: E0105 20:09:55.611259 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20730210-a087-46e2-a311-ffa8a3bc370d" containerName="extract-utilities" Jan 05 20:09:55 crc kubenswrapper[4754]: I0105 20:09:55.611266 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="20730210-a087-46e2-a311-ffa8a3bc370d" containerName="extract-utilities" Jan 05 20:09:55 crc kubenswrapper[4754]: E0105 20:09:55.611277 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e16df7cb-3531-4b4e-ad51-275d4ff495d0" containerName="extract-utilities" Jan 05 20:09:55 crc kubenswrapper[4754]: I0105 20:09:55.611285 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="e16df7cb-3531-4b4e-ad51-275d4ff495d0" containerName="extract-utilities" Jan 05 20:09:55 crc kubenswrapper[4754]: E0105 20:09:55.611339 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31687145-8349-44b0-9e77-de73e4738916" containerName="extract-content" Jan 05 20:09:55 crc kubenswrapper[4754]: I0105 20:09:55.611348 4754 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="31687145-8349-44b0-9e77-de73e4738916" containerName="extract-content" Jan 05 20:09:55 crc kubenswrapper[4754]: E0105 20:09:55.611358 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08caaa2f-775e-45c6-b097-b0550f593ff3" containerName="registry-server" Jan 05 20:09:55 crc kubenswrapper[4754]: I0105 20:09:55.611367 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="08caaa2f-775e-45c6-b097-b0550f593ff3" containerName="registry-server" Jan 05 20:09:55 crc kubenswrapper[4754]: E0105 20:09:55.611377 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20730210-a087-46e2-a311-ffa8a3bc370d" containerName="registry-server" Jan 05 20:09:55 crc kubenswrapper[4754]: I0105 20:09:55.611385 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="20730210-a087-46e2-a311-ffa8a3bc370d" containerName="registry-server" Jan 05 20:09:55 crc kubenswrapper[4754]: E0105 20:09:55.611396 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08caaa2f-775e-45c6-b097-b0550f593ff3" containerName="extract-utilities" Jan 05 20:09:55 crc kubenswrapper[4754]: I0105 20:09:55.611403 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="08caaa2f-775e-45c6-b097-b0550f593ff3" containerName="extract-utilities" Jan 05 20:09:55 crc kubenswrapper[4754]: I0105 20:09:55.611511 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="31687145-8349-44b0-9e77-de73e4738916" containerName="registry-server" Jan 05 20:09:55 crc kubenswrapper[4754]: I0105 20:09:55.611523 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="e16df7cb-3531-4b4e-ad51-275d4ff495d0" containerName="registry-server" Jan 05 20:09:55 crc kubenswrapper[4754]: I0105 20:09:55.611532 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5e9d216-d5aa-409f-b657-259b931ceaf5" containerName="marketplace-operator" Jan 05 20:09:55 crc kubenswrapper[4754]: I0105 20:09:55.611542 4754 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="08caaa2f-775e-45c6-b097-b0550f593ff3" containerName="registry-server" Jan 05 20:09:55 crc kubenswrapper[4754]: I0105 20:09:55.611554 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="20730210-a087-46e2-a311-ffa8a3bc370d" containerName="registry-server" Jan 05 20:09:55 crc kubenswrapper[4754]: I0105 20:09:55.611563 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5e9d216-d5aa-409f-b657-259b931ceaf5" containerName="marketplace-operator" Jan 05 20:09:55 crc kubenswrapper[4754]: I0105 20:09:55.612387 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j8r5p" Jan 05 20:09:55 crc kubenswrapper[4754]: I0105 20:09:55.615502 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 05 20:09:55 crc kubenswrapper[4754]: I0105 20:09:55.621282 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j8r5p"] Jan 05 20:09:55 crc kubenswrapper[4754]: I0105 20:09:55.667244 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f585df1-958f-4733-a720-2d37460d2b12-catalog-content\") pod \"certified-operators-j8r5p\" (UID: \"5f585df1-958f-4733-a720-2d37460d2b12\") " pod="openshift-marketplace/certified-operators-j8r5p" Jan 05 20:09:55 crc kubenswrapper[4754]: I0105 20:09:55.667371 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsvcv\" (UniqueName: \"kubernetes.io/projected/5f585df1-958f-4733-a720-2d37460d2b12-kube-api-access-qsvcv\") pod \"certified-operators-j8r5p\" (UID: \"5f585df1-958f-4733-a720-2d37460d2b12\") " pod="openshift-marketplace/certified-operators-j8r5p" Jan 05 20:09:55 crc kubenswrapper[4754]: I0105 20:09:55.667545 4754 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f585df1-958f-4733-a720-2d37460d2b12-utilities\") pod \"certified-operators-j8r5p\" (UID: \"5f585df1-958f-4733-a720-2d37460d2b12\") " pod="openshift-marketplace/certified-operators-j8r5p" Jan 05 20:09:55 crc kubenswrapper[4754]: I0105 20:09:55.769760 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsvcv\" (UniqueName: \"kubernetes.io/projected/5f585df1-958f-4733-a720-2d37460d2b12-kube-api-access-qsvcv\") pod \"certified-operators-j8r5p\" (UID: \"5f585df1-958f-4733-a720-2d37460d2b12\") " pod="openshift-marketplace/certified-operators-j8r5p" Jan 05 20:09:55 crc kubenswrapper[4754]: I0105 20:09:55.769959 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f585df1-958f-4733-a720-2d37460d2b12-utilities\") pod \"certified-operators-j8r5p\" (UID: \"5f585df1-958f-4733-a720-2d37460d2b12\") " pod="openshift-marketplace/certified-operators-j8r5p" Jan 05 20:09:55 crc kubenswrapper[4754]: I0105 20:09:55.770101 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f585df1-958f-4733-a720-2d37460d2b12-catalog-content\") pod \"certified-operators-j8r5p\" (UID: \"5f585df1-958f-4733-a720-2d37460d2b12\") " pod="openshift-marketplace/certified-operators-j8r5p" Jan 05 20:09:55 crc kubenswrapper[4754]: I0105 20:09:55.770527 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f585df1-958f-4733-a720-2d37460d2b12-catalog-content\") pod \"certified-operators-j8r5p\" (UID: \"5f585df1-958f-4733-a720-2d37460d2b12\") " pod="openshift-marketplace/certified-operators-j8r5p" Jan 05 20:09:55 crc kubenswrapper[4754]: I0105 20:09:55.770556 4754 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f585df1-958f-4733-a720-2d37460d2b12-utilities\") pod \"certified-operators-j8r5p\" (UID: \"5f585df1-958f-4733-a720-2d37460d2b12\") " pod="openshift-marketplace/certified-operators-j8r5p" Jan 05 20:09:55 crc kubenswrapper[4754]: I0105 20:09:55.791356 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsvcv\" (UniqueName: \"kubernetes.io/projected/5f585df1-958f-4733-a720-2d37460d2b12-kube-api-access-qsvcv\") pod \"certified-operators-j8r5p\" (UID: \"5f585df1-958f-4733-a720-2d37460d2b12\") " pod="openshift-marketplace/certified-operators-j8r5p" Jan 05 20:09:55 crc kubenswrapper[4754]: I0105 20:09:55.814436 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-n9kmf"] Jan 05 20:09:55 crc kubenswrapper[4754]: I0105 20:09:55.818529 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n9kmf" Jan 05 20:09:55 crc kubenswrapper[4754]: I0105 20:09:55.820554 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 05 20:09:55 crc kubenswrapper[4754]: I0105 20:09:55.824752 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n9kmf"] Jan 05 20:09:55 crc kubenswrapper[4754]: I0105 20:09:55.871665 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/187150cd-d7a9-4dfd-8151-0e6a88e82ddc-utilities\") pod \"community-operators-n9kmf\" (UID: \"187150cd-d7a9-4dfd-8151-0e6a88e82ddc\") " pod="openshift-marketplace/community-operators-n9kmf" Jan 05 20:09:55 crc kubenswrapper[4754]: I0105 20:09:55.871768 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-w26p4\" (UniqueName: \"kubernetes.io/projected/187150cd-d7a9-4dfd-8151-0e6a88e82ddc-kube-api-access-w26p4\") pod \"community-operators-n9kmf\" (UID: \"187150cd-d7a9-4dfd-8151-0e6a88e82ddc\") " pod="openshift-marketplace/community-operators-n9kmf" Jan 05 20:09:55 crc kubenswrapper[4754]: I0105 20:09:55.871854 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/187150cd-d7a9-4dfd-8151-0e6a88e82ddc-catalog-content\") pod \"community-operators-n9kmf\" (UID: \"187150cd-d7a9-4dfd-8151-0e6a88e82ddc\") " pod="openshift-marketplace/community-operators-n9kmf" Jan 05 20:09:55 crc kubenswrapper[4754]: I0105 20:09:55.973007 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w26p4\" (UniqueName: \"kubernetes.io/projected/187150cd-d7a9-4dfd-8151-0e6a88e82ddc-kube-api-access-w26p4\") pod \"community-operators-n9kmf\" (UID: \"187150cd-d7a9-4dfd-8151-0e6a88e82ddc\") " pod="openshift-marketplace/community-operators-n9kmf" Jan 05 20:09:55 crc kubenswrapper[4754]: I0105 20:09:55.973623 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/187150cd-d7a9-4dfd-8151-0e6a88e82ddc-catalog-content\") pod \"community-operators-n9kmf\" (UID: \"187150cd-d7a9-4dfd-8151-0e6a88e82ddc\") " pod="openshift-marketplace/community-operators-n9kmf" Jan 05 20:09:55 crc kubenswrapper[4754]: I0105 20:09:55.973706 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/187150cd-d7a9-4dfd-8151-0e6a88e82ddc-utilities\") pod \"community-operators-n9kmf\" (UID: \"187150cd-d7a9-4dfd-8151-0e6a88e82ddc\") " pod="openshift-marketplace/community-operators-n9kmf" Jan 05 20:09:55 crc kubenswrapper[4754]: I0105 20:09:55.974159 4754 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/187150cd-d7a9-4dfd-8151-0e6a88e82ddc-catalog-content\") pod \"community-operators-n9kmf\" (UID: \"187150cd-d7a9-4dfd-8151-0e6a88e82ddc\") " pod="openshift-marketplace/community-operators-n9kmf" Jan 05 20:09:55 crc kubenswrapper[4754]: I0105 20:09:55.974210 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/187150cd-d7a9-4dfd-8151-0e6a88e82ddc-utilities\") pod \"community-operators-n9kmf\" (UID: \"187150cd-d7a9-4dfd-8151-0e6a88e82ddc\") " pod="openshift-marketplace/community-operators-n9kmf" Jan 05 20:09:55 crc kubenswrapper[4754]: I0105 20:09:55.988178 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j8r5p" Jan 05 20:09:55 crc kubenswrapper[4754]: I0105 20:09:55.995033 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w26p4\" (UniqueName: \"kubernetes.io/projected/187150cd-d7a9-4dfd-8151-0e6a88e82ddc-kube-api-access-w26p4\") pod \"community-operators-n9kmf\" (UID: \"187150cd-d7a9-4dfd-8151-0e6a88e82ddc\") " pod="openshift-marketplace/community-operators-n9kmf" Jan 05 20:09:56 crc kubenswrapper[4754]: I0105 20:09:56.169423 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n9kmf" Jan 05 20:09:56 crc kubenswrapper[4754]: I0105 20:09:56.388162 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j8r5p"] Jan 05 20:09:56 crc kubenswrapper[4754]: I0105 20:09:56.590516 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n9kmf"] Jan 05 20:09:56 crc kubenswrapper[4754]: W0105 20:09:56.607592 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod187150cd_d7a9_4dfd_8151_0e6a88e82ddc.slice/crio-ef052317e11be03ff3db5de4d2a5c8d23f43943cfd4e952475df9b29825ab260 WatchSource:0}: Error finding container ef052317e11be03ff3db5de4d2a5c8d23f43943cfd4e952475df9b29825ab260: Status 404 returned error can't find the container with id ef052317e11be03ff3db5de4d2a5c8d23f43943cfd4e952475df9b29825ab260 Jan 05 20:09:56 crc kubenswrapper[4754]: I0105 20:09:56.725352 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n9kmf" event={"ID":"187150cd-d7a9-4dfd-8151-0e6a88e82ddc","Type":"ContainerStarted","Data":"ef052317e11be03ff3db5de4d2a5c8d23f43943cfd4e952475df9b29825ab260"} Jan 05 20:09:56 crc kubenswrapper[4754]: I0105 20:09:56.727187 4754 generic.go:334] "Generic (PLEG): container finished" podID="5f585df1-958f-4733-a720-2d37460d2b12" containerID="b516c8478b0332fcce397aa2d9ea7873a8cd9bf03135ca14d1e19bd62bab0766" exitCode=0 Jan 05 20:09:56 crc kubenswrapper[4754]: I0105 20:09:56.727362 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8r5p" event={"ID":"5f585df1-958f-4733-a720-2d37460d2b12","Type":"ContainerDied","Data":"b516c8478b0332fcce397aa2d9ea7873a8cd9bf03135ca14d1e19bd62bab0766"} Jan 05 20:09:56 crc kubenswrapper[4754]: I0105 20:09:56.727442 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-j8r5p" event={"ID":"5f585df1-958f-4733-a720-2d37460d2b12","Type":"ContainerStarted","Data":"cd1cdb28367aaa490ed8940a9c395324ff8b99fb5f00c7feacb989da8e311b44"} Jan 05 20:09:57 crc kubenswrapper[4754]: I0105 20:09:57.740180 4754 generic.go:334] "Generic (PLEG): container finished" podID="187150cd-d7a9-4dfd-8151-0e6a88e82ddc" containerID="35955f5f0021ae9ed0a8e7921d29ae3667f40155757b90d8a7120768e5f55b8c" exitCode=0 Jan 05 20:09:57 crc kubenswrapper[4754]: I0105 20:09:57.740376 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n9kmf" event={"ID":"187150cd-d7a9-4dfd-8151-0e6a88e82ddc","Type":"ContainerDied","Data":"35955f5f0021ae9ed0a8e7921d29ae3667f40155757b90d8a7120768e5f55b8c"} Jan 05 20:09:57 crc kubenswrapper[4754]: I0105 20:09:57.743967 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8r5p" event={"ID":"5f585df1-958f-4733-a720-2d37460d2b12","Type":"ContainerStarted","Data":"8c6e9fb53f34b16162654fb3b6cb85b697dbbba5ffec8bc062b90439bcf8891e"} Jan 05 20:09:58 crc kubenswrapper[4754]: I0105 20:09:58.212205 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tjtwh"] Jan 05 20:09:58 crc kubenswrapper[4754]: I0105 20:09:58.217521 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tjtwh" Jan 05 20:09:58 crc kubenswrapper[4754]: I0105 20:09:58.223795 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 05 20:09:58 crc kubenswrapper[4754]: I0105 20:09:58.231279 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tjtwh"] Jan 05 20:09:58 crc kubenswrapper[4754]: I0105 20:09:58.310307 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldkxj\" (UniqueName: \"kubernetes.io/projected/3af58cc4-e753-4eb0-91c9-8b93516d665e-kube-api-access-ldkxj\") pod \"redhat-marketplace-tjtwh\" (UID: \"3af58cc4-e753-4eb0-91c9-8b93516d665e\") " pod="openshift-marketplace/redhat-marketplace-tjtwh" Jan 05 20:09:58 crc kubenswrapper[4754]: I0105 20:09:58.310386 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3af58cc4-e753-4eb0-91c9-8b93516d665e-catalog-content\") pod \"redhat-marketplace-tjtwh\" (UID: \"3af58cc4-e753-4eb0-91c9-8b93516d665e\") " pod="openshift-marketplace/redhat-marketplace-tjtwh" Jan 05 20:09:58 crc kubenswrapper[4754]: I0105 20:09:58.310426 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3af58cc4-e753-4eb0-91c9-8b93516d665e-utilities\") pod \"redhat-marketplace-tjtwh\" (UID: \"3af58cc4-e753-4eb0-91c9-8b93516d665e\") " pod="openshift-marketplace/redhat-marketplace-tjtwh" Jan 05 20:09:58 crc kubenswrapper[4754]: I0105 20:09:58.402999 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kk2wq"] Jan 05 20:09:58 crc kubenswrapper[4754]: I0105 20:09:58.404222 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kk2wq" Jan 05 20:09:58 crc kubenswrapper[4754]: I0105 20:09:58.407177 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 05 20:09:58 crc kubenswrapper[4754]: I0105 20:09:58.411534 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldkxj\" (UniqueName: \"kubernetes.io/projected/3af58cc4-e753-4eb0-91c9-8b93516d665e-kube-api-access-ldkxj\") pod \"redhat-marketplace-tjtwh\" (UID: \"3af58cc4-e753-4eb0-91c9-8b93516d665e\") " pod="openshift-marketplace/redhat-marketplace-tjtwh" Jan 05 20:09:58 crc kubenswrapper[4754]: I0105 20:09:58.411679 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3af58cc4-e753-4eb0-91c9-8b93516d665e-catalog-content\") pod \"redhat-marketplace-tjtwh\" (UID: \"3af58cc4-e753-4eb0-91c9-8b93516d665e\") " pod="openshift-marketplace/redhat-marketplace-tjtwh" Jan 05 20:09:58 crc kubenswrapper[4754]: I0105 20:09:58.411798 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3af58cc4-e753-4eb0-91c9-8b93516d665e-utilities\") pod \"redhat-marketplace-tjtwh\" (UID: \"3af58cc4-e753-4eb0-91c9-8b93516d665e\") " pod="openshift-marketplace/redhat-marketplace-tjtwh" Jan 05 20:09:58 crc kubenswrapper[4754]: I0105 20:09:58.412390 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3af58cc4-e753-4eb0-91c9-8b93516d665e-utilities\") pod \"redhat-marketplace-tjtwh\" (UID: \"3af58cc4-e753-4eb0-91c9-8b93516d665e\") " pod="openshift-marketplace/redhat-marketplace-tjtwh" Jan 05 20:09:58 crc kubenswrapper[4754]: I0105 20:09:58.412608 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/3af58cc4-e753-4eb0-91c9-8b93516d665e-catalog-content\") pod \"redhat-marketplace-tjtwh\" (UID: \"3af58cc4-e753-4eb0-91c9-8b93516d665e\") " pod="openshift-marketplace/redhat-marketplace-tjtwh" Jan 05 20:09:58 crc kubenswrapper[4754]: I0105 20:09:58.419468 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kk2wq"] Jan 05 20:09:58 crc kubenswrapper[4754]: I0105 20:09:58.436444 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldkxj\" (UniqueName: \"kubernetes.io/projected/3af58cc4-e753-4eb0-91c9-8b93516d665e-kube-api-access-ldkxj\") pod \"redhat-marketplace-tjtwh\" (UID: \"3af58cc4-e753-4eb0-91c9-8b93516d665e\") " pod="openshift-marketplace/redhat-marketplace-tjtwh" Jan 05 20:09:58 crc kubenswrapper[4754]: I0105 20:09:58.513265 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dabb102c-20ff-4424-95d7-d26f22f594f5-utilities\") pod \"redhat-operators-kk2wq\" (UID: \"dabb102c-20ff-4424-95d7-d26f22f594f5\") " pod="openshift-marketplace/redhat-operators-kk2wq" Jan 05 20:09:58 crc kubenswrapper[4754]: I0105 20:09:58.513419 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvwj4\" (UniqueName: \"kubernetes.io/projected/dabb102c-20ff-4424-95d7-d26f22f594f5-kube-api-access-bvwj4\") pod \"redhat-operators-kk2wq\" (UID: \"dabb102c-20ff-4424-95d7-d26f22f594f5\") " pod="openshift-marketplace/redhat-operators-kk2wq" Jan 05 20:09:58 crc kubenswrapper[4754]: I0105 20:09:58.513482 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dabb102c-20ff-4424-95d7-d26f22f594f5-catalog-content\") pod \"redhat-operators-kk2wq\" (UID: \"dabb102c-20ff-4424-95d7-d26f22f594f5\") " 
pod="openshift-marketplace/redhat-operators-kk2wq" Jan 05 20:09:58 crc kubenswrapper[4754]: I0105 20:09:58.579432 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tjtwh" Jan 05 20:09:58 crc kubenswrapper[4754]: I0105 20:09:58.614154 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dabb102c-20ff-4424-95d7-d26f22f594f5-catalog-content\") pod \"redhat-operators-kk2wq\" (UID: \"dabb102c-20ff-4424-95d7-d26f22f594f5\") " pod="openshift-marketplace/redhat-operators-kk2wq" Jan 05 20:09:58 crc kubenswrapper[4754]: I0105 20:09:58.614385 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dabb102c-20ff-4424-95d7-d26f22f594f5-utilities\") pod \"redhat-operators-kk2wq\" (UID: \"dabb102c-20ff-4424-95d7-d26f22f594f5\") " pod="openshift-marketplace/redhat-operators-kk2wq" Jan 05 20:09:58 crc kubenswrapper[4754]: I0105 20:09:58.614429 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvwj4\" (UniqueName: \"kubernetes.io/projected/dabb102c-20ff-4424-95d7-d26f22f594f5-kube-api-access-bvwj4\") pod \"redhat-operators-kk2wq\" (UID: \"dabb102c-20ff-4424-95d7-d26f22f594f5\") " pod="openshift-marketplace/redhat-operators-kk2wq" Jan 05 20:09:58 crc kubenswrapper[4754]: I0105 20:09:58.614672 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dabb102c-20ff-4424-95d7-d26f22f594f5-catalog-content\") pod \"redhat-operators-kk2wq\" (UID: \"dabb102c-20ff-4424-95d7-d26f22f594f5\") " pod="openshift-marketplace/redhat-operators-kk2wq" Jan 05 20:09:58 crc kubenswrapper[4754]: I0105 20:09:58.614889 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/dabb102c-20ff-4424-95d7-d26f22f594f5-utilities\") pod \"redhat-operators-kk2wq\" (UID: \"dabb102c-20ff-4424-95d7-d26f22f594f5\") " pod="openshift-marketplace/redhat-operators-kk2wq" Jan 05 20:09:58 crc kubenswrapper[4754]: I0105 20:09:58.632829 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvwj4\" (UniqueName: \"kubernetes.io/projected/dabb102c-20ff-4424-95d7-d26f22f594f5-kube-api-access-bvwj4\") pod \"redhat-operators-kk2wq\" (UID: \"dabb102c-20ff-4424-95d7-d26f22f594f5\") " pod="openshift-marketplace/redhat-operators-kk2wq" Jan 05 20:09:58 crc kubenswrapper[4754]: I0105 20:09:58.732240 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kk2wq" Jan 05 20:09:58 crc kubenswrapper[4754]: I0105 20:09:58.760805 4754 generic.go:334] "Generic (PLEG): container finished" podID="5f585df1-958f-4733-a720-2d37460d2b12" containerID="8c6e9fb53f34b16162654fb3b6cb85b697dbbba5ffec8bc062b90439bcf8891e" exitCode=0 Jan 05 20:09:58 crc kubenswrapper[4754]: I0105 20:09:58.760844 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8r5p" event={"ID":"5f585df1-958f-4733-a720-2d37460d2b12","Type":"ContainerDied","Data":"8c6e9fb53f34b16162654fb3b6cb85b697dbbba5ffec8bc062b90439bcf8891e"} Jan 05 20:09:59 crc kubenswrapper[4754]: I0105 20:09:59.021500 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tjtwh"] Jan 05 20:09:59 crc kubenswrapper[4754]: W0105 20:09:59.040864 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3af58cc4_e753_4eb0_91c9_8b93516d665e.slice/crio-4310056fb90f8dfb788e846aa0bc433f658b5cdf0bafbf4161a7b472f33c4f89 WatchSource:0}: Error finding container 4310056fb90f8dfb788e846aa0bc433f658b5cdf0bafbf4161a7b472f33c4f89: Status 404 returned error can't find the 
container with id 4310056fb90f8dfb788e846aa0bc433f658b5cdf0bafbf4161a7b472f33c4f89 Jan 05 20:09:59 crc kubenswrapper[4754]: I0105 20:09:59.139978 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kk2wq"] Jan 05 20:09:59 crc kubenswrapper[4754]: W0105 20:09:59.146202 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddabb102c_20ff_4424_95d7_d26f22f594f5.slice/crio-a14f59e42be1c4a9cff6a8751b4da9cb8b29114ae314a422e2df5645fa4a8633 WatchSource:0}: Error finding container a14f59e42be1c4a9cff6a8751b4da9cb8b29114ae314a422e2df5645fa4a8633: Status 404 returned error can't find the container with id a14f59e42be1c4a9cff6a8751b4da9cb8b29114ae314a422e2df5645fa4a8633 Jan 05 20:09:59 crc kubenswrapper[4754]: I0105 20:09:59.770717 4754 generic.go:334] "Generic (PLEG): container finished" podID="187150cd-d7a9-4dfd-8151-0e6a88e82ddc" containerID="4f30f5267733562d0f945a419de27778ee35f91fc0dde01aaf0191385d3b23f9" exitCode=0 Jan 05 20:09:59 crc kubenswrapper[4754]: I0105 20:09:59.770813 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n9kmf" event={"ID":"187150cd-d7a9-4dfd-8151-0e6a88e82ddc","Type":"ContainerDied","Data":"4f30f5267733562d0f945a419de27778ee35f91fc0dde01aaf0191385d3b23f9"} Jan 05 20:09:59 crc kubenswrapper[4754]: I0105 20:09:59.781051 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8r5p" event={"ID":"5f585df1-958f-4733-a720-2d37460d2b12","Type":"ContainerStarted","Data":"da91bb74f81bebc2d171dcc204fb957f66159f0ed6709642769f45790857148b"} Jan 05 20:09:59 crc kubenswrapper[4754]: I0105 20:09:59.782686 4754 generic.go:334] "Generic (PLEG): container finished" podID="3af58cc4-e753-4eb0-91c9-8b93516d665e" containerID="6910d0b3371224e6e33eb8c0e09674227140a7e44b0668123aaa9baa348d5c22" exitCode=0 Jan 05 20:09:59 crc kubenswrapper[4754]: I0105 
20:09:59.782728 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tjtwh" event={"ID":"3af58cc4-e753-4eb0-91c9-8b93516d665e","Type":"ContainerDied","Data":"6910d0b3371224e6e33eb8c0e09674227140a7e44b0668123aaa9baa348d5c22"} Jan 05 20:09:59 crc kubenswrapper[4754]: I0105 20:09:59.782743 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tjtwh" event={"ID":"3af58cc4-e753-4eb0-91c9-8b93516d665e","Type":"ContainerStarted","Data":"4310056fb90f8dfb788e846aa0bc433f658b5cdf0bafbf4161a7b472f33c4f89"} Jan 05 20:09:59 crc kubenswrapper[4754]: I0105 20:09:59.787229 4754 generic.go:334] "Generic (PLEG): container finished" podID="dabb102c-20ff-4424-95d7-d26f22f594f5" containerID="34c6c68c5e60401a08142c14fd86da76069173e633d2173eaf3be094b8c30f01" exitCode=0 Jan 05 20:09:59 crc kubenswrapper[4754]: I0105 20:09:59.787277 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kk2wq" event={"ID":"dabb102c-20ff-4424-95d7-d26f22f594f5","Type":"ContainerDied","Data":"34c6c68c5e60401a08142c14fd86da76069173e633d2173eaf3be094b8c30f01"} Jan 05 20:09:59 crc kubenswrapper[4754]: I0105 20:09:59.787326 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kk2wq" event={"ID":"dabb102c-20ff-4424-95d7-d26f22f594f5","Type":"ContainerStarted","Data":"a14f59e42be1c4a9cff6a8751b4da9cb8b29114ae314a422e2df5645fa4a8633"} Jan 05 20:09:59 crc kubenswrapper[4754]: I0105 20:09:59.805076 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-j8r5p" podStartSLOduration=2.306892533 podStartE2EDuration="4.805060225s" podCreationTimestamp="2026-01-05 20:09:55 +0000 UTC" firstStartedPulling="2026-01-05 20:09:56.728797796 +0000 UTC m=+283.437981680" lastFinishedPulling="2026-01-05 20:09:59.226965498 +0000 UTC m=+285.936149372" observedRunningTime="2026-01-05 
20:09:59.803140163 +0000 UTC m=+286.512324037" watchObservedRunningTime="2026-01-05 20:09:59.805060225 +0000 UTC m=+286.514244099" Jan 05 20:10:01 crc kubenswrapper[4754]: I0105 20:10:01.802016 4754 generic.go:334] "Generic (PLEG): container finished" podID="dabb102c-20ff-4424-95d7-d26f22f594f5" containerID="48c6656862e466fa862705c64e104fc47f2134d4665e8850f8339648998a6a74" exitCode=0 Jan 05 20:10:01 crc kubenswrapper[4754]: I0105 20:10:01.802626 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kk2wq" event={"ID":"dabb102c-20ff-4424-95d7-d26f22f594f5","Type":"ContainerDied","Data":"48c6656862e466fa862705c64e104fc47f2134d4665e8850f8339648998a6a74"} Jan 05 20:10:01 crc kubenswrapper[4754]: I0105 20:10:01.806392 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n9kmf" event={"ID":"187150cd-d7a9-4dfd-8151-0e6a88e82ddc","Type":"ContainerStarted","Data":"9188d8f3d09ce99a1657898136234cc41b8ac824abed6024a0e41bdfc7a64fc9"} Jan 05 20:10:01 crc kubenswrapper[4754]: I0105 20:10:01.809452 4754 generic.go:334] "Generic (PLEG): container finished" podID="3af58cc4-e753-4eb0-91c9-8b93516d665e" containerID="e4fde04b2a7b5167491f0c0fba9a59f1aa69a5cd3236e6635d869b074709d6b2" exitCode=0 Jan 05 20:10:01 crc kubenswrapper[4754]: I0105 20:10:01.809483 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tjtwh" event={"ID":"3af58cc4-e753-4eb0-91c9-8b93516d665e","Type":"ContainerDied","Data":"e4fde04b2a7b5167491f0c0fba9a59f1aa69a5cd3236e6635d869b074709d6b2"} Jan 05 20:10:01 crc kubenswrapper[4754]: I0105 20:10:01.872336 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-n9kmf" podStartSLOduration=3.864568332 podStartE2EDuration="6.872287546s" podCreationTimestamp="2026-01-05 20:09:55 +0000 UTC" firstStartedPulling="2026-01-05 20:09:57.742350585 +0000 UTC m=+284.451534459" 
lastFinishedPulling="2026-01-05 20:10:00.750069789 +0000 UTC m=+287.459253673" observedRunningTime="2026-01-05 20:10:01.866920252 +0000 UTC m=+288.576104146" watchObservedRunningTime="2026-01-05 20:10:01.872287546 +0000 UTC m=+288.581471430" Jan 05 20:10:02 crc kubenswrapper[4754]: I0105 20:10:02.817196 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tjtwh" event={"ID":"3af58cc4-e753-4eb0-91c9-8b93516d665e","Type":"ContainerStarted","Data":"e702c1944d8c23b99c342319d216ec41facb7da26ecaa68beccecc41cdf9f253"} Jan 05 20:10:02 crc kubenswrapper[4754]: I0105 20:10:02.819534 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kk2wq" event={"ID":"dabb102c-20ff-4424-95d7-d26f22f594f5","Type":"ContainerStarted","Data":"a88a8aa3be23b0a224caafc340d54c640e60cc34f16442c1eae3a67f022adbbe"} Jan 05 20:10:02 crc kubenswrapper[4754]: I0105 20:10:02.839750 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tjtwh" podStartSLOduration=2.3510542660000002 podStartE2EDuration="4.839731324s" podCreationTimestamp="2026-01-05 20:09:58 +0000 UTC" firstStartedPulling="2026-01-05 20:09:59.785401275 +0000 UTC m=+286.494585169" lastFinishedPulling="2026-01-05 20:10:02.274078353 +0000 UTC m=+288.983262227" observedRunningTime="2026-01-05 20:10:02.837949356 +0000 UTC m=+289.547133240" watchObservedRunningTime="2026-01-05 20:10:02.839731324 +0000 UTC m=+289.548915208" Jan 05 20:10:02 crc kubenswrapper[4754]: I0105 20:10:02.864488 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kk2wq" podStartSLOduration=2.335062787 podStartE2EDuration="4.864463341s" podCreationTimestamp="2026-01-05 20:09:58 +0000 UTC" firstStartedPulling="2026-01-05 20:09:59.788863289 +0000 UTC m=+286.498047163" lastFinishedPulling="2026-01-05 20:10:02.318263843 +0000 UTC m=+289.027447717" 
observedRunningTime="2026-01-05 20:10:02.857803981 +0000 UTC m=+289.566987875" watchObservedRunningTime="2026-01-05 20:10:02.864463341 +0000 UTC m=+289.573647235" Jan 05 20:10:05 crc kubenswrapper[4754]: I0105 20:10:05.989025 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-j8r5p" Jan 05 20:10:05 crc kubenswrapper[4754]: I0105 20:10:05.989559 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-j8r5p" Jan 05 20:10:06 crc kubenswrapper[4754]: I0105 20:10:06.036823 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-j8r5p" Jan 05 20:10:06 crc kubenswrapper[4754]: I0105 20:10:06.170572 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-n9kmf" Jan 05 20:10:06 crc kubenswrapper[4754]: I0105 20:10:06.171472 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-n9kmf" Jan 05 20:10:06 crc kubenswrapper[4754]: I0105 20:10:06.212350 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-n9kmf" Jan 05 20:10:06 crc kubenswrapper[4754]: I0105 20:10:06.888526 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-j8r5p" Jan 05 20:10:06 crc kubenswrapper[4754]: I0105 20:10:06.900204 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-n9kmf" Jan 05 20:10:08 crc kubenswrapper[4754]: I0105 20:10:08.580285 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tjtwh" Jan 05 20:10:08 crc kubenswrapper[4754]: I0105 20:10:08.580647 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/redhat-marketplace-tjtwh" Jan 05 20:10:08 crc kubenswrapper[4754]: I0105 20:10:08.654126 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tjtwh" Jan 05 20:10:08 crc kubenswrapper[4754]: I0105 20:10:08.732468 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kk2wq" Jan 05 20:10:08 crc kubenswrapper[4754]: I0105 20:10:08.732601 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kk2wq" Jan 05 20:10:08 crc kubenswrapper[4754]: I0105 20:10:08.781184 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kk2wq" Jan 05 20:10:08 crc kubenswrapper[4754]: I0105 20:10:08.885105 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tjtwh" Jan 05 20:10:08 crc kubenswrapper[4754]: I0105 20:10:08.890997 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kk2wq" Jan 05 20:10:13 crc kubenswrapper[4754]: I0105 20:10:13.445977 4754 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 05 20:10:24 crc kubenswrapper[4754]: I0105 20:10:24.659187 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-qtqhl"] Jan 05 20:10:24 crc kubenswrapper[4754]: I0105 20:10:24.664876 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-qtqhl" Jan 05 20:10:24 crc kubenswrapper[4754]: I0105 20:10:24.672234 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-qtqhl"] Jan 05 20:10:24 crc kubenswrapper[4754]: I0105 20:10:24.677754 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Jan 05 20:10:24 crc kubenswrapper[4754]: I0105 20:10:24.678088 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Jan 05 20:10:24 crc kubenswrapper[4754]: I0105 20:10:24.680207 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Jan 05 20:10:24 crc kubenswrapper[4754]: I0105 20:10:24.680341 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l" Jan 05 20:10:24 crc kubenswrapper[4754]: I0105 20:10:24.680224 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Jan 05 20:10:24 crc kubenswrapper[4754]: I0105 20:10:24.790853 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/a22e8ba6-8f5e-44ac-8a27-410213c5f89d-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-qtqhl\" (UID: \"a22e8ba6-8f5e-44ac-8a27-410213c5f89d\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-qtqhl" Jan 05 20:10:24 crc kubenswrapper[4754]: I0105 20:10:24.791082 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxxh2\" (UniqueName: \"kubernetes.io/projected/a22e8ba6-8f5e-44ac-8a27-410213c5f89d-kube-api-access-dxxh2\") pod \"cluster-monitoring-operator-6d5b84845-qtqhl\" 
(UID: \"a22e8ba6-8f5e-44ac-8a27-410213c5f89d\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-qtqhl" Jan 05 20:10:24 crc kubenswrapper[4754]: I0105 20:10:24.791353 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/a22e8ba6-8f5e-44ac-8a27-410213c5f89d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-qtqhl\" (UID: \"a22e8ba6-8f5e-44ac-8a27-410213c5f89d\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-qtqhl" Jan 05 20:10:24 crc kubenswrapper[4754]: I0105 20:10:24.892908 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxxh2\" (UniqueName: \"kubernetes.io/projected/a22e8ba6-8f5e-44ac-8a27-410213c5f89d-kube-api-access-dxxh2\") pod \"cluster-monitoring-operator-6d5b84845-qtqhl\" (UID: \"a22e8ba6-8f5e-44ac-8a27-410213c5f89d\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-qtqhl" Jan 05 20:10:24 crc kubenswrapper[4754]: I0105 20:10:24.893085 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/a22e8ba6-8f5e-44ac-8a27-410213c5f89d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-qtqhl\" (UID: \"a22e8ba6-8f5e-44ac-8a27-410213c5f89d\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-qtqhl" Jan 05 20:10:24 crc kubenswrapper[4754]: I0105 20:10:24.893209 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/a22e8ba6-8f5e-44ac-8a27-410213c5f89d-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-qtqhl\" (UID: \"a22e8ba6-8f5e-44ac-8a27-410213c5f89d\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-qtqhl" Jan 05 20:10:24 crc kubenswrapper[4754]: I0105 
20:10:24.895013 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/a22e8ba6-8f5e-44ac-8a27-410213c5f89d-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-qtqhl\" (UID: \"a22e8ba6-8f5e-44ac-8a27-410213c5f89d\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-qtqhl" Jan 05 20:10:24 crc kubenswrapper[4754]: I0105 20:10:24.903434 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/a22e8ba6-8f5e-44ac-8a27-410213c5f89d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-qtqhl\" (UID: \"a22e8ba6-8f5e-44ac-8a27-410213c5f89d\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-qtqhl" Jan 05 20:10:24 crc kubenswrapper[4754]: I0105 20:10:24.915649 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxxh2\" (UniqueName: \"kubernetes.io/projected/a22e8ba6-8f5e-44ac-8a27-410213c5f89d-kube-api-access-dxxh2\") pod \"cluster-monitoring-operator-6d5b84845-qtqhl\" (UID: \"a22e8ba6-8f5e-44ac-8a27-410213c5f89d\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-qtqhl" Jan 05 20:10:24 crc kubenswrapper[4754]: I0105 20:10:24.993906 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-qtqhl" Jan 05 20:10:25 crc kubenswrapper[4754]: I0105 20:10:25.444047 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-qtqhl"] Jan 05 20:10:25 crc kubenswrapper[4754]: W0105 20:10:25.451910 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda22e8ba6_8f5e_44ac_8a27_410213c5f89d.slice/crio-166f7f6980eea426339ee01a0a3ef5bd7926bd36e41a47a366f719ae7d01c533 WatchSource:0}: Error finding container 166f7f6980eea426339ee01a0a3ef5bd7926bd36e41a47a366f719ae7d01c533: Status 404 returned error can't find the container with id 166f7f6980eea426339ee01a0a3ef5bd7926bd36e41a47a366f719ae7d01c533 Jan 05 20:10:25 crc kubenswrapper[4754]: I0105 20:10:25.946793 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-qtqhl" event={"ID":"a22e8ba6-8f5e-44ac-8a27-410213c5f89d","Type":"ContainerStarted","Data":"166f7f6980eea426339ee01a0a3ef5bd7926bd36e41a47a366f719ae7d01c533"} Jan 05 20:10:27 crc kubenswrapper[4754]: I0105 20:10:27.976169 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-qtqhl" event={"ID":"a22e8ba6-8f5e-44ac-8a27-410213c5f89d","Type":"ContainerStarted","Data":"9b2d6ec7754cf88fe13d1c56d4ad7050fb4490356028fb911dd9a01b276753b2"} Jan 05 20:10:28 crc kubenswrapper[4754]: I0105 20:10:28.005132 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-qtqhl" podStartSLOduration=1.974053708 podStartE2EDuration="4.005116004s" podCreationTimestamp="2026-01-05 20:10:24 +0000 UTC" firstStartedPulling="2026-01-05 20:10:25.454882179 +0000 UTC m=+312.164066073" lastFinishedPulling="2026-01-05 20:10:27.485944485 +0000 UTC m=+314.195128369" 
observedRunningTime="2026-01-05 20:10:28.001455306 +0000 UTC m=+314.710639180" watchObservedRunningTime="2026-01-05 20:10:28.005116004 +0000 UTC m=+314.714299878" Jan 05 20:10:28 crc kubenswrapper[4754]: I0105 20:10:28.134430 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-679tw"] Jan 05 20:10:28 crc kubenswrapper[4754]: I0105 20:10:28.135148 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-679tw" Jan 05 20:10:28 crc kubenswrapper[4754]: I0105 20:10:28.137357 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Jan 05 20:10:28 crc kubenswrapper[4754]: I0105 20:10:28.137391 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-mb2w4" Jan 05 20:10:28 crc kubenswrapper[4754]: I0105 20:10:28.145622 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-679tw"] Jan 05 20:10:28 crc kubenswrapper[4754]: I0105 20:10:28.242300 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/c0d4f5db-f43e-4812-8e5b-5f1efcdcb913-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-679tw\" (UID: \"c0d4f5db-f43e-4812-8e5b-5f1efcdcb913\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-679tw" Jan 05 20:10:28 crc kubenswrapper[4754]: I0105 20:10:28.343392 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/c0d4f5db-f43e-4812-8e5b-5f1efcdcb913-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-679tw\" (UID: 
\"c0d4f5db-f43e-4812-8e5b-5f1efcdcb913\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-679tw" Jan 05 20:10:28 crc kubenswrapper[4754]: E0105 20:10:28.343659 4754 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Jan 05 20:10:28 crc kubenswrapper[4754]: E0105 20:10:28.343761 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c0d4f5db-f43e-4812-8e5b-5f1efcdcb913-tls-certificates podName:c0d4f5db-f43e-4812-8e5b-5f1efcdcb913 nodeName:}" failed. No retries permitted until 2026-01-05 20:10:28.843734639 +0000 UTC m=+315.552918543 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/c0d4f5db-f43e-4812-8e5b-5f1efcdcb913-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-679tw" (UID: "c0d4f5db-f43e-4812-8e5b-5f1efcdcb913") : secret "prometheus-operator-admission-webhook-tls" not found Jan 05 20:10:28 crc kubenswrapper[4754]: I0105 20:10:28.863572 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/c0d4f5db-f43e-4812-8e5b-5f1efcdcb913-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-679tw\" (UID: \"c0d4f5db-f43e-4812-8e5b-5f1efcdcb913\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-679tw" Jan 05 20:10:28 crc kubenswrapper[4754]: I0105 20:10:28.873830 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/c0d4f5db-f43e-4812-8e5b-5f1efcdcb913-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-679tw\" (UID: \"c0d4f5db-f43e-4812-8e5b-5f1efcdcb913\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-679tw" Jan 05 20:10:29 crc kubenswrapper[4754]: I0105 
20:10:29.056387 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-679tw" Jan 05 20:10:29 crc kubenswrapper[4754]: I0105 20:10:29.557462 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-679tw"] Jan 05 20:10:29 crc kubenswrapper[4754]: I0105 20:10:29.995961 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-679tw" event={"ID":"c0d4f5db-f43e-4812-8e5b-5f1efcdcb913","Type":"ContainerStarted","Data":"6883fb64991ab52779e1c8f932ada9b9627f3935bcb035b991f1eb41641ee692"} Jan 05 20:10:32 crc kubenswrapper[4754]: I0105 20:10:32.010217 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-679tw" event={"ID":"c0d4f5db-f43e-4812-8e5b-5f1efcdcb913","Type":"ContainerStarted","Data":"cb6fdab77c6e4bf3f91368d83ea8c587a0648cf6c260160c98eff086b1dbc950"} Jan 05 20:10:32 crc kubenswrapper[4754]: I0105 20:10:32.010786 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-679tw" Jan 05 20:10:32 crc kubenswrapper[4754]: I0105 20:10:32.021977 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-679tw" Jan 05 20:10:32 crc kubenswrapper[4754]: I0105 20:10:32.040850 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-679tw" podStartSLOduration=2.432460099 podStartE2EDuration="4.040812466s" podCreationTimestamp="2026-01-05 20:10:28 +0000 UTC" firstStartedPulling="2026-01-05 20:10:29.574554273 +0000 UTC m=+316.283738157" lastFinishedPulling="2026-01-05 20:10:31.18290663 +0000 UTC m=+317.892090524" 
observedRunningTime="2026-01-05 20:10:32.033091568 +0000 UTC m=+318.742275512" watchObservedRunningTime="2026-01-05 20:10:32.040812466 +0000 UTC m=+318.749996380" Jan 05 20:10:32 crc kubenswrapper[4754]: I0105 20:10:32.202825 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-pbg7b"] Jan 05 20:10:32 crc kubenswrapper[4754]: I0105 20:10:32.203975 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-pbg7b" Jan 05 20:10:32 crc kubenswrapper[4754]: I0105 20:10:32.205864 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Jan 05 20:10:32 crc kubenswrapper[4754]: I0105 20:10:32.205989 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-wkr7d" Jan 05 20:10:32 crc kubenswrapper[4754]: I0105 20:10:32.206513 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Jan 05 20:10:32 crc kubenswrapper[4754]: I0105 20:10:32.206535 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Jan 05 20:10:32 crc kubenswrapper[4754]: I0105 20:10:32.219538 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-pbg7b"] Jan 05 20:10:32 crc kubenswrapper[4754]: I0105 20:10:32.317549 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/cb648e11-e066-40d6-8b92-9767a1677daa-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-pbg7b\" (UID: \"cb648e11-e066-40d6-8b92-9767a1677daa\") " pod="openshift-monitoring/prometheus-operator-db54df47d-pbg7b" Jan 05 20:10:32 crc kubenswrapper[4754]: 
I0105 20:10:32.317650 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcfcc\" (UniqueName: \"kubernetes.io/projected/cb648e11-e066-40d6-8b92-9767a1677daa-kube-api-access-zcfcc\") pod \"prometheus-operator-db54df47d-pbg7b\" (UID: \"cb648e11-e066-40d6-8b92-9767a1677daa\") " pod="openshift-monitoring/prometheus-operator-db54df47d-pbg7b" Jan 05 20:10:32 crc kubenswrapper[4754]: I0105 20:10:32.317690 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/cb648e11-e066-40d6-8b92-9767a1677daa-metrics-client-ca\") pod \"prometheus-operator-db54df47d-pbg7b\" (UID: \"cb648e11-e066-40d6-8b92-9767a1677daa\") " pod="openshift-monitoring/prometheus-operator-db54df47d-pbg7b" Jan 05 20:10:32 crc kubenswrapper[4754]: I0105 20:10:32.317713 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/cb648e11-e066-40d6-8b92-9767a1677daa-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-pbg7b\" (UID: \"cb648e11-e066-40d6-8b92-9767a1677daa\") " pod="openshift-monitoring/prometheus-operator-db54df47d-pbg7b" Jan 05 20:10:32 crc kubenswrapper[4754]: I0105 20:10:32.418619 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/cb648e11-e066-40d6-8b92-9767a1677daa-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-pbg7b\" (UID: \"cb648e11-e066-40d6-8b92-9767a1677daa\") " pod="openshift-monitoring/prometheus-operator-db54df47d-pbg7b" Jan 05 20:10:32 crc kubenswrapper[4754]: I0105 20:10:32.418751 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcfcc\" (UniqueName: 
\"kubernetes.io/projected/cb648e11-e066-40d6-8b92-9767a1677daa-kube-api-access-zcfcc\") pod \"prometheus-operator-db54df47d-pbg7b\" (UID: \"cb648e11-e066-40d6-8b92-9767a1677daa\") " pod="openshift-monitoring/prometheus-operator-db54df47d-pbg7b" Jan 05 20:10:32 crc kubenswrapper[4754]: I0105 20:10:32.418811 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/cb648e11-e066-40d6-8b92-9767a1677daa-metrics-client-ca\") pod \"prometheus-operator-db54df47d-pbg7b\" (UID: \"cb648e11-e066-40d6-8b92-9767a1677daa\") " pod="openshift-monitoring/prometheus-operator-db54df47d-pbg7b" Jan 05 20:10:32 crc kubenswrapper[4754]: I0105 20:10:32.418842 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/cb648e11-e066-40d6-8b92-9767a1677daa-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-pbg7b\" (UID: \"cb648e11-e066-40d6-8b92-9767a1677daa\") " pod="openshift-monitoring/prometheus-operator-db54df47d-pbg7b" Jan 05 20:10:32 crc kubenswrapper[4754]: I0105 20:10:32.420704 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/cb648e11-e066-40d6-8b92-9767a1677daa-metrics-client-ca\") pod \"prometheus-operator-db54df47d-pbg7b\" (UID: \"cb648e11-e066-40d6-8b92-9767a1677daa\") " pod="openshift-monitoring/prometheus-operator-db54df47d-pbg7b" Jan 05 20:10:32 crc kubenswrapper[4754]: I0105 20:10:32.431921 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/cb648e11-e066-40d6-8b92-9767a1677daa-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-pbg7b\" (UID: \"cb648e11-e066-40d6-8b92-9767a1677daa\") " pod="openshift-monitoring/prometheus-operator-db54df47d-pbg7b" Jan 05 20:10:32 crc 
kubenswrapper[4754]: I0105 20:10:32.431987 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/cb648e11-e066-40d6-8b92-9767a1677daa-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-pbg7b\" (UID: \"cb648e11-e066-40d6-8b92-9767a1677daa\") " pod="openshift-monitoring/prometheus-operator-db54df47d-pbg7b" Jan 05 20:10:32 crc kubenswrapper[4754]: I0105 20:10:32.453267 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcfcc\" (UniqueName: \"kubernetes.io/projected/cb648e11-e066-40d6-8b92-9767a1677daa-kube-api-access-zcfcc\") pod \"prometheus-operator-db54df47d-pbg7b\" (UID: \"cb648e11-e066-40d6-8b92-9767a1677daa\") " pod="openshift-monitoring/prometheus-operator-db54df47d-pbg7b" Jan 05 20:10:32 crc kubenswrapper[4754]: I0105 20:10:32.520501 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-pbg7b" Jan 05 20:10:32 crc kubenswrapper[4754]: I0105 20:10:32.987056 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-pbg7b"] Jan 05 20:10:32 crc kubenswrapper[4754]: W0105 20:10:32.996523 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb648e11_e066_40d6_8b92_9767a1677daa.slice/crio-ad5e800cfe698b5b9b7616d670a9af17ab2e5cd96561f6b67bd2fbcab83ee528 WatchSource:0}: Error finding container ad5e800cfe698b5b9b7616d670a9af17ab2e5cd96561f6b67bd2fbcab83ee528: Status 404 returned error can't find the container with id ad5e800cfe698b5b9b7616d670a9af17ab2e5cd96561f6b67bd2fbcab83ee528 Jan 05 20:10:33 crc kubenswrapper[4754]: I0105 20:10:33.020672 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-pbg7b" 
event={"ID":"cb648e11-e066-40d6-8b92-9767a1677daa","Type":"ContainerStarted","Data":"ad5e800cfe698b5b9b7616d670a9af17ab2e5cd96561f6b67bd2fbcab83ee528"} Jan 05 20:10:36 crc kubenswrapper[4754]: I0105 20:10:36.049681 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-pbg7b" event={"ID":"cb648e11-e066-40d6-8b92-9767a1677daa","Type":"ContainerStarted","Data":"2a016004b608e1d929cbe4e7f868064b072f43555bf2b297cc61d2ac3acffdbf"} Jan 05 20:10:36 crc kubenswrapper[4754]: I0105 20:10:36.050640 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-pbg7b" event={"ID":"cb648e11-e066-40d6-8b92-9767a1677daa","Type":"ContainerStarted","Data":"f29fc3d302b9fd26f812d51b9730b939251fa4072ae122a94c460f393cd07c1a"} Jan 05 20:10:36 crc kubenswrapper[4754]: I0105 20:10:36.079737 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-pbg7b" podStartSLOduration=1.960914125 podStartE2EDuration="4.079717515s" podCreationTimestamp="2026-01-05 20:10:32 +0000 UTC" firstStartedPulling="2026-01-05 20:10:32.999655413 +0000 UTC m=+319.708839317" lastFinishedPulling="2026-01-05 20:10:35.118458833 +0000 UTC m=+321.827642707" observedRunningTime="2026-01-05 20:10:36.075949393 +0000 UTC m=+322.785133287" watchObservedRunningTime="2026-01-05 20:10:36.079717515 +0000 UTC m=+322.788901399" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.594580 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-n8ccd"] Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.595745 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-n8ccd" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.597726 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.604887 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-pdhsm" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.612530 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-n8ccd"] Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.612773 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.630224 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-x7twt"] Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.631219 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-x7twt" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.634600 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.634785 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-4fmfx" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.634782 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.649831 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-8hj7j"] Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.650845 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-8hj7j" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.653371 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-8x5cq" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.653726 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.655244 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.655412 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.673889 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-8hj7j"] Jan 05 20:10:37 crc kubenswrapper[4754]: 
I0105 20:10:37.698034 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/c1a46f64-d05d-4872-828f-a9207d53fdca-node-exporter-tls\") pod \"node-exporter-x7twt\" (UID: \"c1a46f64-d05d-4872-828f-a9207d53fdca\") " pod="openshift-monitoring/node-exporter-x7twt" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.698089 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/d411eb97-d9ac-4b72-8bd4-3edbd0041657-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-n8ccd\" (UID: \"d411eb97-d9ac-4b72-8bd4-3edbd0041657\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-n8ccd" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.698116 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/c1a46f64-d05d-4872-828f-a9207d53fdca-root\") pod \"node-exporter-x7twt\" (UID: \"c1a46f64-d05d-4872-828f-a9207d53fdca\") " pod="openshift-monitoring/node-exporter-x7twt" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.698228 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/c1a46f64-d05d-4872-828f-a9207d53fdca-node-exporter-wtmp\") pod \"node-exporter-x7twt\" (UID: \"c1a46f64-d05d-4872-828f-a9207d53fdca\") " pod="openshift-monitoring/node-exporter-x7twt" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.698264 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d411eb97-d9ac-4b72-8bd4-3edbd0041657-openshift-state-metrics-kube-rbac-proxy-config\") pod 
\"openshift-state-metrics-566fddb674-n8ccd\" (UID: \"d411eb97-d9ac-4b72-8bd4-3edbd0041657\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-n8ccd" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.698319 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c1a46f64-d05d-4872-828f-a9207d53fdca-metrics-client-ca\") pod \"node-exporter-x7twt\" (UID: \"c1a46f64-d05d-4872-828f-a9207d53fdca\") " pod="openshift-monitoring/node-exporter-x7twt" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.698361 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d411eb97-d9ac-4b72-8bd4-3edbd0041657-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-n8ccd\" (UID: \"d411eb97-d9ac-4b72-8bd4-3edbd0041657\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-n8ccd" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.698406 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nffjq\" (UniqueName: \"kubernetes.io/projected/d411eb97-d9ac-4b72-8bd4-3edbd0041657-kube-api-access-nffjq\") pod \"openshift-state-metrics-566fddb674-n8ccd\" (UID: \"d411eb97-d9ac-4b72-8bd4-3edbd0041657\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-n8ccd" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.698448 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/c1a46f64-d05d-4872-828f-a9207d53fdca-node-exporter-textfile\") pod \"node-exporter-x7twt\" (UID: \"c1a46f64-d05d-4872-828f-a9207d53fdca\") " pod="openshift-monitoring/node-exporter-x7twt" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.698489 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj4hp\" (UniqueName: \"kubernetes.io/projected/c1a46f64-d05d-4872-828f-a9207d53fdca-kube-api-access-dj4hp\") pod \"node-exporter-x7twt\" (UID: \"c1a46f64-d05d-4872-828f-a9207d53fdca\") " pod="openshift-monitoring/node-exporter-x7twt" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.698511 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c1a46f64-d05d-4872-828f-a9207d53fdca-sys\") pod \"node-exporter-x7twt\" (UID: \"c1a46f64-d05d-4872-828f-a9207d53fdca\") " pod="openshift-monitoring/node-exporter-x7twt" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.698531 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c1a46f64-d05d-4872-828f-a9207d53fdca-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-x7twt\" (UID: \"c1a46f64-d05d-4872-828f-a9207d53fdca\") " pod="openshift-monitoring/node-exporter-x7twt" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.799431 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nffjq\" (UniqueName: \"kubernetes.io/projected/d411eb97-d9ac-4b72-8bd4-3edbd0041657-kube-api-access-nffjq\") pod \"openshift-state-metrics-566fddb674-n8ccd\" (UID: \"d411eb97-d9ac-4b72-8bd4-3edbd0041657\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-n8ccd" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.799502 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/c1a46f64-d05d-4872-828f-a9207d53fdca-node-exporter-textfile\") pod \"node-exporter-x7twt\" (UID: \"c1a46f64-d05d-4872-828f-a9207d53fdca\") " pod="openshift-monitoring/node-exporter-x7twt" Jan 05 20:10:37 crc 
kubenswrapper[4754]: I0105 20:10:37.799536 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dj4hp\" (UniqueName: \"kubernetes.io/projected/c1a46f64-d05d-4872-828f-a9207d53fdca-kube-api-access-dj4hp\") pod \"node-exporter-x7twt\" (UID: \"c1a46f64-d05d-4872-828f-a9207d53fdca\") " pod="openshift-monitoring/node-exporter-x7twt" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.799556 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c1a46f64-d05d-4872-828f-a9207d53fdca-sys\") pod \"node-exporter-x7twt\" (UID: \"c1a46f64-d05d-4872-828f-a9207d53fdca\") " pod="openshift-monitoring/node-exporter-x7twt" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.799573 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c1a46f64-d05d-4872-828f-a9207d53fdca-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-x7twt\" (UID: \"c1a46f64-d05d-4872-828f-a9207d53fdca\") " pod="openshift-monitoring/node-exporter-x7twt" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.799648 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c1a46f64-d05d-4872-828f-a9207d53fdca-sys\") pod \"node-exporter-x7twt\" (UID: \"c1a46f64-d05d-4872-828f-a9207d53fdca\") " pod="openshift-monitoring/node-exporter-x7twt" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.799709 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/156006b9-3fa2-4e64-ab86-b5499e8769a3-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-8hj7j\" (UID: \"156006b9-3fa2-4e64-ab86-b5499e8769a3\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-8hj7j" Jan 05 20:10:37 crc 
kubenswrapper[4754]: I0105 20:10:37.800008 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/c1a46f64-d05d-4872-828f-a9207d53fdca-node-exporter-textfile\") pod \"node-exporter-x7twt\" (UID: \"c1a46f64-d05d-4872-828f-a9207d53fdca\") " pod="openshift-monitoring/node-exporter-x7twt" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.800583 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/c1a46f64-d05d-4872-828f-a9207d53fdca-node-exporter-tls\") pod \"node-exporter-x7twt\" (UID: \"c1a46f64-d05d-4872-828f-a9207d53fdca\") " pod="openshift-monitoring/node-exporter-x7twt" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.800614 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/156006b9-3fa2-4e64-ab86-b5499e8769a3-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-8hj7j\" (UID: \"156006b9-3fa2-4e64-ab86-b5499e8769a3\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-8hj7j" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.800637 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/156006b9-3fa2-4e64-ab86-b5499e8769a3-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-8hj7j\" (UID: \"156006b9-3fa2-4e64-ab86-b5499e8769a3\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-8hj7j" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.800664 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/d411eb97-d9ac-4b72-8bd4-3edbd0041657-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-n8ccd\" (UID: \"d411eb97-d9ac-4b72-8bd4-3edbd0041657\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-n8ccd" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.800704 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/c1a46f64-d05d-4872-828f-a9207d53fdca-root\") pod \"node-exporter-x7twt\" (UID: \"c1a46f64-d05d-4872-828f-a9207d53fdca\") " pod="openshift-monitoring/node-exporter-x7twt" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.800725 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cz67\" (UniqueName: \"kubernetes.io/projected/156006b9-3fa2-4e64-ab86-b5499e8769a3-kube-api-access-5cz67\") pod \"kube-state-metrics-777cb5bd5d-8hj7j\" (UID: \"156006b9-3fa2-4e64-ab86-b5499e8769a3\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-8hj7j" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.800750 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/156006b9-3fa2-4e64-ab86-b5499e8769a3-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-8hj7j\" (UID: \"156006b9-3fa2-4e64-ab86-b5499e8769a3\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-8hj7j" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.800771 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/c1a46f64-d05d-4872-828f-a9207d53fdca-root\") pod \"node-exporter-x7twt\" (UID: \"c1a46f64-d05d-4872-828f-a9207d53fdca\") " pod="openshift-monitoring/node-exporter-x7twt" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.800778 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/c1a46f64-d05d-4872-828f-a9207d53fdca-node-exporter-wtmp\") pod \"node-exporter-x7twt\" (UID: \"c1a46f64-d05d-4872-828f-a9207d53fdca\") " pod="openshift-monitoring/node-exporter-x7twt" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.800824 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d411eb97-d9ac-4b72-8bd4-3edbd0041657-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-n8ccd\" (UID: \"d411eb97-d9ac-4b72-8bd4-3edbd0041657\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-n8ccd" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.800855 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c1a46f64-d05d-4872-828f-a9207d53fdca-metrics-client-ca\") pod \"node-exporter-x7twt\" (UID: \"c1a46f64-d05d-4872-828f-a9207d53fdca\") " pod="openshift-monitoring/node-exporter-x7twt" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.800870 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/c1a46f64-d05d-4872-828f-a9207d53fdca-node-exporter-wtmp\") pod \"node-exporter-x7twt\" (UID: \"c1a46f64-d05d-4872-828f-a9207d53fdca\") " pod="openshift-monitoring/node-exporter-x7twt" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.800999 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d411eb97-d9ac-4b72-8bd4-3edbd0041657-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-n8ccd\" (UID: \"d411eb97-d9ac-4b72-8bd4-3edbd0041657\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-n8ccd" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 
20:10:37.801256 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/156006b9-3fa2-4e64-ab86-b5499e8769a3-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-8hj7j\" (UID: \"156006b9-3fa2-4e64-ab86-b5499e8769a3\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-8hj7j" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.801510 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c1a46f64-d05d-4872-828f-a9207d53fdca-metrics-client-ca\") pod \"node-exporter-x7twt\" (UID: \"c1a46f64-d05d-4872-828f-a9207d53fdca\") " pod="openshift-monitoring/node-exporter-x7twt" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.802541 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d411eb97-d9ac-4b72-8bd4-3edbd0041657-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-n8ccd\" (UID: \"d411eb97-d9ac-4b72-8bd4-3edbd0041657\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-n8ccd" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.806603 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d411eb97-d9ac-4b72-8bd4-3edbd0041657-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-n8ccd\" (UID: \"d411eb97-d9ac-4b72-8bd4-3edbd0041657\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-n8ccd" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.806658 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/c1a46f64-d05d-4872-828f-a9207d53fdca-node-exporter-tls\") pod \"node-exporter-x7twt\" (UID: 
\"c1a46f64-d05d-4872-828f-a9207d53fdca\") " pod="openshift-monitoring/node-exporter-x7twt" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.815663 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dj4hp\" (UniqueName: \"kubernetes.io/projected/c1a46f64-d05d-4872-828f-a9207d53fdca-kube-api-access-dj4hp\") pod \"node-exporter-x7twt\" (UID: \"c1a46f64-d05d-4872-828f-a9207d53fdca\") " pod="openshift-monitoring/node-exporter-x7twt" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.818889 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nffjq\" (UniqueName: \"kubernetes.io/projected/d411eb97-d9ac-4b72-8bd4-3edbd0041657-kube-api-access-nffjq\") pod \"openshift-state-metrics-566fddb674-n8ccd\" (UID: \"d411eb97-d9ac-4b72-8bd4-3edbd0041657\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-n8ccd" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.823668 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c1a46f64-d05d-4872-828f-a9207d53fdca-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-x7twt\" (UID: \"c1a46f64-d05d-4872-828f-a9207d53fdca\") " pod="openshift-monitoring/node-exporter-x7twt" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.823912 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/d411eb97-d9ac-4b72-8bd4-3edbd0041657-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-n8ccd\" (UID: \"d411eb97-d9ac-4b72-8bd4-3edbd0041657\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-n8ccd" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.902051 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cz67\" (UniqueName: 
\"kubernetes.io/projected/156006b9-3fa2-4e64-ab86-b5499e8769a3-kube-api-access-5cz67\") pod \"kube-state-metrics-777cb5bd5d-8hj7j\" (UID: \"156006b9-3fa2-4e64-ab86-b5499e8769a3\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-8hj7j" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.902090 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/156006b9-3fa2-4e64-ab86-b5499e8769a3-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-8hj7j\" (UID: \"156006b9-3fa2-4e64-ab86-b5499e8769a3\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-8hj7j" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.902127 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/156006b9-3fa2-4e64-ab86-b5499e8769a3-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-8hj7j\" (UID: \"156006b9-3fa2-4e64-ab86-b5499e8769a3\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-8hj7j" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.902186 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/156006b9-3fa2-4e64-ab86-b5499e8769a3-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-8hj7j\" (UID: \"156006b9-3fa2-4e64-ab86-b5499e8769a3\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-8hj7j" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.902205 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/156006b9-3fa2-4e64-ab86-b5499e8769a3-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-8hj7j\" (UID: \"156006b9-3fa2-4e64-ab86-b5499e8769a3\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-8hj7j" Jan 05 
20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.902223 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/156006b9-3fa2-4e64-ab86-b5499e8769a3-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-8hj7j\" (UID: \"156006b9-3fa2-4e64-ab86-b5499e8769a3\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-8hj7j" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.902639 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/156006b9-3fa2-4e64-ab86-b5499e8769a3-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-8hj7j\" (UID: \"156006b9-3fa2-4e64-ab86-b5499e8769a3\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-8hj7j" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.903282 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/156006b9-3fa2-4e64-ab86-b5499e8769a3-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-8hj7j\" (UID: \"156006b9-3fa2-4e64-ab86-b5499e8769a3\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-8hj7j" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.903350 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/156006b9-3fa2-4e64-ab86-b5499e8769a3-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-8hj7j\" (UID: \"156006b9-3fa2-4e64-ab86-b5499e8769a3\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-8hj7j" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.906975 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/156006b9-3fa2-4e64-ab86-b5499e8769a3-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-8hj7j\" (UID: \"156006b9-3fa2-4e64-ab86-b5499e8769a3\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-8hj7j" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.907428 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/156006b9-3fa2-4e64-ab86-b5499e8769a3-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-8hj7j\" (UID: \"156006b9-3fa2-4e64-ab86-b5499e8769a3\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-8hj7j" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.908374 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-n8ccd" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.931924 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cz67\" (UniqueName: \"kubernetes.io/projected/156006b9-3fa2-4e64-ab86-b5499e8769a3-kube-api-access-5cz67\") pod \"kube-state-metrics-777cb5bd5d-8hj7j\" (UID: \"156006b9-3fa2-4e64-ab86-b5499e8769a3\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-8hj7j" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.946872 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-x7twt" Jan 05 20:10:37 crc kubenswrapper[4754]: I0105 20:10:37.963081 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-8hj7j" Jan 05 20:10:38 crc kubenswrapper[4754]: I0105 20:10:38.066788 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-x7twt" event={"ID":"c1a46f64-d05d-4872-828f-a9207d53fdca","Type":"ContainerStarted","Data":"b037991907acd8ea29d6849eafb39d270558e21ae64bcdabae3b15d7bfaeff03"} Jan 05 20:10:38 crc kubenswrapper[4754]: I0105 20:10:38.409098 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-n8ccd"] Jan 05 20:10:38 crc kubenswrapper[4754]: W0105 20:10:38.413043 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd411eb97_d9ac_4b72_8bd4_3edbd0041657.slice/crio-1f0ab309abf95f8d50366bdf6bdf76212150da460d45463813d170cb5eef3d70 WatchSource:0}: Error finding container 1f0ab309abf95f8d50366bdf6bdf76212150da460d45463813d170cb5eef3d70: Status 404 returned error can't find the container with id 1f0ab309abf95f8d50366bdf6bdf76212150da460d45463813d170cb5eef3d70 Jan 05 20:10:38 crc kubenswrapper[4754]: I0105 20:10:38.473785 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-8hj7j"] Jan 05 20:10:38 crc kubenswrapper[4754]: W0105 20:10:38.486830 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod156006b9_3fa2_4e64_ab86_b5499e8769a3.slice/crio-5a02ff120e82475f9963670c08e5d665c8f591cf8f034c494c4e39a726ed9828 WatchSource:0}: Error finding container 5a02ff120e82475f9963670c08e5d665c8f591cf8f034c494c4e39a726ed9828: Status 404 returned error can't find the container with id 5a02ff120e82475f9963670c08e5d665c8f591cf8f034c494c4e39a726ed9828 Jan 05 20:10:38 crc kubenswrapper[4754]: I0105 20:10:38.721815 4754 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-monitoring/alertmanager-main-0"] Jan 05 20:10:38 crc kubenswrapper[4754]: I0105 20:10:38.723960 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Jan 05 20:10:38 crc kubenswrapper[4754]: I0105 20:10:38.729865 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Jan 05 20:10:38 crc kubenswrapper[4754]: I0105 20:10:38.729921 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Jan 05 20:10:38 crc kubenswrapper[4754]: I0105 20:10:38.730399 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Jan 05 20:10:38 crc kubenswrapper[4754]: I0105 20:10:38.730401 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Jan 05 20:10:38 crc kubenswrapper[4754]: I0105 20:10:38.730533 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Jan 05 20:10:38 crc kubenswrapper[4754]: I0105 20:10:38.730551 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Jan 05 20:10:38 crc kubenswrapper[4754]: I0105 20:10:38.730919 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Jan 05 20:10:38 crc kubenswrapper[4754]: I0105 20:10:38.732446 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-t8g85" Jan 05 20:10:38 crc kubenswrapper[4754]: I0105 20:10:38.743954 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Jan 05 20:10:38 crc kubenswrapper[4754]: I0105 20:10:38.790167 4754 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Jan 05 20:10:38 crc kubenswrapper[4754]: I0105 20:10:38.821368 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/b35d7e1f-26c3-4d29-8108-8536fa361112-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"b35d7e1f-26c3-4d29-8108-8536fa361112\") " pod="openshift-monitoring/alertmanager-main-0" Jan 05 20:10:38 crc kubenswrapper[4754]: I0105 20:10:38.821633 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/b35d7e1f-26c3-4d29-8108-8536fa361112-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"b35d7e1f-26c3-4d29-8108-8536fa361112\") " pod="openshift-monitoring/alertmanager-main-0" Jan 05 20:10:38 crc kubenswrapper[4754]: I0105 20:10:38.821721 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b7h5\" (UniqueName: \"kubernetes.io/projected/b35d7e1f-26c3-4d29-8108-8536fa361112-kube-api-access-4b7h5\") pod \"alertmanager-main-0\" (UID: \"b35d7e1f-26c3-4d29-8108-8536fa361112\") " pod="openshift-monitoring/alertmanager-main-0" Jan 05 20:10:38 crc kubenswrapper[4754]: I0105 20:10:38.821853 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b35d7e1f-26c3-4d29-8108-8536fa361112-web-config\") pod \"alertmanager-main-0\" (UID: \"b35d7e1f-26c3-4d29-8108-8536fa361112\") " pod="openshift-monitoring/alertmanager-main-0" Jan 05 20:10:38 crc kubenswrapper[4754]: I0105 20:10:38.821950 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/secret/b35d7e1f-26c3-4d29-8108-8536fa361112-config-volume\") pod \"alertmanager-main-0\" (UID: \"b35d7e1f-26c3-4d29-8108-8536fa361112\") " pod="openshift-monitoring/alertmanager-main-0" Jan 05 20:10:38 crc kubenswrapper[4754]: I0105 20:10:38.822039 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b35d7e1f-26c3-4d29-8108-8536fa361112-tls-assets\") pod \"alertmanager-main-0\" (UID: \"b35d7e1f-26c3-4d29-8108-8536fa361112\") " pod="openshift-monitoring/alertmanager-main-0" Jan 05 20:10:38 crc kubenswrapper[4754]: I0105 20:10:38.822106 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/b35d7e1f-26c3-4d29-8108-8536fa361112-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"b35d7e1f-26c3-4d29-8108-8536fa361112\") " pod="openshift-monitoring/alertmanager-main-0" Jan 05 20:10:38 crc kubenswrapper[4754]: I0105 20:10:38.822178 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b35d7e1f-26c3-4d29-8108-8536fa361112-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"b35d7e1f-26c3-4d29-8108-8536fa361112\") " pod="openshift-monitoring/alertmanager-main-0" Jan 05 20:10:38 crc kubenswrapper[4754]: I0105 20:10:38.822257 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/b35d7e1f-26c3-4d29-8108-8536fa361112-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"b35d7e1f-26c3-4d29-8108-8536fa361112\") " pod="openshift-monitoring/alertmanager-main-0" Jan 05 20:10:38 crc kubenswrapper[4754]: I0105 20:10:38.822347 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b35d7e1f-26c3-4d29-8108-8536fa361112-config-out\") pod \"alertmanager-main-0\" (UID: \"b35d7e1f-26c3-4d29-8108-8536fa361112\") " pod="openshift-monitoring/alertmanager-main-0" Jan 05 20:10:38 crc kubenswrapper[4754]: I0105 20:10:38.822436 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b35d7e1f-26c3-4d29-8108-8536fa361112-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"b35d7e1f-26c3-4d29-8108-8536fa361112\") " pod="openshift-monitoring/alertmanager-main-0" Jan 05 20:10:38 crc kubenswrapper[4754]: I0105 20:10:38.822517 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b35d7e1f-26c3-4d29-8108-8536fa361112-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"b35d7e1f-26c3-4d29-8108-8536fa361112\") " pod="openshift-monitoring/alertmanager-main-0" Jan 05 20:10:38 crc kubenswrapper[4754]: I0105 20:10:38.923978 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/b35d7e1f-26c3-4d29-8108-8536fa361112-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"b35d7e1f-26c3-4d29-8108-8536fa361112\") " pod="openshift-monitoring/alertmanager-main-0" Jan 05 20:10:38 crc kubenswrapper[4754]: I0105 20:10:38.924073 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b7h5\" (UniqueName: \"kubernetes.io/projected/b35d7e1f-26c3-4d29-8108-8536fa361112-kube-api-access-4b7h5\") pod \"alertmanager-main-0\" (UID: \"b35d7e1f-26c3-4d29-8108-8536fa361112\") " pod="openshift-monitoring/alertmanager-main-0" Jan 05 20:10:38 crc 
kubenswrapper[4754]: I0105 20:10:38.924126 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b35d7e1f-26c3-4d29-8108-8536fa361112-web-config\") pod \"alertmanager-main-0\" (UID: \"b35d7e1f-26c3-4d29-8108-8536fa361112\") " pod="openshift-monitoring/alertmanager-main-0" Jan 05 20:10:38 crc kubenswrapper[4754]: I0105 20:10:38.924168 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/b35d7e1f-26c3-4d29-8108-8536fa361112-config-volume\") pod \"alertmanager-main-0\" (UID: \"b35d7e1f-26c3-4d29-8108-8536fa361112\") " pod="openshift-monitoring/alertmanager-main-0" Jan 05 20:10:38 crc kubenswrapper[4754]: I0105 20:10:38.924210 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b35d7e1f-26c3-4d29-8108-8536fa361112-tls-assets\") pod \"alertmanager-main-0\" (UID: \"b35d7e1f-26c3-4d29-8108-8536fa361112\") " pod="openshift-monitoring/alertmanager-main-0" Jan 05 20:10:38 crc kubenswrapper[4754]: I0105 20:10:38.924235 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/b35d7e1f-26c3-4d29-8108-8536fa361112-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"b35d7e1f-26c3-4d29-8108-8536fa361112\") " pod="openshift-monitoring/alertmanager-main-0" Jan 05 20:10:38 crc kubenswrapper[4754]: I0105 20:10:38.924269 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b35d7e1f-26c3-4d29-8108-8536fa361112-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"b35d7e1f-26c3-4d29-8108-8536fa361112\") " pod="openshift-monitoring/alertmanager-main-0" Jan 05 20:10:38 crc kubenswrapper[4754]: I0105 20:10:38.924324 4754 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/b35d7e1f-26c3-4d29-8108-8536fa361112-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"b35d7e1f-26c3-4d29-8108-8536fa361112\") " pod="openshift-monitoring/alertmanager-main-0" Jan 05 20:10:38 crc kubenswrapper[4754]: I0105 20:10:38.924348 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b35d7e1f-26c3-4d29-8108-8536fa361112-config-out\") pod \"alertmanager-main-0\" (UID: \"b35d7e1f-26c3-4d29-8108-8536fa361112\") " pod="openshift-monitoring/alertmanager-main-0" Jan 05 20:10:38 crc kubenswrapper[4754]: I0105 20:10:38.924408 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b35d7e1f-26c3-4d29-8108-8536fa361112-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"b35d7e1f-26c3-4d29-8108-8536fa361112\") " pod="openshift-monitoring/alertmanager-main-0" Jan 05 20:10:38 crc kubenswrapper[4754]: I0105 20:10:38.924443 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b35d7e1f-26c3-4d29-8108-8536fa361112-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"b35d7e1f-26c3-4d29-8108-8536fa361112\") " pod="openshift-monitoring/alertmanager-main-0" Jan 05 20:10:38 crc kubenswrapper[4754]: I0105 20:10:38.924479 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/b35d7e1f-26c3-4d29-8108-8536fa361112-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"b35d7e1f-26c3-4d29-8108-8536fa361112\") " pod="openshift-monitoring/alertmanager-main-0" Jan 05 20:10:38 crc kubenswrapper[4754]: I0105 20:10:38.926525 
4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/b35d7e1f-26c3-4d29-8108-8536fa361112-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"b35d7e1f-26c3-4d29-8108-8536fa361112\") " pod="openshift-monitoring/alertmanager-main-0" Jan 05 20:10:38 crc kubenswrapper[4754]: I0105 20:10:38.929232 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b35d7e1f-26c3-4d29-8108-8536fa361112-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"b35d7e1f-26c3-4d29-8108-8536fa361112\") " pod="openshift-monitoring/alertmanager-main-0" Jan 05 20:10:38 crc kubenswrapper[4754]: I0105 20:10:38.929844 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b35d7e1f-26c3-4d29-8108-8536fa361112-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"b35d7e1f-26c3-4d29-8108-8536fa361112\") " pod="openshift-monitoring/alertmanager-main-0" Jan 05 20:10:38 crc kubenswrapper[4754]: I0105 20:10:38.937736 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b35d7e1f-26c3-4d29-8108-8536fa361112-web-config\") pod \"alertmanager-main-0\" (UID: \"b35d7e1f-26c3-4d29-8108-8536fa361112\") " pod="openshift-monitoring/alertmanager-main-0" Jan 05 20:10:38 crc kubenswrapper[4754]: I0105 20:10:38.937906 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/b35d7e1f-26c3-4d29-8108-8536fa361112-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"b35d7e1f-26c3-4d29-8108-8536fa361112\") " pod="openshift-monitoring/alertmanager-main-0" Jan 05 20:10:38 crc kubenswrapper[4754]: I0105 20:10:38.938098 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/b35d7e1f-26c3-4d29-8108-8536fa361112-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"b35d7e1f-26c3-4d29-8108-8536fa361112\") " pod="openshift-monitoring/alertmanager-main-0" Jan 05 20:10:38 crc kubenswrapper[4754]: I0105 20:10:38.938227 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b35d7e1f-26c3-4d29-8108-8536fa361112-tls-assets\") pod \"alertmanager-main-0\" (UID: \"b35d7e1f-26c3-4d29-8108-8536fa361112\") " pod="openshift-monitoring/alertmanager-main-0" Jan 05 20:10:38 crc kubenswrapper[4754]: I0105 20:10:38.938543 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b35d7e1f-26c3-4d29-8108-8536fa361112-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"b35d7e1f-26c3-4d29-8108-8536fa361112\") " pod="openshift-monitoring/alertmanager-main-0" Jan 05 20:10:38 crc kubenswrapper[4754]: I0105 20:10:38.942550 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b35d7e1f-26c3-4d29-8108-8536fa361112-config-out\") pod \"alertmanager-main-0\" (UID: \"b35d7e1f-26c3-4d29-8108-8536fa361112\") " pod="openshift-monitoring/alertmanager-main-0" Jan 05 20:10:38 crc kubenswrapper[4754]: I0105 20:10:38.943103 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/b35d7e1f-26c3-4d29-8108-8536fa361112-config-volume\") pod \"alertmanager-main-0\" (UID: \"b35d7e1f-26c3-4d29-8108-8536fa361112\") " pod="openshift-monitoring/alertmanager-main-0" Jan 05 20:10:38 crc kubenswrapper[4754]: I0105 20:10:38.946580 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b7h5\" (UniqueName: 
\"kubernetes.io/projected/b35d7e1f-26c3-4d29-8108-8536fa361112-kube-api-access-4b7h5\") pod \"alertmanager-main-0\" (UID: \"b35d7e1f-26c3-4d29-8108-8536fa361112\") " pod="openshift-monitoring/alertmanager-main-0" Jan 05 20:10:38 crc kubenswrapper[4754]: I0105 20:10:38.955397 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/b35d7e1f-26c3-4d29-8108-8536fa361112-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"b35d7e1f-26c3-4d29-8108-8536fa361112\") " pod="openshift-monitoring/alertmanager-main-0" Jan 05 20:10:39 crc kubenswrapper[4754]: I0105 20:10:39.045994 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Jan 05 20:10:39 crc kubenswrapper[4754]: I0105 20:10:39.073474 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-8hj7j" event={"ID":"156006b9-3fa2-4e64-ab86-b5499e8769a3","Type":"ContainerStarted","Data":"5a02ff120e82475f9963670c08e5d665c8f591cf8f034c494c4e39a726ed9828"} Jan 05 20:10:39 crc kubenswrapper[4754]: I0105 20:10:39.076703 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-n8ccd" event={"ID":"d411eb97-d9ac-4b72-8bd4-3edbd0041657","Type":"ContainerStarted","Data":"dc0013fa79e2f09c270a9d40efc79444e6a1ec7f889c52ba5f89a0d5b41aa847"} Jan 05 20:10:39 crc kubenswrapper[4754]: I0105 20:10:39.076738 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-n8ccd" event={"ID":"d411eb97-d9ac-4b72-8bd4-3edbd0041657","Type":"ContainerStarted","Data":"dfc1a2e9588d46df8c7b0c65cbc4753ef27e71369a7e8c914cd5cecd28387f6e"} Jan 05 20:10:39 crc kubenswrapper[4754]: I0105 20:10:39.076753 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/openshift-state-metrics-566fddb674-n8ccd" event={"ID":"d411eb97-d9ac-4b72-8bd4-3edbd0041657","Type":"ContainerStarted","Data":"1f0ab309abf95f8d50366bdf6bdf76212150da460d45463813d170cb5eef3d70"} Jan 05 20:10:39 crc kubenswrapper[4754]: I0105 20:10:39.564270 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Jan 05 20:10:39 crc kubenswrapper[4754]: W0105 20:10:39.584277 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb35d7e1f_26c3_4d29_8108_8536fa361112.slice/crio-614718f2c9353c06d2362859061c92167d545f4244df8c7d258dfe2a0e1778c9 WatchSource:0}: Error finding container 614718f2c9353c06d2362859061c92167d545f4244df8c7d258dfe2a0e1778c9: Status 404 returned error can't find the container with id 614718f2c9353c06d2362859061c92167d545f4244df8c7d258dfe2a0e1778c9 Jan 05 20:10:39 crc kubenswrapper[4754]: I0105 20:10:39.714984 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-869b668f44-6cplm"] Jan 05 20:10:39 crc kubenswrapper[4754]: I0105 20:10:39.719586 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-869b668f44-6cplm" Jan 05 20:10:39 crc kubenswrapper[4754]: I0105 20:10:39.723073 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Jan 05 20:10:39 crc kubenswrapper[4754]: I0105 20:10:39.723544 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Jan 05 20:10:39 crc kubenswrapper[4754]: I0105 20:10:39.723630 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Jan 05 20:10:39 crc kubenswrapper[4754]: I0105 20:10:39.723753 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-xlt4n" Jan 05 20:10:39 crc kubenswrapper[4754]: I0105 20:10:39.724023 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Jan 05 20:10:39 crc kubenswrapper[4754]: I0105 20:10:39.724033 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Jan 05 20:10:39 crc kubenswrapper[4754]: I0105 20:10:39.727329 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-ite95dog98gg" Jan 05 20:10:39 crc kubenswrapper[4754]: I0105 20:10:39.737602 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-869b668f44-6cplm"] Jan 05 20:10:39 crc kubenswrapper[4754]: I0105 20:10:39.841518 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/0597c753-3003-4d69-ad03-7e12215f7274-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-869b668f44-6cplm\" (UID: \"0597c753-3003-4d69-ad03-7e12215f7274\") " 
pod="openshift-monitoring/thanos-querier-869b668f44-6cplm" Jan 05 20:10:39 crc kubenswrapper[4754]: I0105 20:10:39.841616 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0597c753-3003-4d69-ad03-7e12215f7274-metrics-client-ca\") pod \"thanos-querier-869b668f44-6cplm\" (UID: \"0597c753-3003-4d69-ad03-7e12215f7274\") " pod="openshift-monitoring/thanos-querier-869b668f44-6cplm" Jan 05 20:10:39 crc kubenswrapper[4754]: I0105 20:10:39.841688 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/0597c753-3003-4d69-ad03-7e12215f7274-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-869b668f44-6cplm\" (UID: \"0597c753-3003-4d69-ad03-7e12215f7274\") " pod="openshift-monitoring/thanos-querier-869b668f44-6cplm" Jan 05 20:10:39 crc kubenswrapper[4754]: I0105 20:10:39.841787 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/0597c753-3003-4d69-ad03-7e12215f7274-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-869b668f44-6cplm\" (UID: \"0597c753-3003-4d69-ad03-7e12215f7274\") " pod="openshift-monitoring/thanos-querier-869b668f44-6cplm" Jan 05 20:10:39 crc kubenswrapper[4754]: I0105 20:10:39.841844 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/0597c753-3003-4d69-ad03-7e12215f7274-secret-grpc-tls\") pod \"thanos-querier-869b668f44-6cplm\" (UID: \"0597c753-3003-4d69-ad03-7e12215f7274\") " pod="openshift-monitoring/thanos-querier-869b668f44-6cplm" Jan 05 20:10:39 crc kubenswrapper[4754]: I0105 20:10:39.841898 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfmc8\" (UniqueName: \"kubernetes.io/projected/0597c753-3003-4d69-ad03-7e12215f7274-kube-api-access-zfmc8\") pod \"thanos-querier-869b668f44-6cplm\" (UID: \"0597c753-3003-4d69-ad03-7e12215f7274\") " pod="openshift-monitoring/thanos-querier-869b668f44-6cplm" Jan 05 20:10:39 crc kubenswrapper[4754]: I0105 20:10:39.842075 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/0597c753-3003-4d69-ad03-7e12215f7274-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-869b668f44-6cplm\" (UID: \"0597c753-3003-4d69-ad03-7e12215f7274\") " pod="openshift-monitoring/thanos-querier-869b668f44-6cplm" Jan 05 20:10:39 crc kubenswrapper[4754]: I0105 20:10:39.842142 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/0597c753-3003-4d69-ad03-7e12215f7274-secret-thanos-querier-tls\") pod \"thanos-querier-869b668f44-6cplm\" (UID: \"0597c753-3003-4d69-ad03-7e12215f7274\") " pod="openshift-monitoring/thanos-querier-869b668f44-6cplm" Jan 05 20:10:39 crc kubenswrapper[4754]: I0105 20:10:39.943710 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/0597c753-3003-4d69-ad03-7e12215f7274-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-869b668f44-6cplm\" (UID: \"0597c753-3003-4d69-ad03-7e12215f7274\") " pod="openshift-monitoring/thanos-querier-869b668f44-6cplm" Jan 05 20:10:39 crc kubenswrapper[4754]: I0105 20:10:39.944186 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/0597c753-3003-4d69-ad03-7e12215f7274-secret-grpc-tls\") pod 
\"thanos-querier-869b668f44-6cplm\" (UID: \"0597c753-3003-4d69-ad03-7e12215f7274\") " pod="openshift-monitoring/thanos-querier-869b668f44-6cplm" Jan 05 20:10:39 crc kubenswrapper[4754]: I0105 20:10:39.944355 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfmc8\" (UniqueName: \"kubernetes.io/projected/0597c753-3003-4d69-ad03-7e12215f7274-kube-api-access-zfmc8\") pod \"thanos-querier-869b668f44-6cplm\" (UID: \"0597c753-3003-4d69-ad03-7e12215f7274\") " pod="openshift-monitoring/thanos-querier-869b668f44-6cplm" Jan 05 20:10:39 crc kubenswrapper[4754]: I0105 20:10:39.944453 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/0597c753-3003-4d69-ad03-7e12215f7274-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-869b668f44-6cplm\" (UID: \"0597c753-3003-4d69-ad03-7e12215f7274\") " pod="openshift-monitoring/thanos-querier-869b668f44-6cplm" Jan 05 20:10:39 crc kubenswrapper[4754]: I0105 20:10:39.945597 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/0597c753-3003-4d69-ad03-7e12215f7274-secret-thanos-querier-tls\") pod \"thanos-querier-869b668f44-6cplm\" (UID: \"0597c753-3003-4d69-ad03-7e12215f7274\") " pod="openshift-monitoring/thanos-querier-869b668f44-6cplm" Jan 05 20:10:39 crc kubenswrapper[4754]: I0105 20:10:39.945878 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/0597c753-3003-4d69-ad03-7e12215f7274-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-869b668f44-6cplm\" (UID: \"0597c753-3003-4d69-ad03-7e12215f7274\") " pod="openshift-monitoring/thanos-querier-869b668f44-6cplm" Jan 05 20:10:39 crc kubenswrapper[4754]: I0105 20:10:39.945949 4754 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0597c753-3003-4d69-ad03-7e12215f7274-metrics-client-ca\") pod \"thanos-querier-869b668f44-6cplm\" (UID: \"0597c753-3003-4d69-ad03-7e12215f7274\") " pod="openshift-monitoring/thanos-querier-869b668f44-6cplm" Jan 05 20:10:39 crc kubenswrapper[4754]: I0105 20:10:39.946065 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/0597c753-3003-4d69-ad03-7e12215f7274-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-869b668f44-6cplm\" (UID: \"0597c753-3003-4d69-ad03-7e12215f7274\") " pod="openshift-monitoring/thanos-querier-869b668f44-6cplm" Jan 05 20:10:39 crc kubenswrapper[4754]: I0105 20:10:39.951902 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0597c753-3003-4d69-ad03-7e12215f7274-metrics-client-ca\") pod \"thanos-querier-869b668f44-6cplm\" (UID: \"0597c753-3003-4d69-ad03-7e12215f7274\") " pod="openshift-monitoring/thanos-querier-869b668f44-6cplm" Jan 05 20:10:39 crc kubenswrapper[4754]: I0105 20:10:39.952483 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/0597c753-3003-4d69-ad03-7e12215f7274-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-869b668f44-6cplm\" (UID: \"0597c753-3003-4d69-ad03-7e12215f7274\") " pod="openshift-monitoring/thanos-querier-869b668f44-6cplm" Jan 05 20:10:39 crc kubenswrapper[4754]: I0105 20:10:39.953269 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/0597c753-3003-4d69-ad03-7e12215f7274-secret-grpc-tls\") pod \"thanos-querier-869b668f44-6cplm\" (UID: 
\"0597c753-3003-4d69-ad03-7e12215f7274\") " pod="openshift-monitoring/thanos-querier-869b668f44-6cplm" Jan 05 20:10:39 crc kubenswrapper[4754]: I0105 20:10:39.953282 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/0597c753-3003-4d69-ad03-7e12215f7274-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-869b668f44-6cplm\" (UID: \"0597c753-3003-4d69-ad03-7e12215f7274\") " pod="openshift-monitoring/thanos-querier-869b668f44-6cplm" Jan 05 20:10:39 crc kubenswrapper[4754]: I0105 20:10:39.957984 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/0597c753-3003-4d69-ad03-7e12215f7274-secret-thanos-querier-tls\") pod \"thanos-querier-869b668f44-6cplm\" (UID: \"0597c753-3003-4d69-ad03-7e12215f7274\") " pod="openshift-monitoring/thanos-querier-869b668f44-6cplm" Jan 05 20:10:39 crc kubenswrapper[4754]: I0105 20:10:39.958529 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/0597c753-3003-4d69-ad03-7e12215f7274-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-869b668f44-6cplm\" (UID: \"0597c753-3003-4d69-ad03-7e12215f7274\") " pod="openshift-monitoring/thanos-querier-869b668f44-6cplm" Jan 05 20:10:39 crc kubenswrapper[4754]: I0105 20:10:39.964872 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/0597c753-3003-4d69-ad03-7e12215f7274-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-869b668f44-6cplm\" (UID: \"0597c753-3003-4d69-ad03-7e12215f7274\") " pod="openshift-monitoring/thanos-querier-869b668f44-6cplm" Jan 05 20:10:39 crc kubenswrapper[4754]: I0105 20:10:39.972096 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-zfmc8\" (UniqueName: \"kubernetes.io/projected/0597c753-3003-4d69-ad03-7e12215f7274-kube-api-access-zfmc8\") pod \"thanos-querier-869b668f44-6cplm\" (UID: \"0597c753-3003-4d69-ad03-7e12215f7274\") " pod="openshift-monitoring/thanos-querier-869b668f44-6cplm" Jan 05 20:10:40 crc kubenswrapper[4754]: I0105 20:10:40.059363 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-869b668f44-6cplm" Jan 05 20:10:40 crc kubenswrapper[4754]: I0105 20:10:40.083986 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b35d7e1f-26c3-4d29-8108-8536fa361112","Type":"ContainerStarted","Data":"614718f2c9353c06d2362859061c92167d545f4244df8c7d258dfe2a0e1778c9"} Jan 05 20:10:41 crc kubenswrapper[4754]: I0105 20:10:41.135330 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-869b668f44-6cplm"] Jan 05 20:10:41 crc kubenswrapper[4754]: W0105 20:10:41.141948 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0597c753_3003_4d69_ad03_7e12215f7274.slice/crio-68211160666e61a80caef73f05524e620e7fc69c8d03d96409ca3b228ca837ff WatchSource:0}: Error finding container 68211160666e61a80caef73f05524e620e7fc69c8d03d96409ca3b228ca837ff: Status 404 returned error can't find the container with id 68211160666e61a80caef73f05524e620e7fc69c8d03d96409ca3b228ca837ff Jan 05 20:10:42 crc kubenswrapper[4754]: I0105 20:10:42.099765 4754 generic.go:334] "Generic (PLEG): container finished" podID="b35d7e1f-26c3-4d29-8108-8536fa361112" containerID="b7f19d939171cebe24caca2d87b8f8851589e1a47f9ccf44d2388c28299a2790" exitCode=0 Jan 05 20:10:42 crc kubenswrapper[4754]: I0105 20:10:42.100496 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" 
event={"ID":"b35d7e1f-26c3-4d29-8108-8536fa361112","Type":"ContainerDied","Data":"b7f19d939171cebe24caca2d87b8f8851589e1a47f9ccf44d2388c28299a2790"} Jan 05 20:10:42 crc kubenswrapper[4754]: I0105 20:10:42.106601 4754 generic.go:334] "Generic (PLEG): container finished" podID="c1a46f64-d05d-4872-828f-a9207d53fdca" containerID="52daae2e4d49770f4bf8aaf53489b57543221e7ef8588c4b7d9fd4e0b7eb88b2" exitCode=0 Jan 05 20:10:42 crc kubenswrapper[4754]: I0105 20:10:42.106701 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-x7twt" event={"ID":"c1a46f64-d05d-4872-828f-a9207d53fdca","Type":"ContainerDied","Data":"52daae2e4d49770f4bf8aaf53489b57543221e7ef8588c4b7d9fd4e0b7eb88b2"} Jan 05 20:10:42 crc kubenswrapper[4754]: I0105 20:10:42.109515 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-8hj7j" event={"ID":"156006b9-3fa2-4e64-ab86-b5499e8769a3","Type":"ContainerStarted","Data":"b15c893b651a6df1c6fc8f96421647c5a93c66e09156383dae1921dcf466c3c9"} Jan 05 20:10:42 crc kubenswrapper[4754]: I0105 20:10:42.109542 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-8hj7j" event={"ID":"156006b9-3fa2-4e64-ab86-b5499e8769a3","Type":"ContainerStarted","Data":"0a97d79a37e24f60a8f78141fb75ae4b6c5343e028b33fe1dfd9724bffafcf78"} Jan 05 20:10:42 crc kubenswrapper[4754]: I0105 20:10:42.109555 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-8hj7j" event={"ID":"156006b9-3fa2-4e64-ab86-b5499e8769a3","Type":"ContainerStarted","Data":"e817c556e469cec561fabf2dc83ed0d5e6e746993a4df12e6579cff43618a7c2"} Jan 05 20:10:42 crc kubenswrapper[4754]: I0105 20:10:42.113427 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-n8ccd" 
event={"ID":"d411eb97-d9ac-4b72-8bd4-3edbd0041657","Type":"ContainerStarted","Data":"79e37a10663ff6e22cc0e560216e3172a243519bbf9d459d8b50d7298eb894fe"} Jan 05 20:10:42 crc kubenswrapper[4754]: I0105 20:10:42.115268 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-869b668f44-6cplm" event={"ID":"0597c753-3003-4d69-ad03-7e12215f7274","Type":"ContainerStarted","Data":"68211160666e61a80caef73f05524e620e7fc69c8d03d96409ca3b228ca837ff"} Jan 05 20:10:42 crc kubenswrapper[4754]: I0105 20:10:42.201011 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-n8ccd" podStartSLOduration=3.025776981 podStartE2EDuration="5.200989742s" podCreationTimestamp="2026-01-05 20:10:37 +0000 UTC" firstStartedPulling="2026-01-05 20:10:38.713247785 +0000 UTC m=+325.422431659" lastFinishedPulling="2026-01-05 20:10:40.888460546 +0000 UTC m=+327.597644420" observedRunningTime="2026-01-05 20:10:42.191830395 +0000 UTC m=+328.901014269" watchObservedRunningTime="2026-01-05 20:10:42.200989742 +0000 UTC m=+328.910173616" Jan 05 20:10:42 crc kubenswrapper[4754]: I0105 20:10:42.216124 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-8hj7j" podStartSLOduration=2.821646642 podStartE2EDuration="5.21610463s" podCreationTimestamp="2026-01-05 20:10:37 +0000 UTC" firstStartedPulling="2026-01-05 20:10:38.489233379 +0000 UTC m=+325.198417263" lastFinishedPulling="2026-01-05 20:10:40.883691337 +0000 UTC m=+327.592875251" observedRunningTime="2026-01-05 20:10:42.214692921 +0000 UTC m=+328.923876815" watchObservedRunningTime="2026-01-05 20:10:42.21610463 +0000 UTC m=+328.925288494" Jan 05 20:10:42 crc kubenswrapper[4754]: I0105 20:10:42.926779 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-9b97c7f7b-p9nz9"] Jan 05 20:10:42 crc kubenswrapper[4754]: I0105 
20:10:42.927680 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-9b97c7f7b-p9nz9" Jan 05 20:10:42 crc kubenswrapper[4754]: I0105 20:10:42.930468 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Jan 05 20:10:42 crc kubenswrapper[4754]: I0105 20:10:42.931677 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Jan 05 20:10:42 crc kubenswrapper[4754]: I0105 20:10:42.931773 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-x6kzt" Jan 05 20:10:42 crc kubenswrapper[4754]: I0105 20:10:42.931941 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Jan 05 20:10:42 crc kubenswrapper[4754]: I0105 20:10:42.931996 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Jan 05 20:10:42 crc kubenswrapper[4754]: I0105 20:10:42.935376 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-cpgmfdq54rf51" Jan 05 20:10:42 crc kubenswrapper[4754]: I0105 20:10:42.941974 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-9b97c7f7b-p9nz9"] Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.015637 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qx7z\" (UniqueName: \"kubernetes.io/projected/c0ea00d6-9ff0-4a23-8481-369189bdf8f5-kube-api-access-2qx7z\") pod \"metrics-server-9b97c7f7b-p9nz9\" (UID: \"c0ea00d6-9ff0-4a23-8481-369189bdf8f5\") " pod="openshift-monitoring/metrics-server-9b97c7f7b-p9nz9" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.015815 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0ea00d6-9ff0-4a23-8481-369189bdf8f5-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-9b97c7f7b-p9nz9\" (UID: \"c0ea00d6-9ff0-4a23-8481-369189bdf8f5\") " pod="openshift-monitoring/metrics-server-9b97c7f7b-p9nz9" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.015889 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/c0ea00d6-9ff0-4a23-8481-369189bdf8f5-secret-metrics-client-certs\") pod \"metrics-server-9b97c7f7b-p9nz9\" (UID: \"c0ea00d6-9ff0-4a23-8481-369189bdf8f5\") " pod="openshift-monitoring/metrics-server-9b97c7f7b-p9nz9" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.016042 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/c0ea00d6-9ff0-4a23-8481-369189bdf8f5-audit-log\") pod \"metrics-server-9b97c7f7b-p9nz9\" (UID: \"c0ea00d6-9ff0-4a23-8481-369189bdf8f5\") " pod="openshift-monitoring/metrics-server-9b97c7f7b-p9nz9" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.016139 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0ea00d6-9ff0-4a23-8481-369189bdf8f5-client-ca-bundle\") pod \"metrics-server-9b97c7f7b-p9nz9\" (UID: \"c0ea00d6-9ff0-4a23-8481-369189bdf8f5\") " pod="openshift-monitoring/metrics-server-9b97c7f7b-p9nz9" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.016184 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/c0ea00d6-9ff0-4a23-8481-369189bdf8f5-secret-metrics-server-tls\") pod \"metrics-server-9b97c7f7b-p9nz9\" (UID: \"c0ea00d6-9ff0-4a23-8481-369189bdf8f5\") " 
pod="openshift-monitoring/metrics-server-9b97c7f7b-p9nz9" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.016266 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/c0ea00d6-9ff0-4a23-8481-369189bdf8f5-metrics-server-audit-profiles\") pod \"metrics-server-9b97c7f7b-p9nz9\" (UID: \"c0ea00d6-9ff0-4a23-8481-369189bdf8f5\") " pod="openshift-monitoring/metrics-server-9b97c7f7b-p9nz9" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.124847 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0ea00d6-9ff0-4a23-8481-369189bdf8f5-client-ca-bundle\") pod \"metrics-server-9b97c7f7b-p9nz9\" (UID: \"c0ea00d6-9ff0-4a23-8481-369189bdf8f5\") " pod="openshift-monitoring/metrics-server-9b97c7f7b-p9nz9" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.126958 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/c0ea00d6-9ff0-4a23-8481-369189bdf8f5-secret-metrics-server-tls\") pod \"metrics-server-9b97c7f7b-p9nz9\" (UID: \"c0ea00d6-9ff0-4a23-8481-369189bdf8f5\") " pod="openshift-monitoring/metrics-server-9b97c7f7b-p9nz9" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.127105 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/c0ea00d6-9ff0-4a23-8481-369189bdf8f5-metrics-server-audit-profiles\") pod \"metrics-server-9b97c7f7b-p9nz9\" (UID: \"c0ea00d6-9ff0-4a23-8481-369189bdf8f5\") " pod="openshift-monitoring/metrics-server-9b97c7f7b-p9nz9" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.127649 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qx7z\" (UniqueName: 
\"kubernetes.io/projected/c0ea00d6-9ff0-4a23-8481-369189bdf8f5-kube-api-access-2qx7z\") pod \"metrics-server-9b97c7f7b-p9nz9\" (UID: \"c0ea00d6-9ff0-4a23-8481-369189bdf8f5\") " pod="openshift-monitoring/metrics-server-9b97c7f7b-p9nz9" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.127765 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0ea00d6-9ff0-4a23-8481-369189bdf8f5-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-9b97c7f7b-p9nz9\" (UID: \"c0ea00d6-9ff0-4a23-8481-369189bdf8f5\") " pod="openshift-monitoring/metrics-server-9b97c7f7b-p9nz9" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.127839 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/c0ea00d6-9ff0-4a23-8481-369189bdf8f5-secret-metrics-client-certs\") pod \"metrics-server-9b97c7f7b-p9nz9\" (UID: \"c0ea00d6-9ff0-4a23-8481-369189bdf8f5\") " pod="openshift-monitoring/metrics-server-9b97c7f7b-p9nz9" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.127880 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/c0ea00d6-9ff0-4a23-8481-369189bdf8f5-audit-log\") pod \"metrics-server-9b97c7f7b-p9nz9\" (UID: \"c0ea00d6-9ff0-4a23-8481-369189bdf8f5\") " pod="openshift-monitoring/metrics-server-9b97c7f7b-p9nz9" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.129020 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/c0ea00d6-9ff0-4a23-8481-369189bdf8f5-audit-log\") pod \"metrics-server-9b97c7f7b-p9nz9\" (UID: \"c0ea00d6-9ff0-4a23-8481-369189bdf8f5\") " pod="openshift-monitoring/metrics-server-9b97c7f7b-p9nz9" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.131661 4754 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/c0ea00d6-9ff0-4a23-8481-369189bdf8f5-metrics-server-audit-profiles\") pod \"metrics-server-9b97c7f7b-p9nz9\" (UID: \"c0ea00d6-9ff0-4a23-8481-369189bdf8f5\") " pod="openshift-monitoring/metrics-server-9b97c7f7b-p9nz9" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.132491 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0ea00d6-9ff0-4a23-8481-369189bdf8f5-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-9b97c7f7b-p9nz9\" (UID: \"c0ea00d6-9ff0-4a23-8481-369189bdf8f5\") " pod="openshift-monitoring/metrics-server-9b97c7f7b-p9nz9" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.134029 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0ea00d6-9ff0-4a23-8481-369189bdf8f5-client-ca-bundle\") pod \"metrics-server-9b97c7f7b-p9nz9\" (UID: \"c0ea00d6-9ff0-4a23-8481-369189bdf8f5\") " pod="openshift-monitoring/metrics-server-9b97c7f7b-p9nz9" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.134795 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6dff6fbd94-bkrpf"] Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.136191 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6dff6fbd94-bkrpf" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.137659 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/c0ea00d6-9ff0-4a23-8481-369189bdf8f5-secret-metrics-server-tls\") pod \"metrics-server-9b97c7f7b-p9nz9\" (UID: \"c0ea00d6-9ff0-4a23-8481-369189bdf8f5\") " pod="openshift-monitoring/metrics-server-9b97c7f7b-p9nz9" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.143106 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6dff6fbd94-bkrpf"] Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.150858 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/c0ea00d6-9ff0-4a23-8481-369189bdf8f5-secret-metrics-client-certs\") pod \"metrics-server-9b97c7f7b-p9nz9\" (UID: \"c0ea00d6-9ff0-4a23-8481-369189bdf8f5\") " pod="openshift-monitoring/metrics-server-9b97c7f7b-p9nz9" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.154599 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-x7twt" event={"ID":"c1a46f64-d05d-4872-828f-a9207d53fdca","Type":"ContainerStarted","Data":"018ddce763a52b451727d669d284cd2560c17fef4023fe74d40a3cbb4746be20"} Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.166897 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qx7z\" (UniqueName: \"kubernetes.io/projected/c0ea00d6-9ff0-4a23-8481-369189bdf8f5-kube-api-access-2qx7z\") pod \"metrics-server-9b97c7f7b-p9nz9\" (UID: \"c0ea00d6-9ff0-4a23-8481-369189bdf8f5\") " pod="openshift-monitoring/metrics-server-9b97c7f7b-p9nz9" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.229151 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/2b41f989-d795-4968-a73f-920b4350d3c7-service-ca\") pod \"console-6dff6fbd94-bkrpf\" (UID: \"2b41f989-d795-4968-a73f-920b4350d3c7\") " pod="openshift-console/console-6dff6fbd94-bkrpf" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.229212 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2b41f989-d795-4968-a73f-920b4350d3c7-console-config\") pod \"console-6dff6fbd94-bkrpf\" (UID: \"2b41f989-d795-4968-a73f-920b4350d3c7\") " pod="openshift-console/console-6dff6fbd94-bkrpf" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.229232 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2b41f989-d795-4968-a73f-920b4350d3c7-console-serving-cert\") pod \"console-6dff6fbd94-bkrpf\" (UID: \"2b41f989-d795-4968-a73f-920b4350d3c7\") " pod="openshift-console/console-6dff6fbd94-bkrpf" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.229272 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2b41f989-d795-4968-a73f-920b4350d3c7-console-oauth-config\") pod \"console-6dff6fbd94-bkrpf\" (UID: \"2b41f989-d795-4968-a73f-920b4350d3c7\") " pod="openshift-console/console-6dff6fbd94-bkrpf" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.229307 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b41f989-d795-4968-a73f-920b4350d3c7-trusted-ca-bundle\") pod \"console-6dff6fbd94-bkrpf\" (UID: \"2b41f989-d795-4968-a73f-920b4350d3c7\") " pod="openshift-console/console-6dff6fbd94-bkrpf" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.229381 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42dh5\" (UniqueName: \"kubernetes.io/projected/2b41f989-d795-4968-a73f-920b4350d3c7-kube-api-access-42dh5\") pod \"console-6dff6fbd94-bkrpf\" (UID: \"2b41f989-d795-4968-a73f-920b4350d3c7\") " pod="openshift-console/console-6dff6fbd94-bkrpf" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.229458 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2b41f989-d795-4968-a73f-920b4350d3c7-oauth-serving-cert\") pod \"console-6dff6fbd94-bkrpf\" (UID: \"2b41f989-d795-4968-a73f-920b4350d3c7\") " pod="openshift-console/console-6dff6fbd94-bkrpf" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.262587 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-9b97c7f7b-p9nz9" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.330798 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2b41f989-d795-4968-a73f-920b4350d3c7-oauth-serving-cert\") pod \"console-6dff6fbd94-bkrpf\" (UID: \"2b41f989-d795-4968-a73f-920b4350d3c7\") " pod="openshift-console/console-6dff6fbd94-bkrpf" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.330871 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2b41f989-d795-4968-a73f-920b4350d3c7-service-ca\") pod \"console-6dff6fbd94-bkrpf\" (UID: \"2b41f989-d795-4968-a73f-920b4350d3c7\") " pod="openshift-console/console-6dff6fbd94-bkrpf" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.330899 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2b41f989-d795-4968-a73f-920b4350d3c7-console-config\") pod 
\"console-6dff6fbd94-bkrpf\" (UID: \"2b41f989-d795-4968-a73f-920b4350d3c7\") " pod="openshift-console/console-6dff6fbd94-bkrpf" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.330923 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2b41f989-d795-4968-a73f-920b4350d3c7-console-serving-cert\") pod \"console-6dff6fbd94-bkrpf\" (UID: \"2b41f989-d795-4968-a73f-920b4350d3c7\") " pod="openshift-console/console-6dff6fbd94-bkrpf" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.330943 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2b41f989-d795-4968-a73f-920b4350d3c7-console-oauth-config\") pod \"console-6dff6fbd94-bkrpf\" (UID: \"2b41f989-d795-4968-a73f-920b4350d3c7\") " pod="openshift-console/console-6dff6fbd94-bkrpf" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.330966 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b41f989-d795-4968-a73f-920b4350d3c7-trusted-ca-bundle\") pod \"console-6dff6fbd94-bkrpf\" (UID: \"2b41f989-d795-4968-a73f-920b4350d3c7\") " pod="openshift-console/console-6dff6fbd94-bkrpf" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.331020 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42dh5\" (UniqueName: \"kubernetes.io/projected/2b41f989-d795-4968-a73f-920b4350d3c7-kube-api-access-42dh5\") pod \"console-6dff6fbd94-bkrpf\" (UID: \"2b41f989-d795-4968-a73f-920b4350d3c7\") " pod="openshift-console/console-6dff6fbd94-bkrpf" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.332328 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2b41f989-d795-4968-a73f-920b4350d3c7-console-config\") pod 
\"console-6dff6fbd94-bkrpf\" (UID: \"2b41f989-d795-4968-a73f-920b4350d3c7\") " pod="openshift-console/console-6dff6fbd94-bkrpf" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.332719 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b41f989-d795-4968-a73f-920b4350d3c7-trusted-ca-bundle\") pod \"console-6dff6fbd94-bkrpf\" (UID: \"2b41f989-d795-4968-a73f-920b4350d3c7\") " pod="openshift-console/console-6dff6fbd94-bkrpf" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.333156 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2b41f989-d795-4968-a73f-920b4350d3c7-service-ca\") pod \"console-6dff6fbd94-bkrpf\" (UID: \"2b41f989-d795-4968-a73f-920b4350d3c7\") " pod="openshift-console/console-6dff6fbd94-bkrpf" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.333176 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2b41f989-d795-4968-a73f-920b4350d3c7-oauth-serving-cert\") pod \"console-6dff6fbd94-bkrpf\" (UID: \"2b41f989-d795-4968-a73f-920b4350d3c7\") " pod="openshift-console/console-6dff6fbd94-bkrpf" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.334473 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2b41f989-d795-4968-a73f-920b4350d3c7-console-oauth-config\") pod \"console-6dff6fbd94-bkrpf\" (UID: \"2b41f989-d795-4968-a73f-920b4350d3c7\") " pod="openshift-console/console-6dff6fbd94-bkrpf" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.336803 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2b41f989-d795-4968-a73f-920b4350d3c7-console-serving-cert\") pod \"console-6dff6fbd94-bkrpf\" (UID: 
\"2b41f989-d795-4968-a73f-920b4350d3c7\") " pod="openshift-console/console-6dff6fbd94-bkrpf" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.345623 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42dh5\" (UniqueName: \"kubernetes.io/projected/2b41f989-d795-4968-a73f-920b4350d3c7-kube-api-access-42dh5\") pod \"console-6dff6fbd94-bkrpf\" (UID: \"2b41f989-d795-4968-a73f-920b4350d3c7\") " pod="openshift-console/console-6dff6fbd94-bkrpf" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.377590 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-68874c8d69-scmxl"] Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.378239 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-68874c8d69-scmxl" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.379854 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.380126 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.387138 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-68874c8d69-scmxl"] Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.515830 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6dff6fbd94-bkrpf" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.533129 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/f31e8d59-677f-4203-9104-bc930b45022e-monitoring-plugin-cert\") pod \"monitoring-plugin-68874c8d69-scmxl\" (UID: \"f31e8d59-677f-4203-9104-bc930b45022e\") " pod="openshift-monitoring/monitoring-plugin-68874c8d69-scmxl" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.636914 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/f31e8d59-677f-4203-9104-bc930b45022e-monitoring-plugin-cert\") pod \"monitoring-plugin-68874c8d69-scmxl\" (UID: \"f31e8d59-677f-4203-9104-bc930b45022e\") " pod="openshift-monitoring/monitoring-plugin-68874c8d69-scmxl" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.643258 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/f31e8d59-677f-4203-9104-bc930b45022e-monitoring-plugin-cert\") pod \"monitoring-plugin-68874c8d69-scmxl\" (UID: \"f31e8d59-677f-4203-9104-bc930b45022e\") " pod="openshift-monitoring/monitoring-plugin-68874c8d69-scmxl" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.708713 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-68874c8d69-scmxl" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.952146 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.953952 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.956794 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.956941 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.957048 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.957406 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.957525 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-gqs9s" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.957641 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.958038 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.958359 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.958644 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.958778 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.958890 4754 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-73fbh1ak7dcst" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.964867 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Jan 05 20:10:43 crc kubenswrapper[4754]: I0105 20:10:43.981941 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 20:10:44.010407 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 20:10:44.042717 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/99d5dfe0-666a-44dc-b93c-5c92ef395bcc-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"99d5dfe0-666a-44dc-b93c-5c92ef395bcc\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 20:10:44.043833 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/99d5dfe0-666a-44dc-b93c-5c92ef395bcc-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"99d5dfe0-666a-44dc-b93c-5c92ef395bcc\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 20:10:44.043992 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/99d5dfe0-666a-44dc-b93c-5c92ef395bcc-web-config\") pod \"prometheus-k8s-0\" (UID: \"99d5dfe0-666a-44dc-b93c-5c92ef395bcc\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 20:10:44.044122 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/99d5dfe0-666a-44dc-b93c-5c92ef395bcc-config-out\") pod \"prometheus-k8s-0\" (UID: \"99d5dfe0-666a-44dc-b93c-5c92ef395bcc\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 20:10:44.044236 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6xd9\" (UniqueName: \"kubernetes.io/projected/99d5dfe0-666a-44dc-b93c-5c92ef395bcc-kube-api-access-q6xd9\") pod \"prometheus-k8s-0\" (UID: \"99d5dfe0-666a-44dc-b93c-5c92ef395bcc\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 20:10:44.044355 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/99d5dfe0-666a-44dc-b93c-5c92ef395bcc-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"99d5dfe0-666a-44dc-b93c-5c92ef395bcc\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 20:10:44.044475 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/99d5dfe0-666a-44dc-b93c-5c92ef395bcc-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"99d5dfe0-666a-44dc-b93c-5c92ef395bcc\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 20:10:44.044577 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/99d5dfe0-666a-44dc-b93c-5c92ef395bcc-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"99d5dfe0-666a-44dc-b93c-5c92ef395bcc\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 
20:10:44.044693 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/99d5dfe0-666a-44dc-b93c-5c92ef395bcc-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"99d5dfe0-666a-44dc-b93c-5c92ef395bcc\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 20:10:44.044813 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/99d5dfe0-666a-44dc-b93c-5c92ef395bcc-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"99d5dfe0-666a-44dc-b93c-5c92ef395bcc\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 20:10:44.044939 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/99d5dfe0-666a-44dc-b93c-5c92ef395bcc-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"99d5dfe0-666a-44dc-b93c-5c92ef395bcc\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 20:10:44.045075 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/99d5dfe0-666a-44dc-b93c-5c92ef395bcc-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"99d5dfe0-666a-44dc-b93c-5c92ef395bcc\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 20:10:44.045199 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/99d5dfe0-666a-44dc-b93c-5c92ef395bcc-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: 
\"99d5dfe0-666a-44dc-b93c-5c92ef395bcc\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 20:10:44.045360 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/99d5dfe0-666a-44dc-b93c-5c92ef395bcc-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"99d5dfe0-666a-44dc-b93c-5c92ef395bcc\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 20:10:44.045472 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/99d5dfe0-666a-44dc-b93c-5c92ef395bcc-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"99d5dfe0-666a-44dc-b93c-5c92ef395bcc\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 20:10:44.045570 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/99d5dfe0-666a-44dc-b93c-5c92ef395bcc-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"99d5dfe0-666a-44dc-b93c-5c92ef395bcc\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 20:10:44.045676 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/99d5dfe0-666a-44dc-b93c-5c92ef395bcc-config\") pod \"prometheus-k8s-0\" (UID: \"99d5dfe0-666a-44dc-b93c-5c92ef395bcc\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 20:10:44.045774 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: 
\"kubernetes.io/configmap/99d5dfe0-666a-44dc-b93c-5c92ef395bcc-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"99d5dfe0-666a-44dc-b93c-5c92ef395bcc\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 20:10:44.146827 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/99d5dfe0-666a-44dc-b93c-5c92ef395bcc-web-config\") pod \"prometheus-k8s-0\" (UID: \"99d5dfe0-666a-44dc-b93c-5c92ef395bcc\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 20:10:44.146878 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/99d5dfe0-666a-44dc-b93c-5c92ef395bcc-config-out\") pod \"prometheus-k8s-0\" (UID: \"99d5dfe0-666a-44dc-b93c-5c92ef395bcc\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 20:10:44.146903 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6xd9\" (UniqueName: \"kubernetes.io/projected/99d5dfe0-666a-44dc-b93c-5c92ef395bcc-kube-api-access-q6xd9\") pod \"prometheus-k8s-0\" (UID: \"99d5dfe0-666a-44dc-b93c-5c92ef395bcc\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 20:10:44.146924 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/99d5dfe0-666a-44dc-b93c-5c92ef395bcc-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"99d5dfe0-666a-44dc-b93c-5c92ef395bcc\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 20:10:44.146941 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: 
\"kubernetes.io/secret/99d5dfe0-666a-44dc-b93c-5c92ef395bcc-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"99d5dfe0-666a-44dc-b93c-5c92ef395bcc\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 20:10:44.146959 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/99d5dfe0-666a-44dc-b93c-5c92ef395bcc-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"99d5dfe0-666a-44dc-b93c-5c92ef395bcc\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 20:10:44.146986 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/99d5dfe0-666a-44dc-b93c-5c92ef395bcc-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"99d5dfe0-666a-44dc-b93c-5c92ef395bcc\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 20:10:44.147003 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/99d5dfe0-666a-44dc-b93c-5c92ef395bcc-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"99d5dfe0-666a-44dc-b93c-5c92ef395bcc\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 20:10:44.147027 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/99d5dfe0-666a-44dc-b93c-5c92ef395bcc-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"99d5dfe0-666a-44dc-b93c-5c92ef395bcc\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 20:10:44.147050 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/99d5dfe0-666a-44dc-b93c-5c92ef395bcc-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"99d5dfe0-666a-44dc-b93c-5c92ef395bcc\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 20:10:44.147072 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/99d5dfe0-666a-44dc-b93c-5c92ef395bcc-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"99d5dfe0-666a-44dc-b93c-5c92ef395bcc\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 20:10:44.147104 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/99d5dfe0-666a-44dc-b93c-5c92ef395bcc-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"99d5dfe0-666a-44dc-b93c-5c92ef395bcc\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 20:10:44.147119 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/99d5dfe0-666a-44dc-b93c-5c92ef395bcc-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"99d5dfe0-666a-44dc-b93c-5c92ef395bcc\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 20:10:44.147135 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/99d5dfe0-666a-44dc-b93c-5c92ef395bcc-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"99d5dfe0-666a-44dc-b93c-5c92ef395bcc\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 20:10:44.147154 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/99d5dfe0-666a-44dc-b93c-5c92ef395bcc-config\") pod \"prometheus-k8s-0\" (UID: \"99d5dfe0-666a-44dc-b93c-5c92ef395bcc\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 20:10:44.147172 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/99d5dfe0-666a-44dc-b93c-5c92ef395bcc-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"99d5dfe0-666a-44dc-b93c-5c92ef395bcc\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 20:10:44.147203 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/99d5dfe0-666a-44dc-b93c-5c92ef395bcc-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"99d5dfe0-666a-44dc-b93c-5c92ef395bcc\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 20:10:44.147223 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/99d5dfe0-666a-44dc-b93c-5c92ef395bcc-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"99d5dfe0-666a-44dc-b93c-5c92ef395bcc\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 20:10:44.148044 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/99d5dfe0-666a-44dc-b93c-5c92ef395bcc-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"99d5dfe0-666a-44dc-b93c-5c92ef395bcc\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 20:10:44.150751 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/99d5dfe0-666a-44dc-b93c-5c92ef395bcc-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"99d5dfe0-666a-44dc-b93c-5c92ef395bcc\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 20:10:44.151517 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/99d5dfe0-666a-44dc-b93c-5c92ef395bcc-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"99d5dfe0-666a-44dc-b93c-5c92ef395bcc\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 20:10:44.153060 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/99d5dfe0-666a-44dc-b93c-5c92ef395bcc-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"99d5dfe0-666a-44dc-b93c-5c92ef395bcc\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 20:10:44.153317 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/99d5dfe0-666a-44dc-b93c-5c92ef395bcc-web-config\") pod \"prometheus-k8s-0\" (UID: \"99d5dfe0-666a-44dc-b93c-5c92ef395bcc\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 20:10:44.153655 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/99d5dfe0-666a-44dc-b93c-5c92ef395bcc-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"99d5dfe0-666a-44dc-b93c-5c92ef395bcc\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 20:10:44.154041 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: 
\"kubernetes.io/secret/99d5dfe0-666a-44dc-b93c-5c92ef395bcc-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"99d5dfe0-666a-44dc-b93c-5c92ef395bcc\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 20:10:44.156753 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/99d5dfe0-666a-44dc-b93c-5c92ef395bcc-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"99d5dfe0-666a-44dc-b93c-5c92ef395bcc\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 20:10:44.156955 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/99d5dfe0-666a-44dc-b93c-5c92ef395bcc-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"99d5dfe0-666a-44dc-b93c-5c92ef395bcc\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 20:10:44.157861 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/99d5dfe0-666a-44dc-b93c-5c92ef395bcc-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"99d5dfe0-666a-44dc-b93c-5c92ef395bcc\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 20:10:44.157905 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/99d5dfe0-666a-44dc-b93c-5c92ef395bcc-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"99d5dfe0-666a-44dc-b93c-5c92ef395bcc\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 20:10:44.158420 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/99d5dfe0-666a-44dc-b93c-5c92ef395bcc-config\") pod \"prometheus-k8s-0\" (UID: \"99d5dfe0-666a-44dc-b93c-5c92ef395bcc\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 20:10:44.158641 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/99d5dfe0-666a-44dc-b93c-5c92ef395bcc-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"99d5dfe0-666a-44dc-b93c-5c92ef395bcc\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 20:10:44.161620 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/99d5dfe0-666a-44dc-b93c-5c92ef395bcc-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"99d5dfe0-666a-44dc-b93c-5c92ef395bcc\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 20:10:44.162437 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/99d5dfe0-666a-44dc-b93c-5c92ef395bcc-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"99d5dfe0-666a-44dc-b93c-5c92ef395bcc\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 20:10:44.163243 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/99d5dfe0-666a-44dc-b93c-5c92ef395bcc-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"99d5dfe0-666a-44dc-b93c-5c92ef395bcc\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 20:10:44.175394 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/99d5dfe0-666a-44dc-b93c-5c92ef395bcc-config-out\") pod 
\"prometheus-k8s-0\" (UID: \"99d5dfe0-666a-44dc-b93c-5c92ef395bcc\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 20:10:44.176548 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6xd9\" (UniqueName: \"kubernetes.io/projected/99d5dfe0-666a-44dc-b93c-5c92ef395bcc-kube-api-access-q6xd9\") pod \"prometheus-k8s-0\" (UID: \"99d5dfe0-666a-44dc-b93c-5c92ef395bcc\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:44 crc kubenswrapper[4754]: I0105 20:10:44.285403 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:45 crc kubenswrapper[4754]: I0105 20:10:45.240993 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-68874c8d69-scmxl"] Jan 05 20:10:45 crc kubenswrapper[4754]: I0105 20:10:45.523659 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6dff6fbd94-bkrpf"] Jan 05 20:10:45 crc kubenswrapper[4754]: I0105 20:10:45.531186 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-9b97c7f7b-p9nz9"] Jan 05 20:10:45 crc kubenswrapper[4754]: W0105 20:10:45.549098 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0ea00d6_9ff0_4a23_8481_369189bdf8f5.slice/crio-ce333c43126997d89a782b220c93923ec6a787408b2f4fa9f587f16bd8149e8d WatchSource:0}: Error finding container ce333c43126997d89a782b220c93923ec6a787408b2f4fa9f587f16bd8149e8d: Status 404 returned error can't find the container with id ce333c43126997d89a782b220c93923ec6a787408b2f4fa9f587f16bd8149e8d Jan 05 20:10:45 crc kubenswrapper[4754]: I0105 20:10:45.552448 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Jan 05 20:10:45 crc kubenswrapper[4754]: W0105 20:10:45.558904 4754 manager.go:1169] Failed 
to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99d5dfe0_666a_44dc_b93c_5c92ef395bcc.slice/crio-382ba81bfa79299eadcb2095d7ef193d7f7ad89eacc491534298734a58597f2e WatchSource:0}: Error finding container 382ba81bfa79299eadcb2095d7ef193d7f7ad89eacc491534298734a58597f2e: Status 404 returned error can't find the container with id 382ba81bfa79299eadcb2095d7ef193d7f7ad89eacc491534298734a58597f2e Jan 05 20:10:46 crc kubenswrapper[4754]: I0105 20:10:46.175468 4754 generic.go:334] "Generic (PLEG): container finished" podID="99d5dfe0-666a-44dc-b93c-5c92ef395bcc" containerID="45481ee723489a3b1c4bd1953022c0ae9a3210fa157b9aa89f78f28fe9a9453b" exitCode=0 Jan 05 20:10:46 crc kubenswrapper[4754]: I0105 20:10:46.175952 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"99d5dfe0-666a-44dc-b93c-5c92ef395bcc","Type":"ContainerDied","Data":"45481ee723489a3b1c4bd1953022c0ae9a3210fa157b9aa89f78f28fe9a9453b"} Jan 05 20:10:46 crc kubenswrapper[4754]: I0105 20:10:46.175983 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"99d5dfe0-666a-44dc-b93c-5c92ef395bcc","Type":"ContainerStarted","Data":"382ba81bfa79299eadcb2095d7ef193d7f7ad89eacc491534298734a58597f2e"} Jan 05 20:10:46 crc kubenswrapper[4754]: I0105 20:10:46.188027 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b35d7e1f-26c3-4d29-8108-8536fa361112","Type":"ContainerStarted","Data":"9c0985040a615757c7791138fdd69e65ab105aaa21bb2aaa2d808a0f954ce2e5"} Jan 05 20:10:46 crc kubenswrapper[4754]: I0105 20:10:46.188081 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b35d7e1f-26c3-4d29-8108-8536fa361112","Type":"ContainerStarted","Data":"f05a2e9001cfc9dc225d902992206f469be4f6d0e815bd1eaaad905e7516475b"} Jan 05 20:10:46 crc 
kubenswrapper[4754]: I0105 20:10:46.188095 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b35d7e1f-26c3-4d29-8108-8536fa361112","Type":"ContainerStarted","Data":"451ebeea1dd6539070f7f3ea02381745407e1faefcac982d674f031a24c82ead"} Jan 05 20:10:46 crc kubenswrapper[4754]: I0105 20:10:46.188108 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b35d7e1f-26c3-4d29-8108-8536fa361112","Type":"ContainerStarted","Data":"519ce1f1e7056a47624883e77f80ed179a6f81b89dba6431f622e12df21d2da4"} Jan 05 20:10:46 crc kubenswrapper[4754]: I0105 20:10:46.188120 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b35d7e1f-26c3-4d29-8108-8536fa361112","Type":"ContainerStarted","Data":"798bd69fb1e07ee96b85d878fcad3c222b7c18392709ade9ae5292aef53482dd"} Jan 05 20:10:46 crc kubenswrapper[4754]: I0105 20:10:46.190309 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-869b668f44-6cplm" event={"ID":"0597c753-3003-4d69-ad03-7e12215f7274","Type":"ContainerStarted","Data":"0f193c24a535ffefacc1e5852109124c7148c8e5c5d55ec6303451d6f3abd9ae"} Jan 05 20:10:46 crc kubenswrapper[4754]: I0105 20:10:46.190335 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-869b668f44-6cplm" event={"ID":"0597c753-3003-4d69-ad03-7e12215f7274","Type":"ContainerStarted","Data":"d1650f583ab5ad09575c48c59de44face1a313b367ce018c8b6d893c83536111"} Jan 05 20:10:46 crc kubenswrapper[4754]: I0105 20:10:46.190345 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-869b668f44-6cplm" event={"ID":"0597c753-3003-4d69-ad03-7e12215f7274","Type":"ContainerStarted","Data":"193d03a53885eb757c7af2ac6dab2813a83832e2143a5110afb05a189da5c62b"} Jan 05 20:10:46 crc kubenswrapper[4754]: I0105 20:10:46.195789 4754 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-x7twt" event={"ID":"c1a46f64-d05d-4872-828f-a9207d53fdca","Type":"ContainerStarted","Data":"189ce8b4edd6cf686a968406a0e048b617e2aeef38cde8712fdb9d267b05aac9"} Jan 05 20:10:46 crc kubenswrapper[4754]: I0105 20:10:46.200500 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-9b97c7f7b-p9nz9" event={"ID":"c0ea00d6-9ff0-4a23-8481-369189bdf8f5","Type":"ContainerStarted","Data":"ce333c43126997d89a782b220c93923ec6a787408b2f4fa9f587f16bd8149e8d"} Jan 05 20:10:46 crc kubenswrapper[4754]: I0105 20:10:46.202814 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6dff6fbd94-bkrpf" event={"ID":"2b41f989-d795-4968-a73f-920b4350d3c7","Type":"ContainerStarted","Data":"c92951d47ccf6368745bc8f8c8cfbc73d26c265743d9c9e9cc278f26282b29f1"} Jan 05 20:10:46 crc kubenswrapper[4754]: I0105 20:10:46.202850 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6dff6fbd94-bkrpf" event={"ID":"2b41f989-d795-4968-a73f-920b4350d3c7","Type":"ContainerStarted","Data":"525044c2c47e9c5fbd57ae7014cc7a3d6a92efe0727250e48d7857c8b0480642"} Jan 05 20:10:46 crc kubenswrapper[4754]: I0105 20:10:46.216678 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-68874c8d69-scmxl" event={"ID":"f31e8d59-677f-4203-9104-bc930b45022e","Type":"ContainerStarted","Data":"7fc05ebae9414c23364c0953a9060d5d7cd6fb52538e3ad298a2c980e3479e6f"} Jan 05 20:10:46 crc kubenswrapper[4754]: I0105 20:10:46.238206 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-x7twt" podStartSLOduration=6.338994145 podStartE2EDuration="9.238186564s" podCreationTimestamp="2026-01-05 20:10:37 +0000 UTC" firstStartedPulling="2026-01-05 20:10:37.996280886 +0000 UTC m=+324.705464760" lastFinishedPulling="2026-01-05 20:10:40.895473305 +0000 UTC 
m=+327.604657179" observedRunningTime="2026-01-05 20:10:46.232427839 +0000 UTC m=+332.941611723" watchObservedRunningTime="2026-01-05 20:10:46.238186564 +0000 UTC m=+332.947370438" Jan 05 20:10:46 crc kubenswrapper[4754]: I0105 20:10:46.253818 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6dff6fbd94-bkrpf" podStartSLOduration=3.253801655 podStartE2EDuration="3.253801655s" podCreationTimestamp="2026-01-05 20:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:10:46.247459524 +0000 UTC m=+332.956643418" watchObservedRunningTime="2026-01-05 20:10:46.253801655 +0000 UTC m=+332.962985529" Jan 05 20:10:48 crc kubenswrapper[4754]: I0105 20:10:48.109335 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 20:10:48 crc kubenswrapper[4754]: I0105 20:10:48.110026 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 20:10:48 crc kubenswrapper[4754]: I0105 20:10:48.231649 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-9b97c7f7b-p9nz9" event={"ID":"c0ea00d6-9ff0-4a23-8481-369189bdf8f5","Type":"ContainerStarted","Data":"0127b0e3b2a13d363788c275d2f7f1f35f056b1662cab085f19531da86a5b50a"} Jan 05 20:10:48 crc kubenswrapper[4754]: I0105 20:10:48.233467 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/monitoring-plugin-68874c8d69-scmxl" event={"ID":"f31e8d59-677f-4203-9104-bc930b45022e","Type":"ContainerStarted","Data":"5170d808dafa7ad799b2dc9a72fc0f1589d115b65fa87493294d4b9f3564fabd"} Jan 05 20:10:48 crc kubenswrapper[4754]: I0105 20:10:48.233682 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-68874c8d69-scmxl" Jan 05 20:10:48 crc kubenswrapper[4754]: I0105 20:10:48.241731 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b35d7e1f-26c3-4d29-8108-8536fa361112","Type":"ContainerStarted","Data":"9fb583d6bef0c07d474b3e59294d45e116a6219ae045c29cf749587f390d0483"} Jan 05 20:10:48 crc kubenswrapper[4754]: I0105 20:10:48.243837 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-68874c8d69-scmxl" Jan 05 20:10:48 crc kubenswrapper[4754]: I0105 20:10:48.250876 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-869b668f44-6cplm" event={"ID":"0597c753-3003-4d69-ad03-7e12215f7274","Type":"ContainerStarted","Data":"25b5d061aae8b6b6c865607e381ab777c21ea99f8850eda1c4ebf4b51521fe06"} Jan 05 20:10:48 crc kubenswrapper[4754]: I0105 20:10:48.250941 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-869b668f44-6cplm" event={"ID":"0597c753-3003-4d69-ad03-7e12215f7274","Type":"ContainerStarted","Data":"ea14fab3bf01cac7bba3f790a70227776a8456e03d531d6c97ea99baa47248d4"} Jan 05 20:10:48 crc kubenswrapper[4754]: I0105 20:10:48.250953 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-869b668f44-6cplm" event={"ID":"0597c753-3003-4d69-ad03-7e12215f7274","Type":"ContainerStarted","Data":"b72cf58923092f5e56c23033789678a89137395512df61ee3449e33d8f3a71d4"} Jan 05 20:10:48 crc kubenswrapper[4754]: I0105 20:10:48.251235 4754 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-869b668f44-6cplm" Jan 05 20:10:48 crc kubenswrapper[4754]: I0105 20:10:48.261185 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-9b97c7f7b-p9nz9" podStartSLOduration=4.3293398100000005 podStartE2EDuration="6.261166943s" podCreationTimestamp="2026-01-05 20:10:42 +0000 UTC" firstStartedPulling="2026-01-05 20:10:45.554746609 +0000 UTC m=+332.263930503" lastFinishedPulling="2026-01-05 20:10:47.486573722 +0000 UTC m=+334.195757636" observedRunningTime="2026-01-05 20:10:48.256100367 +0000 UTC m=+334.965284261" watchObservedRunningTime="2026-01-05 20:10:48.261166943 +0000 UTC m=+334.970350817" Jan 05 20:10:48 crc kubenswrapper[4754]: I0105 20:10:48.294704 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=2.398415253 podStartE2EDuration="10.294679386s" podCreationTimestamp="2026-01-05 20:10:38 +0000 UTC" firstStartedPulling="2026-01-05 20:10:39.5918636 +0000 UTC m=+326.301047494" lastFinishedPulling="2026-01-05 20:10:47.488127703 +0000 UTC m=+334.197311627" observedRunningTime="2026-01-05 20:10:48.282232311 +0000 UTC m=+334.991416225" watchObservedRunningTime="2026-01-05 20:10:48.294679386 +0000 UTC m=+335.003863270" Jan 05 20:10:48 crc kubenswrapper[4754]: I0105 20:10:48.314450 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-68874c8d69-scmxl" podStartSLOduration=3.12080963 podStartE2EDuration="5.314434038s" podCreationTimestamp="2026-01-05 20:10:43 +0000 UTC" firstStartedPulling="2026-01-05 20:10:45.269141023 +0000 UTC m=+331.978324897" lastFinishedPulling="2026-01-05 20:10:47.462765391 +0000 UTC m=+334.171949305" observedRunningTime="2026-01-05 20:10:48.313592266 +0000 UTC m=+335.022776140" watchObservedRunningTime="2026-01-05 20:10:48.314434038 +0000 UTC 
m=+335.023617912" Jan 05 20:10:48 crc kubenswrapper[4754]: I0105 20:10:48.370654 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-869b668f44-6cplm" podStartSLOduration=3.032816202 podStartE2EDuration="9.370631993s" podCreationTimestamp="2026-01-05 20:10:39 +0000 UTC" firstStartedPulling="2026-01-05 20:10:41.151910735 +0000 UTC m=+327.861094609" lastFinishedPulling="2026-01-05 20:10:47.489726516 +0000 UTC m=+334.198910400" observedRunningTime="2026-01-05 20:10:48.369952814 +0000 UTC m=+335.079136688" watchObservedRunningTime="2026-01-05 20:10:48.370631993 +0000 UTC m=+335.079815877" Jan 05 20:10:50 crc kubenswrapper[4754]: I0105 20:10:50.092228 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-869b668f44-6cplm" Jan 05 20:10:51 crc kubenswrapper[4754]: I0105 20:10:51.274128 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"99d5dfe0-666a-44dc-b93c-5c92ef395bcc","Type":"ContainerStarted","Data":"c1ec862729bd0e48beaac9dcdf862b76ed564df68d5ab1aee8c4903e844d3b6a"} Jan 05 20:10:51 crc kubenswrapper[4754]: I0105 20:10:51.274563 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"99d5dfe0-666a-44dc-b93c-5c92ef395bcc","Type":"ContainerStarted","Data":"b27aa520314af0dfd6eb8af7de97a8b8247b633e908c12145d8b361b9f0afc18"} Jan 05 20:10:51 crc kubenswrapper[4754]: I0105 20:10:51.274588 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"99d5dfe0-666a-44dc-b93c-5c92ef395bcc","Type":"ContainerStarted","Data":"c0d1d5d163c941b6275325f5b2f6614e1de346dc6e5975c1929543e2a6145607"} Jan 05 20:10:51 crc kubenswrapper[4754]: I0105 20:10:51.274605 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"99d5dfe0-666a-44dc-b93c-5c92ef395bcc","Type":"ContainerStarted","Data":"58812e3a9a829416cafd57cee1c9910d4ee6d89eb43df99de96b8f48cc146ba0"} Jan 05 20:10:51 crc kubenswrapper[4754]: I0105 20:10:51.274622 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"99d5dfe0-666a-44dc-b93c-5c92ef395bcc","Type":"ContainerStarted","Data":"1c74905829919ad4c36829f60eb6a64d803d6ffc883acf39bbfa49f3e014d7b4"} Jan 05 20:10:52 crc kubenswrapper[4754]: I0105 20:10:52.286958 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"99d5dfe0-666a-44dc-b93c-5c92ef395bcc","Type":"ContainerStarted","Data":"88377a9d528173bd716d85cfccdb0b66dbd19d362b413d8fc98061b445754c34"} Jan 05 20:10:52 crc kubenswrapper[4754]: I0105 20:10:52.367993 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=5.154504429 podStartE2EDuration="9.367975341s" podCreationTimestamp="2026-01-05 20:10:43 +0000 UTC" firstStartedPulling="2026-01-05 20:10:46.17788409 +0000 UTC m=+332.887067964" lastFinishedPulling="2026-01-05 20:10:50.391354962 +0000 UTC m=+337.100538876" observedRunningTime="2026-01-05 20:10:52.367463387 +0000 UTC m=+339.076647281" watchObservedRunningTime="2026-01-05 20:10:52.367975341 +0000 UTC m=+339.077159225" Jan 05 20:10:53 crc kubenswrapper[4754]: I0105 20:10:53.518596 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6dff6fbd94-bkrpf" Jan 05 20:10:53 crc kubenswrapper[4754]: I0105 20:10:53.519562 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6dff6fbd94-bkrpf" Jan 05 20:10:53 crc kubenswrapper[4754]: I0105 20:10:53.529827 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6dff6fbd94-bkrpf" Jan 05 20:10:54 crc kubenswrapper[4754]: 
I0105 20:10:54.285625 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:10:54 crc kubenswrapper[4754]: I0105 20:10:54.309626 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6dff6fbd94-bkrpf" Jan 05 20:10:54 crc kubenswrapper[4754]: I0105 20:10:54.390492 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-4xgft"] Jan 05 20:11:03 crc kubenswrapper[4754]: I0105 20:11:03.263404 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-9b97c7f7b-p9nz9" Jan 05 20:11:03 crc kubenswrapper[4754]: I0105 20:11:03.263837 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-9b97c7f7b-p9nz9" Jan 05 20:11:18 crc kubenswrapper[4754]: I0105 20:11:18.109399 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 20:11:18 crc kubenswrapper[4754]: I0105 20:11:18.109821 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 20:11:19 crc kubenswrapper[4754]: I0105 20:11:19.444895 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-4xgft" podUID="f3ea7eb1-87d5-476b-bb30-2c94421afc41" containerName="console" containerID="cri-o://5a8a7393d6210226c031e2fcdbef9c6e9ba701f495c385f5d6454c209a9908c1" gracePeriod=15 Jan 05 20:11:19 crc 
kubenswrapper[4754]: I0105 20:11:19.903024 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-4xgft_f3ea7eb1-87d5-476b-bb30-2c94421afc41/console/0.log" Jan 05 20:11:19 crc kubenswrapper[4754]: I0105 20:11:19.903521 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-4xgft" Jan 05 20:11:20 crc kubenswrapper[4754]: I0105 20:11:20.008672 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f3ea7eb1-87d5-476b-bb30-2c94421afc41-console-oauth-config\") pod \"f3ea7eb1-87d5-476b-bb30-2c94421afc41\" (UID: \"f3ea7eb1-87d5-476b-bb30-2c94421afc41\") " Jan 05 20:11:20 crc kubenswrapper[4754]: I0105 20:11:20.008759 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f3ea7eb1-87d5-476b-bb30-2c94421afc41-oauth-serving-cert\") pod \"f3ea7eb1-87d5-476b-bb30-2c94421afc41\" (UID: \"f3ea7eb1-87d5-476b-bb30-2c94421afc41\") " Jan 05 20:11:20 crc kubenswrapper[4754]: I0105 20:11:20.008792 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3ea7eb1-87d5-476b-bb30-2c94421afc41-trusted-ca-bundle\") pod \"f3ea7eb1-87d5-476b-bb30-2c94421afc41\" (UID: \"f3ea7eb1-87d5-476b-bb30-2c94421afc41\") " Jan 05 20:11:20 crc kubenswrapper[4754]: I0105 20:11:20.008824 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vc7wt\" (UniqueName: \"kubernetes.io/projected/f3ea7eb1-87d5-476b-bb30-2c94421afc41-kube-api-access-vc7wt\") pod \"f3ea7eb1-87d5-476b-bb30-2c94421afc41\" (UID: \"f3ea7eb1-87d5-476b-bb30-2c94421afc41\") " Jan 05 20:11:20 crc kubenswrapper[4754]: I0105 20:11:20.008867 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"console-config\" (UniqueName: \"kubernetes.io/configmap/f3ea7eb1-87d5-476b-bb30-2c94421afc41-console-config\") pod \"f3ea7eb1-87d5-476b-bb30-2c94421afc41\" (UID: \"f3ea7eb1-87d5-476b-bb30-2c94421afc41\") " Jan 05 20:11:20 crc kubenswrapper[4754]: I0105 20:11:20.008895 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f3ea7eb1-87d5-476b-bb30-2c94421afc41-service-ca\") pod \"f3ea7eb1-87d5-476b-bb30-2c94421afc41\" (UID: \"f3ea7eb1-87d5-476b-bb30-2c94421afc41\") " Jan 05 20:11:20 crc kubenswrapper[4754]: I0105 20:11:20.008913 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f3ea7eb1-87d5-476b-bb30-2c94421afc41-console-serving-cert\") pod \"f3ea7eb1-87d5-476b-bb30-2c94421afc41\" (UID: \"f3ea7eb1-87d5-476b-bb30-2c94421afc41\") " Jan 05 20:11:20 crc kubenswrapper[4754]: I0105 20:11:20.009969 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3ea7eb1-87d5-476b-bb30-2c94421afc41-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f3ea7eb1-87d5-476b-bb30-2c94421afc41" (UID: "f3ea7eb1-87d5-476b-bb30-2c94421afc41"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:11:20 crc kubenswrapper[4754]: I0105 20:11:20.010126 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3ea7eb1-87d5-476b-bb30-2c94421afc41-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "f3ea7eb1-87d5-476b-bb30-2c94421afc41" (UID: "f3ea7eb1-87d5-476b-bb30-2c94421afc41"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:11:20 crc kubenswrapper[4754]: I0105 20:11:20.010407 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3ea7eb1-87d5-476b-bb30-2c94421afc41-console-config" (OuterVolumeSpecName: "console-config") pod "f3ea7eb1-87d5-476b-bb30-2c94421afc41" (UID: "f3ea7eb1-87d5-476b-bb30-2c94421afc41"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:11:20 crc kubenswrapper[4754]: I0105 20:11:20.011376 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3ea7eb1-87d5-476b-bb30-2c94421afc41-service-ca" (OuterVolumeSpecName: "service-ca") pod "f3ea7eb1-87d5-476b-bb30-2c94421afc41" (UID: "f3ea7eb1-87d5-476b-bb30-2c94421afc41"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:11:20 crc kubenswrapper[4754]: I0105 20:11:20.015743 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3ea7eb1-87d5-476b-bb30-2c94421afc41-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "f3ea7eb1-87d5-476b-bb30-2c94421afc41" (UID: "f3ea7eb1-87d5-476b-bb30-2c94421afc41"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:11:20 crc kubenswrapper[4754]: I0105 20:11:20.020147 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3ea7eb1-87d5-476b-bb30-2c94421afc41-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "f3ea7eb1-87d5-476b-bb30-2c94421afc41" (UID: "f3ea7eb1-87d5-476b-bb30-2c94421afc41"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:11:20 crc kubenswrapper[4754]: I0105 20:11:20.036455 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3ea7eb1-87d5-476b-bb30-2c94421afc41-kube-api-access-vc7wt" (OuterVolumeSpecName: "kube-api-access-vc7wt") pod "f3ea7eb1-87d5-476b-bb30-2c94421afc41" (UID: "f3ea7eb1-87d5-476b-bb30-2c94421afc41"). InnerVolumeSpecName "kube-api-access-vc7wt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:11:20 crc kubenswrapper[4754]: I0105 20:11:20.110261 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vc7wt\" (UniqueName: \"kubernetes.io/projected/f3ea7eb1-87d5-476b-bb30-2c94421afc41-kube-api-access-vc7wt\") on node \"crc\" DevicePath \"\"" Jan 05 20:11:20 crc kubenswrapper[4754]: I0105 20:11:20.110322 4754 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f3ea7eb1-87d5-476b-bb30-2c94421afc41-console-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:11:20 crc kubenswrapper[4754]: I0105 20:11:20.110332 4754 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f3ea7eb1-87d5-476b-bb30-2c94421afc41-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 20:11:20 crc kubenswrapper[4754]: I0105 20:11:20.110341 4754 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f3ea7eb1-87d5-476b-bb30-2c94421afc41-service-ca\") on node \"crc\" DevicePath \"\"" Jan 05 20:11:20 crc kubenswrapper[4754]: I0105 20:11:20.110351 4754 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f3ea7eb1-87d5-476b-bb30-2c94421afc41-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:11:20 crc kubenswrapper[4754]: I0105 20:11:20.110359 4754 reconciler_common.go:293] "Volume detached for 
volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f3ea7eb1-87d5-476b-bb30-2c94421afc41-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 20:11:20 crc kubenswrapper[4754]: I0105 20:11:20.110367 4754 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3ea7eb1-87d5-476b-bb30-2c94421afc41-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:11:20 crc kubenswrapper[4754]: I0105 20:11:20.520157 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-4xgft_f3ea7eb1-87d5-476b-bb30-2c94421afc41/console/0.log" Jan 05 20:11:20 crc kubenswrapper[4754]: I0105 20:11:20.520207 4754 generic.go:334] "Generic (PLEG): container finished" podID="f3ea7eb1-87d5-476b-bb30-2c94421afc41" containerID="5a8a7393d6210226c031e2fcdbef9c6e9ba701f495c385f5d6454c209a9908c1" exitCode=2 Jan 05 20:11:20 crc kubenswrapper[4754]: I0105 20:11:20.520239 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4xgft" event={"ID":"f3ea7eb1-87d5-476b-bb30-2c94421afc41","Type":"ContainerDied","Data":"5a8a7393d6210226c031e2fcdbef9c6e9ba701f495c385f5d6454c209a9908c1"} Jan 05 20:11:20 crc kubenswrapper[4754]: I0105 20:11:20.520270 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4xgft" event={"ID":"f3ea7eb1-87d5-476b-bb30-2c94421afc41","Type":"ContainerDied","Data":"e3bf420bd99de0e76d634ed9d8b91f9f1c4bd0fbf4094740ba1132c6f1e1adf8"} Jan 05 20:11:20 crc kubenswrapper[4754]: I0105 20:11:20.520310 4754 scope.go:117] "RemoveContainer" containerID="5a8a7393d6210226c031e2fcdbef9c6e9ba701f495c385f5d6454c209a9908c1" Jan 05 20:11:20 crc kubenswrapper[4754]: I0105 20:11:20.520454 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-4xgft" Jan 05 20:11:20 crc kubenswrapper[4754]: I0105 20:11:20.546640 4754 scope.go:117] "RemoveContainer" containerID="5a8a7393d6210226c031e2fcdbef9c6e9ba701f495c385f5d6454c209a9908c1" Jan 05 20:11:20 crc kubenswrapper[4754]: E0105 20:11:20.547115 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a8a7393d6210226c031e2fcdbef9c6e9ba701f495c385f5d6454c209a9908c1\": container with ID starting with 5a8a7393d6210226c031e2fcdbef9c6e9ba701f495c385f5d6454c209a9908c1 not found: ID does not exist" containerID="5a8a7393d6210226c031e2fcdbef9c6e9ba701f495c385f5d6454c209a9908c1" Jan 05 20:11:20 crc kubenswrapper[4754]: I0105 20:11:20.547173 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a8a7393d6210226c031e2fcdbef9c6e9ba701f495c385f5d6454c209a9908c1"} err="failed to get container status \"5a8a7393d6210226c031e2fcdbef9c6e9ba701f495c385f5d6454c209a9908c1\": rpc error: code = NotFound desc = could not find container \"5a8a7393d6210226c031e2fcdbef9c6e9ba701f495c385f5d6454c209a9908c1\": container with ID starting with 5a8a7393d6210226c031e2fcdbef9c6e9ba701f495c385f5d6454c209a9908c1 not found: ID does not exist" Jan 05 20:11:20 crc kubenswrapper[4754]: I0105 20:11:20.559492 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-4xgft"] Jan 05 20:11:20 crc kubenswrapper[4754]: I0105 20:11:20.564280 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-4xgft"] Jan 05 20:11:21 crc kubenswrapper[4754]: I0105 20:11:21.599874 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3ea7eb1-87d5-476b-bb30-2c94421afc41" path="/var/lib/kubelet/pods/f3ea7eb1-87d5-476b-bb30-2c94421afc41/volumes" Jan 05 20:11:23 crc kubenswrapper[4754]: I0105 20:11:23.273431 4754 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-monitoring/metrics-server-9b97c7f7b-p9nz9" Jan 05 20:11:23 crc kubenswrapper[4754]: I0105 20:11:23.281081 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-9b97c7f7b-p9nz9" Jan 05 20:11:44 crc kubenswrapper[4754]: I0105 20:11:44.286168 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:11:44 crc kubenswrapper[4754]: I0105 20:11:44.337814 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:11:44 crc kubenswrapper[4754]: I0105 20:11:44.777359 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Jan 05 20:11:48 crc kubenswrapper[4754]: I0105 20:11:48.109029 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 20:11:48 crc kubenswrapper[4754]: I0105 20:11:48.109557 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 20:11:48 crc kubenswrapper[4754]: I0105 20:11:48.109655 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" Jan 05 20:11:48 crc kubenswrapper[4754]: I0105 20:11:48.110693 4754 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"c1f64966f183ebad7c2a6c9fd69efae0e7933296b4a18be79d5bf6fe79950c91"} pod="openshift-machine-config-operator/machine-config-daemon-pkzls" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 05 20:11:48 crc kubenswrapper[4754]: I0105 20:11:48.110795 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" containerID="cri-o://c1f64966f183ebad7c2a6c9fd69efae0e7933296b4a18be79d5bf6fe79950c91" gracePeriod=600 Jan 05 20:11:48 crc kubenswrapper[4754]: E0105 20:11:48.225805 4754 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ce145f2_f010_4086_963c_23e68ff9e280.slice/crio-conmon-c1f64966f183ebad7c2a6c9fd69efae0e7933296b4a18be79d5bf6fe79950c91.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ce145f2_f010_4086_963c_23e68ff9e280.slice/crio-c1f64966f183ebad7c2a6c9fd69efae0e7933296b4a18be79d5bf6fe79950c91.scope\": RecentStats: unable to find data in memory cache]" Jan 05 20:11:48 crc kubenswrapper[4754]: E0105 20:11:48.225832 4754 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ce145f2_f010_4086_963c_23e68ff9e280.slice/crio-conmon-c1f64966f183ebad7c2a6c9fd69efae0e7933296b4a18be79d5bf6fe79950c91.scope\": RecentStats: unable to find data in memory cache]" Jan 05 20:11:48 crc kubenswrapper[4754]: I0105 20:11:48.764873 4754 generic.go:334] "Generic (PLEG): container finished" podID="1ce145f2-f010-4086-963c-23e68ff9e280" containerID="c1f64966f183ebad7c2a6c9fd69efae0e7933296b4a18be79d5bf6fe79950c91" exitCode=0 Jan 
05 20:11:48 crc kubenswrapper[4754]: I0105 20:11:48.765022 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" event={"ID":"1ce145f2-f010-4086-963c-23e68ff9e280","Type":"ContainerDied","Data":"c1f64966f183ebad7c2a6c9fd69efae0e7933296b4a18be79d5bf6fe79950c91"} Jan 05 20:11:48 crc kubenswrapper[4754]: I0105 20:11:48.765281 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" event={"ID":"1ce145f2-f010-4086-963c-23e68ff9e280","Type":"ContainerStarted","Data":"195c8d8797ae7467acd40c604546092b92a46f48a1381375300062c8b853d320"} Jan 05 20:11:48 crc kubenswrapper[4754]: I0105 20:11:48.765322 4754 scope.go:117] "RemoveContainer" containerID="56b7d7218a5605f87f91e22b9cc79e416eedb63b257d6326203830990e6ddc5c" Jan 05 20:12:05 crc kubenswrapper[4754]: I0105 20:12:05.127588 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7c45669d6f-99bq7"] Jan 05 20:12:05 crc kubenswrapper[4754]: E0105 20:12:05.128727 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3ea7eb1-87d5-476b-bb30-2c94421afc41" containerName="console" Jan 05 20:12:05 crc kubenswrapper[4754]: I0105 20:12:05.128754 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3ea7eb1-87d5-476b-bb30-2c94421afc41" containerName="console" Jan 05 20:12:05 crc kubenswrapper[4754]: I0105 20:12:05.129720 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3ea7eb1-87d5-476b-bb30-2c94421afc41" containerName="console" Jan 05 20:12:05 crc kubenswrapper[4754]: I0105 20:12:05.130689 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7c45669d6f-99bq7" Jan 05 20:12:05 crc kubenswrapper[4754]: I0105 20:12:05.156260 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7c45669d6f-99bq7"] Jan 05 20:12:05 crc kubenswrapper[4754]: I0105 20:12:05.173990 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r52h\" (UniqueName: \"kubernetes.io/projected/f7928ad3-b615-4da0-a301-7d74c2802904-kube-api-access-5r52h\") pod \"console-7c45669d6f-99bq7\" (UID: \"f7928ad3-b615-4da0-a301-7d74c2802904\") " pod="openshift-console/console-7c45669d6f-99bq7" Jan 05 20:12:05 crc kubenswrapper[4754]: I0105 20:12:05.174042 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f7928ad3-b615-4da0-a301-7d74c2802904-oauth-serving-cert\") pod \"console-7c45669d6f-99bq7\" (UID: \"f7928ad3-b615-4da0-a301-7d74c2802904\") " pod="openshift-console/console-7c45669d6f-99bq7" Jan 05 20:12:05 crc kubenswrapper[4754]: I0105 20:12:05.174075 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f7928ad3-b615-4da0-a301-7d74c2802904-service-ca\") pod \"console-7c45669d6f-99bq7\" (UID: \"f7928ad3-b615-4da0-a301-7d74c2802904\") " pod="openshift-console/console-7c45669d6f-99bq7" Jan 05 20:12:05 crc kubenswrapper[4754]: I0105 20:12:05.174107 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7928ad3-b615-4da0-a301-7d74c2802904-trusted-ca-bundle\") pod \"console-7c45669d6f-99bq7\" (UID: \"f7928ad3-b615-4da0-a301-7d74c2802904\") " pod="openshift-console/console-7c45669d6f-99bq7" Jan 05 20:12:05 crc kubenswrapper[4754]: I0105 20:12:05.174122 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f7928ad3-b615-4da0-a301-7d74c2802904-console-config\") pod \"console-7c45669d6f-99bq7\" (UID: \"f7928ad3-b615-4da0-a301-7d74c2802904\") " pod="openshift-console/console-7c45669d6f-99bq7" Jan 05 20:12:05 crc kubenswrapper[4754]: I0105 20:12:05.174138 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f7928ad3-b615-4da0-a301-7d74c2802904-console-serving-cert\") pod \"console-7c45669d6f-99bq7\" (UID: \"f7928ad3-b615-4da0-a301-7d74c2802904\") " pod="openshift-console/console-7c45669d6f-99bq7" Jan 05 20:12:05 crc kubenswrapper[4754]: I0105 20:12:05.174154 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f7928ad3-b615-4da0-a301-7d74c2802904-console-oauth-config\") pod \"console-7c45669d6f-99bq7\" (UID: \"f7928ad3-b615-4da0-a301-7d74c2802904\") " pod="openshift-console/console-7c45669d6f-99bq7" Jan 05 20:12:05 crc kubenswrapper[4754]: I0105 20:12:05.275522 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f7928ad3-b615-4da0-a301-7d74c2802904-oauth-serving-cert\") pod \"console-7c45669d6f-99bq7\" (UID: \"f7928ad3-b615-4da0-a301-7d74c2802904\") " pod="openshift-console/console-7c45669d6f-99bq7" Jan 05 20:12:05 crc kubenswrapper[4754]: I0105 20:12:05.275633 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f7928ad3-b615-4da0-a301-7d74c2802904-service-ca\") pod \"console-7c45669d6f-99bq7\" (UID: \"f7928ad3-b615-4da0-a301-7d74c2802904\") " pod="openshift-console/console-7c45669d6f-99bq7" Jan 05 20:12:05 crc kubenswrapper[4754]: I0105 20:12:05.275704 4754 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7928ad3-b615-4da0-a301-7d74c2802904-trusted-ca-bundle\") pod \"console-7c45669d6f-99bq7\" (UID: \"f7928ad3-b615-4da0-a301-7d74c2802904\") " pod="openshift-console/console-7c45669d6f-99bq7" Jan 05 20:12:05 crc kubenswrapper[4754]: I0105 20:12:05.275738 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f7928ad3-b615-4da0-a301-7d74c2802904-console-config\") pod \"console-7c45669d6f-99bq7\" (UID: \"f7928ad3-b615-4da0-a301-7d74c2802904\") " pod="openshift-console/console-7c45669d6f-99bq7" Jan 05 20:12:05 crc kubenswrapper[4754]: I0105 20:12:05.275777 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f7928ad3-b615-4da0-a301-7d74c2802904-console-serving-cert\") pod \"console-7c45669d6f-99bq7\" (UID: \"f7928ad3-b615-4da0-a301-7d74c2802904\") " pod="openshift-console/console-7c45669d6f-99bq7" Jan 05 20:12:05 crc kubenswrapper[4754]: I0105 20:12:05.275810 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f7928ad3-b615-4da0-a301-7d74c2802904-console-oauth-config\") pod \"console-7c45669d6f-99bq7\" (UID: \"f7928ad3-b615-4da0-a301-7d74c2802904\") " pod="openshift-console/console-7c45669d6f-99bq7" Jan 05 20:12:05 crc kubenswrapper[4754]: I0105 20:12:05.275907 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r52h\" (UniqueName: \"kubernetes.io/projected/f7928ad3-b615-4da0-a301-7d74c2802904-kube-api-access-5r52h\") pod \"console-7c45669d6f-99bq7\" (UID: \"f7928ad3-b615-4da0-a301-7d74c2802904\") " pod="openshift-console/console-7c45669d6f-99bq7" Jan 05 20:12:05 crc kubenswrapper[4754]: I0105 20:12:05.276878 4754 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f7928ad3-b615-4da0-a301-7d74c2802904-oauth-serving-cert\") pod \"console-7c45669d6f-99bq7\" (UID: \"f7928ad3-b615-4da0-a301-7d74c2802904\") " pod="openshift-console/console-7c45669d6f-99bq7" Jan 05 20:12:05 crc kubenswrapper[4754]: I0105 20:12:05.277061 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f7928ad3-b615-4da0-a301-7d74c2802904-service-ca\") pod \"console-7c45669d6f-99bq7\" (UID: \"f7928ad3-b615-4da0-a301-7d74c2802904\") " pod="openshift-console/console-7c45669d6f-99bq7" Jan 05 20:12:05 crc kubenswrapper[4754]: I0105 20:12:05.277227 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7928ad3-b615-4da0-a301-7d74c2802904-trusted-ca-bundle\") pod \"console-7c45669d6f-99bq7\" (UID: \"f7928ad3-b615-4da0-a301-7d74c2802904\") " pod="openshift-console/console-7c45669d6f-99bq7" Jan 05 20:12:05 crc kubenswrapper[4754]: I0105 20:12:05.277246 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f7928ad3-b615-4da0-a301-7d74c2802904-console-config\") pod \"console-7c45669d6f-99bq7\" (UID: \"f7928ad3-b615-4da0-a301-7d74c2802904\") " pod="openshift-console/console-7c45669d6f-99bq7" Jan 05 20:12:05 crc kubenswrapper[4754]: I0105 20:12:05.283630 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f7928ad3-b615-4da0-a301-7d74c2802904-console-oauth-config\") pod \"console-7c45669d6f-99bq7\" (UID: \"f7928ad3-b615-4da0-a301-7d74c2802904\") " pod="openshift-console/console-7c45669d6f-99bq7" Jan 05 20:12:05 crc kubenswrapper[4754]: I0105 20:12:05.284111 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f7928ad3-b615-4da0-a301-7d74c2802904-console-serving-cert\") pod \"console-7c45669d6f-99bq7\" (UID: \"f7928ad3-b615-4da0-a301-7d74c2802904\") " pod="openshift-console/console-7c45669d6f-99bq7" Jan 05 20:12:05 crc kubenswrapper[4754]: I0105 20:12:05.296226 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r52h\" (UniqueName: \"kubernetes.io/projected/f7928ad3-b615-4da0-a301-7d74c2802904-kube-api-access-5r52h\") pod \"console-7c45669d6f-99bq7\" (UID: \"f7928ad3-b615-4da0-a301-7d74c2802904\") " pod="openshift-console/console-7c45669d6f-99bq7" Jan 05 20:12:05 crc kubenswrapper[4754]: I0105 20:12:05.456110 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7c45669d6f-99bq7" Jan 05 20:12:05 crc kubenswrapper[4754]: I0105 20:12:05.746763 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7c45669d6f-99bq7"] Jan 05 20:12:05 crc kubenswrapper[4754]: I0105 20:12:05.908214 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7c45669d6f-99bq7" event={"ID":"f7928ad3-b615-4da0-a301-7d74c2802904","Type":"ContainerStarted","Data":"506bca2cc45e23fe62d22ce7d30a50d5c9e071ec9009b0b6fe66d613dceabf3e"} Jan 05 20:12:09 crc kubenswrapper[4754]: I0105 20:12:09.940293 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7c45669d6f-99bq7" event={"ID":"f7928ad3-b615-4da0-a301-7d74c2802904","Type":"ContainerStarted","Data":"a07051b386d5b69a0ceb3f17576b8dece1fd256c9adf95b7d00e200c49a3bfd0"} Jan 05 20:12:09 crc kubenswrapper[4754]: I0105 20:12:09.967712 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7c45669d6f-99bq7" podStartSLOduration=4.967692043 podStartE2EDuration="4.967692043s" podCreationTimestamp="2026-01-05 20:12:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:12:09.967511619 +0000 UTC m=+416.676695533" watchObservedRunningTime="2026-01-05 20:12:09.967692043 +0000 UTC m=+416.676875937" Jan 05 20:12:18 crc kubenswrapper[4754]: I0105 20:12:15.457367 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7c45669d6f-99bq7" Jan 05 20:12:18 crc kubenswrapper[4754]: I0105 20:12:15.463969 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7c45669d6f-99bq7" Jan 05 20:12:18 crc kubenswrapper[4754]: I0105 20:12:15.470117 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7c45669d6f-99bq7" Jan 05 20:12:18 crc kubenswrapper[4754]: I0105 20:12:15.997415 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7c45669d6f-99bq7" Jan 05 20:12:18 crc kubenswrapper[4754]: I0105 20:12:16.083023 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6dff6fbd94-bkrpf"] Jan 05 20:12:41 crc kubenswrapper[4754]: I0105 20:12:41.148752 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-6dff6fbd94-bkrpf" podUID="2b41f989-d795-4968-a73f-920b4350d3c7" containerName="console" containerID="cri-o://c92951d47ccf6368745bc8f8c8cfbc73d26c265743d9c9e9cc278f26282b29f1" gracePeriod=15 Jan 05 20:12:41 crc kubenswrapper[4754]: I0105 20:12:41.553770 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6dff6fbd94-bkrpf_2b41f989-d795-4968-a73f-920b4350d3c7/console/0.log" Jan 05 20:12:41 crc kubenswrapper[4754]: I0105 20:12:41.554165 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6dff6fbd94-bkrpf" Jan 05 20:12:41 crc kubenswrapper[4754]: I0105 20:12:41.700152 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2b41f989-d795-4968-a73f-920b4350d3c7-console-serving-cert\") pod \"2b41f989-d795-4968-a73f-920b4350d3c7\" (UID: \"2b41f989-d795-4968-a73f-920b4350d3c7\") " Jan 05 20:12:41 crc kubenswrapper[4754]: I0105 20:12:41.700281 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42dh5\" (UniqueName: \"kubernetes.io/projected/2b41f989-d795-4968-a73f-920b4350d3c7-kube-api-access-42dh5\") pod \"2b41f989-d795-4968-a73f-920b4350d3c7\" (UID: \"2b41f989-d795-4968-a73f-920b4350d3c7\") " Jan 05 20:12:41 crc kubenswrapper[4754]: I0105 20:12:41.700328 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2b41f989-d795-4968-a73f-920b4350d3c7-service-ca\") pod \"2b41f989-d795-4968-a73f-920b4350d3c7\" (UID: \"2b41f989-d795-4968-a73f-920b4350d3c7\") " Jan 05 20:12:41 crc kubenswrapper[4754]: I0105 20:12:41.700359 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2b41f989-d795-4968-a73f-920b4350d3c7-console-config\") pod \"2b41f989-d795-4968-a73f-920b4350d3c7\" (UID: \"2b41f989-d795-4968-a73f-920b4350d3c7\") " Jan 05 20:12:41 crc kubenswrapper[4754]: I0105 20:12:41.700446 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2b41f989-d795-4968-a73f-920b4350d3c7-console-oauth-config\") pod \"2b41f989-d795-4968-a73f-920b4350d3c7\" (UID: \"2b41f989-d795-4968-a73f-920b4350d3c7\") " Jan 05 20:12:41 crc kubenswrapper[4754]: I0105 20:12:41.700481 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b41f989-d795-4968-a73f-920b4350d3c7-trusted-ca-bundle\") pod \"2b41f989-d795-4968-a73f-920b4350d3c7\" (UID: \"2b41f989-d795-4968-a73f-920b4350d3c7\") " Jan 05 20:12:41 crc kubenswrapper[4754]: I0105 20:12:41.700509 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2b41f989-d795-4968-a73f-920b4350d3c7-oauth-serving-cert\") pod \"2b41f989-d795-4968-a73f-920b4350d3c7\" (UID: \"2b41f989-d795-4968-a73f-920b4350d3c7\") " Jan 05 20:12:41 crc kubenswrapper[4754]: I0105 20:12:41.701245 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b41f989-d795-4968-a73f-920b4350d3c7-service-ca" (OuterVolumeSpecName: "service-ca") pod "2b41f989-d795-4968-a73f-920b4350d3c7" (UID: "2b41f989-d795-4968-a73f-920b4350d3c7"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:12:41 crc kubenswrapper[4754]: I0105 20:12:41.701259 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b41f989-d795-4968-a73f-920b4350d3c7-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "2b41f989-d795-4968-a73f-920b4350d3c7" (UID: "2b41f989-d795-4968-a73f-920b4350d3c7"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:12:41 crc kubenswrapper[4754]: I0105 20:12:41.701363 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b41f989-d795-4968-a73f-920b4350d3c7-console-config" (OuterVolumeSpecName: "console-config") pod "2b41f989-d795-4968-a73f-920b4350d3c7" (UID: "2b41f989-d795-4968-a73f-920b4350d3c7"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:12:41 crc kubenswrapper[4754]: I0105 20:12:41.701427 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b41f989-d795-4968-a73f-920b4350d3c7-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "2b41f989-d795-4968-a73f-920b4350d3c7" (UID: "2b41f989-d795-4968-a73f-920b4350d3c7"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:12:41 crc kubenswrapper[4754]: I0105 20:12:41.705672 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b41f989-d795-4968-a73f-920b4350d3c7-kube-api-access-42dh5" (OuterVolumeSpecName: "kube-api-access-42dh5") pod "2b41f989-d795-4968-a73f-920b4350d3c7" (UID: "2b41f989-d795-4968-a73f-920b4350d3c7"). InnerVolumeSpecName "kube-api-access-42dh5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:12:41 crc kubenswrapper[4754]: I0105 20:12:41.706192 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b41f989-d795-4968-a73f-920b4350d3c7-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "2b41f989-d795-4968-a73f-920b4350d3c7" (UID: "2b41f989-d795-4968-a73f-920b4350d3c7"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:12:41 crc kubenswrapper[4754]: I0105 20:12:41.706234 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b41f989-d795-4968-a73f-920b4350d3c7-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "2b41f989-d795-4968-a73f-920b4350d3c7" (UID: "2b41f989-d795-4968-a73f-920b4350d3c7"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:12:41 crc kubenswrapper[4754]: I0105 20:12:41.802556 4754 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2b41f989-d795-4968-a73f-920b4350d3c7-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:12:41 crc kubenswrapper[4754]: I0105 20:12:41.802608 4754 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b41f989-d795-4968-a73f-920b4350d3c7-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:12:41 crc kubenswrapper[4754]: I0105 20:12:41.802623 4754 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2b41f989-d795-4968-a73f-920b4350d3c7-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 20:12:41 crc kubenswrapper[4754]: I0105 20:12:41.802636 4754 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2b41f989-d795-4968-a73f-920b4350d3c7-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 20:12:41 crc kubenswrapper[4754]: I0105 20:12:41.802647 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42dh5\" (UniqueName: \"kubernetes.io/projected/2b41f989-d795-4968-a73f-920b4350d3c7-kube-api-access-42dh5\") on node \"crc\" DevicePath \"\"" Jan 05 20:12:41 crc kubenswrapper[4754]: I0105 20:12:41.802661 4754 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2b41f989-d795-4968-a73f-920b4350d3c7-service-ca\") on node \"crc\" DevicePath \"\"" Jan 05 20:12:41 crc kubenswrapper[4754]: I0105 20:12:41.802673 4754 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2b41f989-d795-4968-a73f-920b4350d3c7-console-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:12:42 crc 
kubenswrapper[4754]: I0105 20:12:42.188629 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6dff6fbd94-bkrpf_2b41f989-d795-4968-a73f-920b4350d3c7/console/0.log" Jan 05 20:12:42 crc kubenswrapper[4754]: I0105 20:12:42.188745 4754 generic.go:334] "Generic (PLEG): container finished" podID="2b41f989-d795-4968-a73f-920b4350d3c7" containerID="c92951d47ccf6368745bc8f8c8cfbc73d26c265743d9c9e9cc278f26282b29f1" exitCode=2 Jan 05 20:12:42 crc kubenswrapper[4754]: I0105 20:12:42.188802 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6dff6fbd94-bkrpf" event={"ID":"2b41f989-d795-4968-a73f-920b4350d3c7","Type":"ContainerDied","Data":"c92951d47ccf6368745bc8f8c8cfbc73d26c265743d9c9e9cc278f26282b29f1"} Jan 05 20:12:42 crc kubenswrapper[4754]: I0105 20:12:42.188891 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6dff6fbd94-bkrpf" event={"ID":"2b41f989-d795-4968-a73f-920b4350d3c7","Type":"ContainerDied","Data":"525044c2c47e9c5fbd57ae7014cc7a3d6a92efe0727250e48d7857c8b0480642"} Jan 05 20:12:42 crc kubenswrapper[4754]: I0105 20:12:42.188939 4754 scope.go:117] "RemoveContainer" containerID="c92951d47ccf6368745bc8f8c8cfbc73d26c265743d9c9e9cc278f26282b29f1" Jan 05 20:12:42 crc kubenswrapper[4754]: I0105 20:12:42.188998 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6dff6fbd94-bkrpf" Jan 05 20:12:42 crc kubenswrapper[4754]: I0105 20:12:42.219793 4754 scope.go:117] "RemoveContainer" containerID="c92951d47ccf6368745bc8f8c8cfbc73d26c265743d9c9e9cc278f26282b29f1" Jan 05 20:12:42 crc kubenswrapper[4754]: E0105 20:12:42.220671 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c92951d47ccf6368745bc8f8c8cfbc73d26c265743d9c9e9cc278f26282b29f1\": container with ID starting with c92951d47ccf6368745bc8f8c8cfbc73d26c265743d9c9e9cc278f26282b29f1 not found: ID does not exist" containerID="c92951d47ccf6368745bc8f8c8cfbc73d26c265743d9c9e9cc278f26282b29f1" Jan 05 20:12:42 crc kubenswrapper[4754]: I0105 20:12:42.220738 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c92951d47ccf6368745bc8f8c8cfbc73d26c265743d9c9e9cc278f26282b29f1"} err="failed to get container status \"c92951d47ccf6368745bc8f8c8cfbc73d26c265743d9c9e9cc278f26282b29f1\": rpc error: code = NotFound desc = could not find container \"c92951d47ccf6368745bc8f8c8cfbc73d26c265743d9c9e9cc278f26282b29f1\": container with ID starting with c92951d47ccf6368745bc8f8c8cfbc73d26c265743d9c9e9cc278f26282b29f1 not found: ID does not exist" Jan 05 20:12:42 crc kubenswrapper[4754]: I0105 20:12:42.244162 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6dff6fbd94-bkrpf"] Jan 05 20:12:42 crc kubenswrapper[4754]: I0105 20:12:42.249167 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-6dff6fbd94-bkrpf"] Jan 05 20:12:43 crc kubenswrapper[4754]: I0105 20:12:43.600471 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b41f989-d795-4968-a73f-920b4350d3c7" path="/var/lib/kubelet/pods/2b41f989-d795-4968-a73f-920b4350d3c7/volumes" Jan 05 20:13:48 crc kubenswrapper[4754]: I0105 20:13:48.109018 4754 patch_prober.go:28] interesting 
pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 20:13:48 crc kubenswrapper[4754]: I0105 20:13:48.110173 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 20:14:18 crc kubenswrapper[4754]: I0105 20:14:18.109443 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 20:14:18 crc kubenswrapper[4754]: I0105 20:14:18.110571 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 20:14:48 crc kubenswrapper[4754]: I0105 20:14:48.108940 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 20:14:48 crc kubenswrapper[4754]: I0105 20:14:48.109672 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 20:14:48 crc kubenswrapper[4754]: I0105 20:14:48.109738 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" Jan 05 20:14:48 crc kubenswrapper[4754]: I0105 20:14:48.110516 4754 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"195c8d8797ae7467acd40c604546092b92a46f48a1381375300062c8b853d320"} pod="openshift-machine-config-operator/machine-config-daemon-pkzls" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 05 20:14:48 crc kubenswrapper[4754]: I0105 20:14:48.110616 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" containerID="cri-o://195c8d8797ae7467acd40c604546092b92a46f48a1381375300062c8b853d320" gracePeriod=600 Jan 05 20:14:48 crc kubenswrapper[4754]: E0105 20:14:48.181192 4754 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ce145f2_f010_4086_963c_23e68ff9e280.slice/crio-conmon-195c8d8797ae7467acd40c604546092b92a46f48a1381375300062c8b853d320.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ce145f2_f010_4086_963c_23e68ff9e280.slice/crio-195c8d8797ae7467acd40c604546092b92a46f48a1381375300062c8b853d320.scope\": RecentStats: unable to find data in memory cache]" Jan 05 20:14:48 crc kubenswrapper[4754]: E0105 20:14:48.181322 4754 cadvisor_stats_provider.go:516] "Partial failure issuing 
cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ce145f2_f010_4086_963c_23e68ff9e280.slice/crio-195c8d8797ae7467acd40c604546092b92a46f48a1381375300062c8b853d320.scope\": RecentStats: unable to find data in memory cache]" Jan 05 20:14:49 crc kubenswrapper[4754]: I0105 20:14:49.233916 4754 generic.go:334] "Generic (PLEG): container finished" podID="1ce145f2-f010-4086-963c-23e68ff9e280" containerID="195c8d8797ae7467acd40c604546092b92a46f48a1381375300062c8b853d320" exitCode=0 Jan 05 20:14:49 crc kubenswrapper[4754]: I0105 20:14:49.234149 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" event={"ID":"1ce145f2-f010-4086-963c-23e68ff9e280","Type":"ContainerDied","Data":"195c8d8797ae7467acd40c604546092b92a46f48a1381375300062c8b853d320"} Jan 05 20:14:49 crc kubenswrapper[4754]: I0105 20:14:49.234338 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" event={"ID":"1ce145f2-f010-4086-963c-23e68ff9e280","Type":"ContainerStarted","Data":"1e98f521ad5f4e6f85b963d7fa920ca62316ae5085f0399160b8942987ac9d7a"} Jan 05 20:14:49 crc kubenswrapper[4754]: I0105 20:14:49.234370 4754 scope.go:117] "RemoveContainer" containerID="c1f64966f183ebad7c2a6c9fd69efae0e7933296b4a18be79d5bf6fe79950c91" Jan 05 20:15:00 crc kubenswrapper[4754]: I0105 20:15:00.178444 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460735-t7mv4"] Jan 05 20:15:00 crc kubenswrapper[4754]: E0105 20:15:00.179409 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b41f989-d795-4968-a73f-920b4350d3c7" containerName="console" Jan 05 20:15:00 crc kubenswrapper[4754]: I0105 20:15:00.179433 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b41f989-d795-4968-a73f-920b4350d3c7" containerName="console" Jan 05 20:15:00 crc 
kubenswrapper[4754]: I0105 20:15:00.179639 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b41f989-d795-4968-a73f-920b4350d3c7" containerName="console" Jan 05 20:15:00 crc kubenswrapper[4754]: I0105 20:15:00.180355 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460735-t7mv4" Jan 05 20:15:00 crc kubenswrapper[4754]: I0105 20:15:00.183796 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 05 20:15:00 crc kubenswrapper[4754]: I0105 20:15:00.188353 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460735-t7mv4"] Jan 05 20:15:00 crc kubenswrapper[4754]: I0105 20:15:00.192531 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 05 20:15:00 crc kubenswrapper[4754]: I0105 20:15:00.267719 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs852\" (UniqueName: \"kubernetes.io/projected/74d086b5-d537-4b86-94a3-34dd18984ee4-kube-api-access-cs852\") pod \"collect-profiles-29460735-t7mv4\" (UID: \"74d086b5-d537-4b86-94a3-34dd18984ee4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460735-t7mv4" Jan 05 20:15:00 crc kubenswrapper[4754]: I0105 20:15:00.267798 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/74d086b5-d537-4b86-94a3-34dd18984ee4-secret-volume\") pod \"collect-profiles-29460735-t7mv4\" (UID: \"74d086b5-d537-4b86-94a3-34dd18984ee4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460735-t7mv4" Jan 05 20:15:00 crc kubenswrapper[4754]: I0105 20:15:00.267826 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74d086b5-d537-4b86-94a3-34dd18984ee4-config-volume\") pod \"collect-profiles-29460735-t7mv4\" (UID: \"74d086b5-d537-4b86-94a3-34dd18984ee4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460735-t7mv4" Jan 05 20:15:00 crc kubenswrapper[4754]: I0105 20:15:00.369578 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/74d086b5-d537-4b86-94a3-34dd18984ee4-secret-volume\") pod \"collect-profiles-29460735-t7mv4\" (UID: \"74d086b5-d537-4b86-94a3-34dd18984ee4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460735-t7mv4" Jan 05 20:15:00 crc kubenswrapper[4754]: I0105 20:15:00.369633 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74d086b5-d537-4b86-94a3-34dd18984ee4-config-volume\") pod \"collect-profiles-29460735-t7mv4\" (UID: \"74d086b5-d537-4b86-94a3-34dd18984ee4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460735-t7mv4" Jan 05 20:15:00 crc kubenswrapper[4754]: I0105 20:15:00.369732 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs852\" (UniqueName: \"kubernetes.io/projected/74d086b5-d537-4b86-94a3-34dd18984ee4-kube-api-access-cs852\") pod \"collect-profiles-29460735-t7mv4\" (UID: \"74d086b5-d537-4b86-94a3-34dd18984ee4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460735-t7mv4" Jan 05 20:15:00 crc kubenswrapper[4754]: I0105 20:15:00.371480 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74d086b5-d537-4b86-94a3-34dd18984ee4-config-volume\") pod \"collect-profiles-29460735-t7mv4\" (UID: \"74d086b5-d537-4b86-94a3-34dd18984ee4\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29460735-t7mv4" Jan 05 20:15:00 crc kubenswrapper[4754]: I0105 20:15:00.380698 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/74d086b5-d537-4b86-94a3-34dd18984ee4-secret-volume\") pod \"collect-profiles-29460735-t7mv4\" (UID: \"74d086b5-d537-4b86-94a3-34dd18984ee4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460735-t7mv4" Jan 05 20:15:00 crc kubenswrapper[4754]: I0105 20:15:00.399038 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs852\" (UniqueName: \"kubernetes.io/projected/74d086b5-d537-4b86-94a3-34dd18984ee4-kube-api-access-cs852\") pod \"collect-profiles-29460735-t7mv4\" (UID: \"74d086b5-d537-4b86-94a3-34dd18984ee4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460735-t7mv4" Jan 05 20:15:00 crc kubenswrapper[4754]: I0105 20:15:00.502890 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460735-t7mv4" Jan 05 20:15:00 crc kubenswrapper[4754]: I0105 20:15:00.729557 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460735-t7mv4"] Jan 05 20:15:01 crc kubenswrapper[4754]: I0105 20:15:01.334033 4754 generic.go:334] "Generic (PLEG): container finished" podID="74d086b5-d537-4b86-94a3-34dd18984ee4" containerID="c2b2ec3ce83a2d059e121ec1271c585ef7269382bf42852e6c2b4950ddd4d17f" exitCode=0 Jan 05 20:15:01 crc kubenswrapper[4754]: I0105 20:15:01.334157 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460735-t7mv4" event={"ID":"74d086b5-d537-4b86-94a3-34dd18984ee4","Type":"ContainerDied","Data":"c2b2ec3ce83a2d059e121ec1271c585ef7269382bf42852e6c2b4950ddd4d17f"} Jan 05 20:15:01 crc kubenswrapper[4754]: I0105 20:15:01.334477 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460735-t7mv4" event={"ID":"74d086b5-d537-4b86-94a3-34dd18984ee4","Type":"ContainerStarted","Data":"9538f6ec37ebd42e64655d43aa221e693f3a20c9c71b6039eb0ed6b39a1a5a5f"} Jan 05 20:15:02 crc kubenswrapper[4754]: I0105 20:15:02.658723 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460735-t7mv4" Jan 05 20:15:02 crc kubenswrapper[4754]: I0105 20:15:02.811053 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cs852\" (UniqueName: \"kubernetes.io/projected/74d086b5-d537-4b86-94a3-34dd18984ee4-kube-api-access-cs852\") pod \"74d086b5-d537-4b86-94a3-34dd18984ee4\" (UID: \"74d086b5-d537-4b86-94a3-34dd18984ee4\") " Jan 05 20:15:02 crc kubenswrapper[4754]: I0105 20:15:02.811732 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/74d086b5-d537-4b86-94a3-34dd18984ee4-secret-volume\") pod \"74d086b5-d537-4b86-94a3-34dd18984ee4\" (UID: \"74d086b5-d537-4b86-94a3-34dd18984ee4\") " Jan 05 20:15:02 crc kubenswrapper[4754]: I0105 20:15:02.811882 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74d086b5-d537-4b86-94a3-34dd18984ee4-config-volume\") pod \"74d086b5-d537-4b86-94a3-34dd18984ee4\" (UID: \"74d086b5-d537-4b86-94a3-34dd18984ee4\") " Jan 05 20:15:02 crc kubenswrapper[4754]: I0105 20:15:02.812739 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74d086b5-d537-4b86-94a3-34dd18984ee4-config-volume" (OuterVolumeSpecName: "config-volume") pod "74d086b5-d537-4b86-94a3-34dd18984ee4" (UID: "74d086b5-d537-4b86-94a3-34dd18984ee4"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:15:02 crc kubenswrapper[4754]: I0105 20:15:02.812910 4754 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74d086b5-d537-4b86-94a3-34dd18984ee4-config-volume\") on node \"crc\" DevicePath \"\"" Jan 05 20:15:02 crc kubenswrapper[4754]: I0105 20:15:02.817207 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74d086b5-d537-4b86-94a3-34dd18984ee4-kube-api-access-cs852" (OuterVolumeSpecName: "kube-api-access-cs852") pod "74d086b5-d537-4b86-94a3-34dd18984ee4" (UID: "74d086b5-d537-4b86-94a3-34dd18984ee4"). InnerVolumeSpecName "kube-api-access-cs852". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:15:02 crc kubenswrapper[4754]: I0105 20:15:02.818723 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74d086b5-d537-4b86-94a3-34dd18984ee4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "74d086b5-d537-4b86-94a3-34dd18984ee4" (UID: "74d086b5-d537-4b86-94a3-34dd18984ee4"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:15:02 crc kubenswrapper[4754]: I0105 20:15:02.913493 4754 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/74d086b5-d537-4b86-94a3-34dd18984ee4-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 05 20:15:02 crc kubenswrapper[4754]: I0105 20:15:02.913531 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cs852\" (UniqueName: \"kubernetes.io/projected/74d086b5-d537-4b86-94a3-34dd18984ee4-kube-api-access-cs852\") on node \"crc\" DevicePath \"\"" Jan 05 20:15:03 crc kubenswrapper[4754]: I0105 20:15:03.355084 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460735-t7mv4" event={"ID":"74d086b5-d537-4b86-94a3-34dd18984ee4","Type":"ContainerDied","Data":"9538f6ec37ebd42e64655d43aa221e693f3a20c9c71b6039eb0ed6b39a1a5a5f"} Jan 05 20:15:03 crc kubenswrapper[4754]: I0105 20:15:03.355136 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9538f6ec37ebd42e64655d43aa221e693f3a20c9c71b6039eb0ed6b39a1a5a5f" Jan 05 20:15:03 crc kubenswrapper[4754]: I0105 20:15:03.355210 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460735-t7mv4" Jan 05 20:16:09 crc kubenswrapper[4754]: I0105 20:16:09.704129 4754 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 05 20:16:30 crc kubenswrapper[4754]: I0105 20:16:30.098846 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q46ng"] Jan 05 20:16:30 crc kubenswrapper[4754]: E0105 20:16:30.100093 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74d086b5-d537-4b86-94a3-34dd18984ee4" containerName="collect-profiles" Jan 05 20:16:30 crc kubenswrapper[4754]: I0105 20:16:30.100120 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="74d086b5-d537-4b86-94a3-34dd18984ee4" containerName="collect-profiles" Jan 05 20:16:30 crc kubenswrapper[4754]: I0105 20:16:30.100353 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="74d086b5-d537-4b86-94a3-34dd18984ee4" containerName="collect-profiles" Jan 05 20:16:30 crc kubenswrapper[4754]: I0105 20:16:30.102231 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q46ng" Jan 05 20:16:30 crc kubenswrapper[4754]: I0105 20:16:30.105451 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 05 20:16:30 crc kubenswrapper[4754]: I0105 20:16:30.109913 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q46ng"] Jan 05 20:16:30 crc kubenswrapper[4754]: I0105 20:16:30.161494 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjx5b\" (UniqueName: \"kubernetes.io/projected/73705d65-cc7d-4dd7-88ec-8d699ab40cfc-kube-api-access-tjx5b\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q46ng\" (UID: \"73705d65-cc7d-4dd7-88ec-8d699ab40cfc\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q46ng" Jan 05 20:16:30 crc kubenswrapper[4754]: I0105 20:16:30.161609 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/73705d65-cc7d-4dd7-88ec-8d699ab40cfc-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q46ng\" (UID: \"73705d65-cc7d-4dd7-88ec-8d699ab40cfc\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q46ng" Jan 05 20:16:30 crc kubenswrapper[4754]: I0105 20:16:30.161652 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/73705d65-cc7d-4dd7-88ec-8d699ab40cfc-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q46ng\" (UID: \"73705d65-cc7d-4dd7-88ec-8d699ab40cfc\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q46ng" Jan 05 20:16:30 crc kubenswrapper[4754]: 
I0105 20:16:30.262576 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/73705d65-cc7d-4dd7-88ec-8d699ab40cfc-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q46ng\" (UID: \"73705d65-cc7d-4dd7-88ec-8d699ab40cfc\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q46ng" Jan 05 20:16:30 crc kubenswrapper[4754]: I0105 20:16:30.262698 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/73705d65-cc7d-4dd7-88ec-8d699ab40cfc-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q46ng\" (UID: \"73705d65-cc7d-4dd7-88ec-8d699ab40cfc\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q46ng" Jan 05 20:16:30 crc kubenswrapper[4754]: I0105 20:16:30.262829 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjx5b\" (UniqueName: \"kubernetes.io/projected/73705d65-cc7d-4dd7-88ec-8d699ab40cfc-kube-api-access-tjx5b\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q46ng\" (UID: \"73705d65-cc7d-4dd7-88ec-8d699ab40cfc\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q46ng" Jan 05 20:16:30 crc kubenswrapper[4754]: I0105 20:16:30.263549 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/73705d65-cc7d-4dd7-88ec-8d699ab40cfc-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q46ng\" (UID: \"73705d65-cc7d-4dd7-88ec-8d699ab40cfc\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q46ng" Jan 05 20:16:30 crc kubenswrapper[4754]: I0105 20:16:30.263584 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/73705d65-cc7d-4dd7-88ec-8d699ab40cfc-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q46ng\" (UID: \"73705d65-cc7d-4dd7-88ec-8d699ab40cfc\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q46ng" Jan 05 20:16:30 crc kubenswrapper[4754]: I0105 20:16:30.302650 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjx5b\" (UniqueName: \"kubernetes.io/projected/73705d65-cc7d-4dd7-88ec-8d699ab40cfc-kube-api-access-tjx5b\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q46ng\" (UID: \"73705d65-cc7d-4dd7-88ec-8d699ab40cfc\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q46ng" Jan 05 20:16:30 crc kubenswrapper[4754]: I0105 20:16:30.431192 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q46ng" Jan 05 20:16:31 crc kubenswrapper[4754]: I0105 20:16:31.028930 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q46ng"] Jan 05 20:16:31 crc kubenswrapper[4754]: I0105 20:16:31.053320 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q46ng" event={"ID":"73705d65-cc7d-4dd7-88ec-8d699ab40cfc","Type":"ContainerStarted","Data":"a3a8b447d4f778d4bf39e9e955510b272f06a57e4b20d1bf5d50a240994c0efa"} Jan 05 20:16:32 crc kubenswrapper[4754]: I0105 20:16:32.064450 4754 generic.go:334] "Generic (PLEG): container finished" podID="73705d65-cc7d-4dd7-88ec-8d699ab40cfc" containerID="6e1d89e2baf64d7f75489db7f502d1d45bbef21088173a1da2f4ff67d0898821" exitCode=0 Jan 05 20:16:32 crc kubenswrapper[4754]: I0105 20:16:32.064523 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q46ng" event={"ID":"73705d65-cc7d-4dd7-88ec-8d699ab40cfc","Type":"ContainerDied","Data":"6e1d89e2baf64d7f75489db7f502d1d45bbef21088173a1da2f4ff67d0898821"} Jan 05 20:16:32 crc kubenswrapper[4754]: I0105 20:16:32.067473 4754 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 05 20:16:32 crc kubenswrapper[4754]: I0105 20:16:32.439804 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cjpgv"] Jan 05 20:16:32 crc kubenswrapper[4754]: I0105 20:16:32.442048 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cjpgv" Jan 05 20:16:32 crc kubenswrapper[4754]: I0105 20:16:32.453624 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cjpgv"] Jan 05 20:16:32 crc kubenswrapper[4754]: I0105 20:16:32.503081 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkkbs\" (UniqueName: \"kubernetes.io/projected/a3388bf9-4056-4946-a696-c357f23ab2e8-kube-api-access-rkkbs\") pod \"redhat-operators-cjpgv\" (UID: \"a3388bf9-4056-4946-a696-c357f23ab2e8\") " pod="openshift-marketplace/redhat-operators-cjpgv" Jan 05 20:16:32 crc kubenswrapper[4754]: I0105 20:16:32.503207 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3388bf9-4056-4946-a696-c357f23ab2e8-catalog-content\") pod \"redhat-operators-cjpgv\" (UID: \"a3388bf9-4056-4946-a696-c357f23ab2e8\") " pod="openshift-marketplace/redhat-operators-cjpgv" Jan 05 20:16:32 crc kubenswrapper[4754]: I0105 20:16:32.503257 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/a3388bf9-4056-4946-a696-c357f23ab2e8-utilities\") pod \"redhat-operators-cjpgv\" (UID: \"a3388bf9-4056-4946-a696-c357f23ab2e8\") " pod="openshift-marketplace/redhat-operators-cjpgv" Jan 05 20:16:32 crc kubenswrapper[4754]: I0105 20:16:32.605837 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkkbs\" (UniqueName: \"kubernetes.io/projected/a3388bf9-4056-4946-a696-c357f23ab2e8-kube-api-access-rkkbs\") pod \"redhat-operators-cjpgv\" (UID: \"a3388bf9-4056-4946-a696-c357f23ab2e8\") " pod="openshift-marketplace/redhat-operators-cjpgv" Jan 05 20:16:32 crc kubenswrapper[4754]: I0105 20:16:32.605950 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3388bf9-4056-4946-a696-c357f23ab2e8-catalog-content\") pod \"redhat-operators-cjpgv\" (UID: \"a3388bf9-4056-4946-a696-c357f23ab2e8\") " pod="openshift-marketplace/redhat-operators-cjpgv" Jan 05 20:16:32 crc kubenswrapper[4754]: I0105 20:16:32.605990 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3388bf9-4056-4946-a696-c357f23ab2e8-utilities\") pod \"redhat-operators-cjpgv\" (UID: \"a3388bf9-4056-4946-a696-c357f23ab2e8\") " pod="openshift-marketplace/redhat-operators-cjpgv" Jan 05 20:16:32 crc kubenswrapper[4754]: I0105 20:16:32.607356 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3388bf9-4056-4946-a696-c357f23ab2e8-catalog-content\") pod \"redhat-operators-cjpgv\" (UID: \"a3388bf9-4056-4946-a696-c357f23ab2e8\") " pod="openshift-marketplace/redhat-operators-cjpgv" Jan 05 20:16:32 crc kubenswrapper[4754]: I0105 20:16:32.607612 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/a3388bf9-4056-4946-a696-c357f23ab2e8-utilities\") pod \"redhat-operators-cjpgv\" (UID: \"a3388bf9-4056-4946-a696-c357f23ab2e8\") " pod="openshift-marketplace/redhat-operators-cjpgv" Jan 05 20:16:32 crc kubenswrapper[4754]: I0105 20:16:32.631042 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkkbs\" (UniqueName: \"kubernetes.io/projected/a3388bf9-4056-4946-a696-c357f23ab2e8-kube-api-access-rkkbs\") pod \"redhat-operators-cjpgv\" (UID: \"a3388bf9-4056-4946-a696-c357f23ab2e8\") " pod="openshift-marketplace/redhat-operators-cjpgv" Jan 05 20:16:32 crc kubenswrapper[4754]: I0105 20:16:32.775552 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cjpgv" Jan 05 20:16:32 crc kubenswrapper[4754]: I0105 20:16:32.998921 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cjpgv"] Jan 05 20:16:33 crc kubenswrapper[4754]: I0105 20:16:33.070909 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cjpgv" event={"ID":"a3388bf9-4056-4946-a696-c357f23ab2e8","Type":"ContainerStarted","Data":"66699a19897d3d4d5903b072837e4ff16f03038594acb1ccb2260f85d0219bcf"} Jan 05 20:16:34 crc kubenswrapper[4754]: I0105 20:16:34.078865 4754 generic.go:334] "Generic (PLEG): container finished" podID="a3388bf9-4056-4946-a696-c357f23ab2e8" containerID="543fa4772fb0af8a85a525436427911071980095db0a5407b22297e5972dfb53" exitCode=0 Jan 05 20:16:34 crc kubenswrapper[4754]: I0105 20:16:34.078947 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cjpgv" event={"ID":"a3388bf9-4056-4946-a696-c357f23ab2e8","Type":"ContainerDied","Data":"543fa4772fb0af8a85a525436427911071980095db0a5407b22297e5972dfb53"} Jan 05 20:16:34 crc kubenswrapper[4754]: I0105 20:16:34.081826 4754 generic.go:334] "Generic (PLEG): container finished" 
podID="73705d65-cc7d-4dd7-88ec-8d699ab40cfc" containerID="15bd1ab03805c017ca51b5dc6e57f10bc69caf45875f463dcf36f8fbaf68a6e9" exitCode=0 Jan 05 20:16:34 crc kubenswrapper[4754]: I0105 20:16:34.081899 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q46ng" event={"ID":"73705d65-cc7d-4dd7-88ec-8d699ab40cfc","Type":"ContainerDied","Data":"15bd1ab03805c017ca51b5dc6e57f10bc69caf45875f463dcf36f8fbaf68a6e9"} Jan 05 20:16:35 crc kubenswrapper[4754]: I0105 20:16:35.101777 4754 generic.go:334] "Generic (PLEG): container finished" podID="73705d65-cc7d-4dd7-88ec-8d699ab40cfc" containerID="9814169e729316bb67f2d123cd50b393ceeabf8bce1f8c9dfa462b7a5cb5cba1" exitCode=0 Jan 05 20:16:35 crc kubenswrapper[4754]: I0105 20:16:35.101886 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q46ng" event={"ID":"73705d65-cc7d-4dd7-88ec-8d699ab40cfc","Type":"ContainerDied","Data":"9814169e729316bb67f2d123cd50b393ceeabf8bce1f8c9dfa462b7a5cb5cba1"} Jan 05 20:16:35 crc kubenswrapper[4754]: I0105 20:16:35.108180 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cjpgv" event={"ID":"a3388bf9-4056-4946-a696-c357f23ab2e8","Type":"ContainerStarted","Data":"8a469ac649e6744614be77b011375bcde87fb239341e0894161f3cb08a02a50a"} Jan 05 20:16:36 crc kubenswrapper[4754]: I0105 20:16:36.119330 4754 generic.go:334] "Generic (PLEG): container finished" podID="a3388bf9-4056-4946-a696-c357f23ab2e8" containerID="8a469ac649e6744614be77b011375bcde87fb239341e0894161f3cb08a02a50a" exitCode=0 Jan 05 20:16:36 crc kubenswrapper[4754]: I0105 20:16:36.119458 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cjpgv" 
event={"ID":"a3388bf9-4056-4946-a696-c357f23ab2e8","Type":"ContainerDied","Data":"8a469ac649e6744614be77b011375bcde87fb239341e0894161f3cb08a02a50a"} Jan 05 20:16:36 crc kubenswrapper[4754]: I0105 20:16:36.418285 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q46ng" Jan 05 20:16:36 crc kubenswrapper[4754]: I0105 20:16:36.472252 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/73705d65-cc7d-4dd7-88ec-8d699ab40cfc-util\") pod \"73705d65-cc7d-4dd7-88ec-8d699ab40cfc\" (UID: \"73705d65-cc7d-4dd7-88ec-8d699ab40cfc\") " Jan 05 20:16:36 crc kubenswrapper[4754]: I0105 20:16:36.472336 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/73705d65-cc7d-4dd7-88ec-8d699ab40cfc-bundle\") pod \"73705d65-cc7d-4dd7-88ec-8d699ab40cfc\" (UID: \"73705d65-cc7d-4dd7-88ec-8d699ab40cfc\") " Jan 05 20:16:36 crc kubenswrapper[4754]: I0105 20:16:36.472446 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjx5b\" (UniqueName: \"kubernetes.io/projected/73705d65-cc7d-4dd7-88ec-8d699ab40cfc-kube-api-access-tjx5b\") pod \"73705d65-cc7d-4dd7-88ec-8d699ab40cfc\" (UID: \"73705d65-cc7d-4dd7-88ec-8d699ab40cfc\") " Jan 05 20:16:36 crc kubenswrapper[4754]: I0105 20:16:36.477876 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73705d65-cc7d-4dd7-88ec-8d699ab40cfc-bundle" (OuterVolumeSpecName: "bundle") pod "73705d65-cc7d-4dd7-88ec-8d699ab40cfc" (UID: "73705d65-cc7d-4dd7-88ec-8d699ab40cfc"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:16:36 crc kubenswrapper[4754]: I0105 20:16:36.480005 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73705d65-cc7d-4dd7-88ec-8d699ab40cfc-kube-api-access-tjx5b" (OuterVolumeSpecName: "kube-api-access-tjx5b") pod "73705d65-cc7d-4dd7-88ec-8d699ab40cfc" (UID: "73705d65-cc7d-4dd7-88ec-8d699ab40cfc"). InnerVolumeSpecName "kube-api-access-tjx5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:16:36 crc kubenswrapper[4754]: I0105 20:16:36.575023 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjx5b\" (UniqueName: \"kubernetes.io/projected/73705d65-cc7d-4dd7-88ec-8d699ab40cfc-kube-api-access-tjx5b\") on node \"crc\" DevicePath \"\"" Jan 05 20:16:36 crc kubenswrapper[4754]: I0105 20:16:36.575079 4754 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/73705d65-cc7d-4dd7-88ec-8d699ab40cfc-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:16:36 crc kubenswrapper[4754]: I0105 20:16:36.643265 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73705d65-cc7d-4dd7-88ec-8d699ab40cfc-util" (OuterVolumeSpecName: "util") pod "73705d65-cc7d-4dd7-88ec-8d699ab40cfc" (UID: "73705d65-cc7d-4dd7-88ec-8d699ab40cfc"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:16:36 crc kubenswrapper[4754]: I0105 20:16:36.677917 4754 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/73705d65-cc7d-4dd7-88ec-8d699ab40cfc-util\") on node \"crc\" DevicePath \"\"" Jan 05 20:16:37 crc kubenswrapper[4754]: I0105 20:16:37.136781 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q46ng" event={"ID":"73705d65-cc7d-4dd7-88ec-8d699ab40cfc","Type":"ContainerDied","Data":"a3a8b447d4f778d4bf39e9e955510b272f06a57e4b20d1bf5d50a240994c0efa"} Jan 05 20:16:37 crc kubenswrapper[4754]: I0105 20:16:37.136882 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3a8b447d4f778d4bf39e9e955510b272f06a57e4b20d1bf5d50a240994c0efa" Jan 05 20:16:37 crc kubenswrapper[4754]: I0105 20:16:37.136909 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q46ng" Jan 05 20:16:38 crc kubenswrapper[4754]: I0105 20:16:38.148650 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cjpgv" event={"ID":"a3388bf9-4056-4946-a696-c357f23ab2e8","Type":"ContainerStarted","Data":"8e94381c1eabf004b8048a3311762f7e6abb6aa056d832bee7a718090aaeea54"} Jan 05 20:16:38 crc kubenswrapper[4754]: I0105 20:16:38.182197 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cjpgv" podStartSLOduration=3.298330567 podStartE2EDuration="6.182169131s" podCreationTimestamp="2026-01-05 20:16:32 +0000 UTC" firstStartedPulling="2026-01-05 20:16:34.081038274 +0000 UTC m=+680.790222148" lastFinishedPulling="2026-01-05 20:16:36.964876798 +0000 UTC m=+683.674060712" observedRunningTime="2026-01-05 20:16:38.181431801 +0000 UTC m=+684.890615725" 
watchObservedRunningTime="2026-01-05 20:16:38.182169131 +0000 UTC m=+684.891353035" Jan 05 20:16:40 crc kubenswrapper[4754]: I0105 20:16:40.985154 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nnbc8"] Jan 05 20:16:40 crc kubenswrapper[4754]: I0105 20:16:40.985737 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" podUID="65d4d365-a206-444c-b906-46a645aeaaf7" containerName="ovn-controller" containerID="cri-o://9bd17ad3481cfcbeb056db17a088d9cb6a94cbe476761bdb7c920a4d7b724ae8" gracePeriod=30 Jan 05 20:16:40 crc kubenswrapper[4754]: I0105 20:16:40.985874 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" podUID="65d4d365-a206-444c-b906-46a645aeaaf7" containerName="northd" containerID="cri-o://72f1375fb2f1f0ef880bdc4251982635cb8d64df6222e4446be06dc74972ac9d" gracePeriod=30 Jan 05 20:16:40 crc kubenswrapper[4754]: I0105 20:16:40.985915 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" podUID="65d4d365-a206-444c-b906-46a645aeaaf7" containerName="kube-rbac-proxy-node" containerID="cri-o://87a442110d6e01bbbf02c516d39048dc69b3e8f5b878020d3a3d793eec0b83e5" gracePeriod=30 Jan 05 20:16:40 crc kubenswrapper[4754]: I0105 20:16:40.985875 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" podUID="65d4d365-a206-444c-b906-46a645aeaaf7" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://562018e62756e4d806e10bffd883e0d48dd06d63536c49955f69346f7330f22b" gracePeriod=30 Jan 05 20:16:40 crc kubenswrapper[4754]: I0105 20:16:40.985985 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" podUID="65d4d365-a206-444c-b906-46a645aeaaf7" containerName="ovn-acl-logging" 
containerID="cri-o://fb190592654feb1d6db07b00eb24382c65016d4a33d27ea1091b6e6055a7692b" gracePeriod=30 Jan 05 20:16:40 crc kubenswrapper[4754]: I0105 20:16:40.986115 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" podUID="65d4d365-a206-444c-b906-46a645aeaaf7" containerName="nbdb" containerID="cri-o://6676e8e9aaa9b0227a86d3578f04b0aac66e04a01e2938fe839e34ae9ea7ad74" gracePeriod=30 Jan 05 20:16:40 crc kubenswrapper[4754]: I0105 20:16:40.986077 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" podUID="65d4d365-a206-444c-b906-46a645aeaaf7" containerName="sbdb" containerID="cri-o://54b9789da0f84dc7076dfad5b77d345a92d9ccd5d6712eb5d5bd603c8f12c9bd" gracePeriod=30 Jan 05 20:16:41 crc kubenswrapper[4754]: I0105 20:16:41.031123 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" podUID="65d4d365-a206-444c-b906-46a645aeaaf7" containerName="ovnkube-controller" containerID="cri-o://f802d543d9afeaaa4701207a73c24c69aac64fd0ff6828db7268e91e7aa41a5f" gracePeriod=30 Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.213454 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nnbc8_65d4d365-a206-444c-b906-46a645aeaaf7/ovn-acl-logging/0.log" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.214941 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nnbc8_65d4d365-a206-444c-b906-46a645aeaaf7/ovn-controller/0.log" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.215623 4754 generic.go:334] "Generic (PLEG): container finished" podID="65d4d365-a206-444c-b906-46a645aeaaf7" containerID="54b9789da0f84dc7076dfad5b77d345a92d9ccd5d6712eb5d5bd603c8f12c9bd" exitCode=0 Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.215645 4754 generic.go:334] "Generic 
(PLEG): container finished" podID="65d4d365-a206-444c-b906-46a645aeaaf7" containerID="6676e8e9aaa9b0227a86d3578f04b0aac66e04a01e2938fe839e34ae9ea7ad74" exitCode=0 Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.215652 4754 generic.go:334] "Generic (PLEG): container finished" podID="65d4d365-a206-444c-b906-46a645aeaaf7" containerID="72f1375fb2f1f0ef880bdc4251982635cb8d64df6222e4446be06dc74972ac9d" exitCode=0 Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.215659 4754 generic.go:334] "Generic (PLEG): container finished" podID="65d4d365-a206-444c-b906-46a645aeaaf7" containerID="9bd17ad3481cfcbeb056db17a088d9cb6a94cbe476761bdb7c920a4d7b724ae8" exitCode=143 Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.215680 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" event={"ID":"65d4d365-a206-444c-b906-46a645aeaaf7","Type":"ContainerDied","Data":"54b9789da0f84dc7076dfad5b77d345a92d9ccd5d6712eb5d5bd603c8f12c9bd"} Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.215709 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" event={"ID":"65d4d365-a206-444c-b906-46a645aeaaf7","Type":"ContainerDied","Data":"6676e8e9aaa9b0227a86d3578f04b0aac66e04a01e2938fe839e34ae9ea7ad74"} Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.215721 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" event={"ID":"65d4d365-a206-444c-b906-46a645aeaaf7","Type":"ContainerDied","Data":"72f1375fb2f1f0ef880bdc4251982635cb8d64df6222e4446be06dc74972ac9d"} Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.215730 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" event={"ID":"65d4d365-a206-444c-b906-46a645aeaaf7","Type":"ContainerDied","Data":"9bd17ad3481cfcbeb056db17a088d9cb6a94cbe476761bdb7c920a4d7b724ae8"} Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 
20:16:42.674159 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nnbc8_65d4d365-a206-444c-b906-46a645aeaaf7/ovn-acl-logging/0.log" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.674684 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nnbc8_65d4d365-a206-444c-b906-46a645aeaaf7/ovn-controller/0.log" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.675127 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.776549 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cjpgv" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.776650 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cjpgv" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.807620 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"65d4d365-a206-444c-b906-46a645aeaaf7\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.807678 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-host-cni-bin\") pod \"65d4d365-a206-444c-b906-46a645aeaaf7\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.807700 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-host-slash\") pod 
\"65d4d365-a206-444c-b906-46a645aeaaf7\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.807756 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/65d4d365-a206-444c-b906-46a645aeaaf7-ovnkube-config\") pod \"65d4d365-a206-444c-b906-46a645aeaaf7\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.807787 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-etc-openvswitch\") pod \"65d4d365-a206-444c-b906-46a645aeaaf7\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.807812 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/65d4d365-a206-444c-b906-46a645aeaaf7-ovnkube-script-lib\") pod \"65d4d365-a206-444c-b906-46a645aeaaf7\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.807744 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "65d4d365-a206-444c-b906-46a645aeaaf7" (UID: "65d4d365-a206-444c-b906-46a645aeaaf7"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.807845 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-host-run-ovn-kubernetes\") pod \"65d4d365-a206-444c-b906-46a645aeaaf7\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.807836 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "65d4d365-a206-444c-b906-46a645aeaaf7" (UID: "65d4d365-a206-444c-b906-46a645aeaaf7"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.807876 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-var-lib-openvswitch\") pod \"65d4d365-a206-444c-b906-46a645aeaaf7\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.807777 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-host-slash" (OuterVolumeSpecName: "host-slash") pod "65d4d365-a206-444c-b906-46a645aeaaf7" (UID: "65d4d365-a206-444c-b906-46a645aeaaf7"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.807793 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "65d4d365-a206-444c-b906-46a645aeaaf7" (UID: "65d4d365-a206-444c-b906-46a645aeaaf7"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.807865 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "65d4d365-a206-444c-b906-46a645aeaaf7" (UID: "65d4d365-a206-444c-b906-46a645aeaaf7"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.807916 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "65d4d365-a206-444c-b906-46a645aeaaf7" (UID: "65d4d365-a206-444c-b906-46a645aeaaf7"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.808039 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/65d4d365-a206-444c-b906-46a645aeaaf7-ovn-node-metrics-cert\") pod \"65d4d365-a206-444c-b906-46a645aeaaf7\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.808096 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-log-socket\") pod \"65d4d365-a206-444c-b906-46a645aeaaf7\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.808129 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-systemd-units\") pod \"65d4d365-a206-444c-b906-46a645aeaaf7\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.808170 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-log-socket" (OuterVolumeSpecName: "log-socket") pod "65d4d365-a206-444c-b906-46a645aeaaf7" (UID: "65d4d365-a206-444c-b906-46a645aeaaf7"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.808192 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-run-ovn\") pod \"65d4d365-a206-444c-b906-46a645aeaaf7\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.808195 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65d4d365-a206-444c-b906-46a645aeaaf7-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "65d4d365-a206-444c-b906-46a645aeaaf7" (UID: "65d4d365-a206-444c-b906-46a645aeaaf7"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.808201 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "65d4d365-a206-444c-b906-46a645aeaaf7" (UID: "65d4d365-a206-444c-b906-46a645aeaaf7"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.808217 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-run-systemd\") pod \"65d4d365-a206-444c-b906-46a645aeaaf7\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.808237 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "65d4d365-a206-444c-b906-46a645aeaaf7" (UID: "65d4d365-a206-444c-b906-46a645aeaaf7"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.808256 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fq6mk\" (UniqueName: \"kubernetes.io/projected/65d4d365-a206-444c-b906-46a645aeaaf7-kube-api-access-fq6mk\") pod \"65d4d365-a206-444c-b906-46a645aeaaf7\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.808275 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-host-kubelet\") pod \"65d4d365-a206-444c-b906-46a645aeaaf7\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.808310 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/65d4d365-a206-444c-b906-46a645aeaaf7-env-overrides\") pod \"65d4d365-a206-444c-b906-46a645aeaaf7\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.808395 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-host-cni-netd\") pod \"65d4d365-a206-444c-b906-46a645aeaaf7\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.808417 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-run-openvswitch\") pod \"65d4d365-a206-444c-b906-46a645aeaaf7\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.808434 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" 
(UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-host-run-netns\") pod \"65d4d365-a206-444c-b906-46a645aeaaf7\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.808459 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-node-log\") pod \"65d4d365-a206-444c-b906-46a645aeaaf7\" (UID: \"65d4d365-a206-444c-b906-46a645aeaaf7\") " Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.808481 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65d4d365-a206-444c-b906-46a645aeaaf7-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "65d4d365-a206-444c-b906-46a645aeaaf7" (UID: "65d4d365-a206-444c-b906-46a645aeaaf7"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.808521 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "65d4d365-a206-444c-b906-46a645aeaaf7" (UID: "65d4d365-a206-444c-b906-46a645aeaaf7"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.808575 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "65d4d365-a206-444c-b906-46a645aeaaf7" (UID: "65d4d365-a206-444c-b906-46a645aeaaf7"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.808588 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "65d4d365-a206-444c-b906-46a645aeaaf7" (UID: "65d4d365-a206-444c-b906-46a645aeaaf7"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.808649 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "65d4d365-a206-444c-b906-46a645aeaaf7" (UID: "65d4d365-a206-444c-b906-46a645aeaaf7"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.808745 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-node-log" (OuterVolumeSpecName: "node-log") pod "65d4d365-a206-444c-b906-46a645aeaaf7" (UID: "65d4d365-a206-444c-b906-46a645aeaaf7"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.808817 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65d4d365-a206-444c-b906-46a645aeaaf7-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "65d4d365-a206-444c-b906-46a645aeaaf7" (UID: "65d4d365-a206-444c-b906-46a645aeaaf7"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.809206 4754 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.809225 4754 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/65d4d365-a206-444c-b906-46a645aeaaf7-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.809239 4754 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.809250 4754 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.809261 4754 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.809272 4754 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.809282 4754 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-node-log\") on node \"crc\" DevicePath \"\"" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.809310 4754 reconciler_common.go:293] 
"Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.809321 4754 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.809332 4754 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-host-slash\") on node \"crc\" DevicePath \"\"" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.809341 4754 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/65d4d365-a206-444c-b906-46a645aeaaf7-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.809351 4754 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/65d4d365-a206-444c-b906-46a645aeaaf7-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.809366 4754 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.809375 4754 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.809391 4754 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.809401 4754 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-log-socket\") on node \"crc\" DevicePath \"\"" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.809411 4754 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.835351 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65d4d365-a206-444c-b906-46a645aeaaf7-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "65d4d365-a206-444c-b906-46a645aeaaf7" (UID: "65d4d365-a206-444c-b906-46a645aeaaf7"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.838533 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65d4d365-a206-444c-b906-46a645aeaaf7-kube-api-access-fq6mk" (OuterVolumeSpecName: "kube-api-access-fq6mk") pod "65d4d365-a206-444c-b906-46a645aeaaf7" (UID: "65d4d365-a206-444c-b906-46a645aeaaf7"). InnerVolumeSpecName "kube-api-access-fq6mk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.849907 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "65d4d365-a206-444c-b906-46a645aeaaf7" (UID: "65d4d365-a206-444c-b906-46a645aeaaf7"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.889194 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-tm7l6"] Jan 05 20:16:42 crc kubenswrapper[4754]: E0105 20:16:42.889419 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65d4d365-a206-444c-b906-46a645aeaaf7" containerName="kubecfg-setup" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.889431 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="65d4d365-a206-444c-b906-46a645aeaaf7" containerName="kubecfg-setup" Jan 05 20:16:42 crc kubenswrapper[4754]: E0105 20:16:42.889444 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65d4d365-a206-444c-b906-46a645aeaaf7" containerName="kube-rbac-proxy-node" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.889450 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="65d4d365-a206-444c-b906-46a645aeaaf7" containerName="kube-rbac-proxy-node" Jan 05 20:16:42 crc kubenswrapper[4754]: E0105 20:16:42.889460 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65d4d365-a206-444c-b906-46a645aeaaf7" containerName="nbdb" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.889468 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="65d4d365-a206-444c-b906-46a645aeaaf7" containerName="nbdb" Jan 05 20:16:42 crc kubenswrapper[4754]: E0105 20:16:42.889475 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65d4d365-a206-444c-b906-46a645aeaaf7" containerName="sbdb" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.889480 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="65d4d365-a206-444c-b906-46a645aeaaf7" containerName="sbdb" Jan 05 20:16:42 crc kubenswrapper[4754]: E0105 20:16:42.889490 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65d4d365-a206-444c-b906-46a645aeaaf7" containerName="ovnkube-controller" Jan 05 20:16:42 crc 
kubenswrapper[4754]: I0105 20:16:42.889497 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="65d4d365-a206-444c-b906-46a645aeaaf7" containerName="ovnkube-controller" Jan 05 20:16:42 crc kubenswrapper[4754]: E0105 20:16:42.889510 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65d4d365-a206-444c-b906-46a645aeaaf7" containerName="northd" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.889516 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="65d4d365-a206-444c-b906-46a645aeaaf7" containerName="northd" Jan 05 20:16:42 crc kubenswrapper[4754]: E0105 20:16:42.889526 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65d4d365-a206-444c-b906-46a645aeaaf7" containerName="ovn-acl-logging" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.889532 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="65d4d365-a206-444c-b906-46a645aeaaf7" containerName="ovn-acl-logging" Jan 05 20:16:42 crc kubenswrapper[4754]: E0105 20:16:42.889540 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65d4d365-a206-444c-b906-46a645aeaaf7" containerName="kube-rbac-proxy-ovn-metrics" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.889546 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="65d4d365-a206-444c-b906-46a645aeaaf7" containerName="kube-rbac-proxy-ovn-metrics" Jan 05 20:16:42 crc kubenswrapper[4754]: E0105 20:16:42.889555 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65d4d365-a206-444c-b906-46a645aeaaf7" containerName="ovn-controller" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.889561 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="65d4d365-a206-444c-b906-46a645aeaaf7" containerName="ovn-controller" Jan 05 20:16:42 crc kubenswrapper[4754]: E0105 20:16:42.889570 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73705d65-cc7d-4dd7-88ec-8d699ab40cfc" containerName="util" Jan 05 20:16:42 crc 
kubenswrapper[4754]: I0105 20:16:42.889576 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="73705d65-cc7d-4dd7-88ec-8d699ab40cfc" containerName="util" Jan 05 20:16:42 crc kubenswrapper[4754]: E0105 20:16:42.889582 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73705d65-cc7d-4dd7-88ec-8d699ab40cfc" containerName="extract" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.889588 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="73705d65-cc7d-4dd7-88ec-8d699ab40cfc" containerName="extract" Jan 05 20:16:42 crc kubenswrapper[4754]: E0105 20:16:42.889597 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73705d65-cc7d-4dd7-88ec-8d699ab40cfc" containerName="pull" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.889603 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="73705d65-cc7d-4dd7-88ec-8d699ab40cfc" containerName="pull" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.889701 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="65d4d365-a206-444c-b906-46a645aeaaf7" containerName="ovn-acl-logging" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.889717 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="65d4d365-a206-444c-b906-46a645aeaaf7" containerName="northd" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.889726 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="65d4d365-a206-444c-b906-46a645aeaaf7" containerName="ovnkube-controller" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.889734 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="65d4d365-a206-444c-b906-46a645aeaaf7" containerName="kube-rbac-proxy-node" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.889743 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="65d4d365-a206-444c-b906-46a645aeaaf7" containerName="nbdb" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.889753 4754 
memory_manager.go:354] "RemoveStaleState removing state" podUID="65d4d365-a206-444c-b906-46a645aeaaf7" containerName="ovn-controller" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.889762 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="65d4d365-a206-444c-b906-46a645aeaaf7" containerName="kube-rbac-proxy-ovn-metrics" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.889771 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="73705d65-cc7d-4dd7-88ec-8d699ab40cfc" containerName="extract" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.889779 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="65d4d365-a206-444c-b906-46a645aeaaf7" containerName="sbdb" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.891677 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.910330 4754 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/65d4d365-a206-444c-b906-46a645aeaaf7-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.910358 4754 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/65d4d365-a206-444c-b906-46a645aeaaf7-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 05 20:16:42 crc kubenswrapper[4754]: I0105 20:16:42.910370 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fq6mk\" (UniqueName: \"kubernetes.io/projected/65d4d365-a206-444c-b906-46a645aeaaf7-kube-api-access-fq6mk\") on node \"crc\" DevicePath \"\"" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.010992 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/872ccf21-f97e-4899-bc61-3531536b8afb-host-cni-netd\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.011032 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/872ccf21-f97e-4899-bc61-3531536b8afb-ovnkube-config\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.011051 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/872ccf21-f97e-4899-bc61-3531536b8afb-host-slash\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.011081 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/872ccf21-f97e-4899-bc61-3531536b8afb-ovnkube-script-lib\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.011104 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/872ccf21-f97e-4899-bc61-3531536b8afb-host-run-ovn-kubernetes\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.011123 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-wvgng\" (UniqueName: \"kubernetes.io/projected/872ccf21-f97e-4899-bc61-3531536b8afb-kube-api-access-wvgng\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.011267 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/872ccf21-f97e-4899-bc61-3531536b8afb-var-lib-openvswitch\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.011308 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/872ccf21-f97e-4899-bc61-3531536b8afb-host-run-netns\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.011332 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/872ccf21-f97e-4899-bc61-3531536b8afb-systemd-units\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.011350 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/872ccf21-f97e-4899-bc61-3531536b8afb-host-kubelet\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.011580 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/872ccf21-f97e-4899-bc61-3531536b8afb-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.011674 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/872ccf21-f97e-4899-bc61-3531536b8afb-run-systemd\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.011708 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/872ccf21-f97e-4899-bc61-3531536b8afb-host-cni-bin\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.011792 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/872ccf21-f97e-4899-bc61-3531536b8afb-node-log\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.011837 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/872ccf21-f97e-4899-bc61-3531536b8afb-ovn-node-metrics-cert\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc 
kubenswrapper[4754]: I0105 20:16:43.011935 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/872ccf21-f97e-4899-bc61-3531536b8afb-run-ovn\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.011985 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/872ccf21-f97e-4899-bc61-3531536b8afb-run-openvswitch\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.012010 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/872ccf21-f97e-4899-bc61-3531536b8afb-env-overrides\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.012030 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/872ccf21-f97e-4899-bc61-3531536b8afb-log-socket\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.012060 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/872ccf21-f97e-4899-bc61-3531536b8afb-etc-openvswitch\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc 
kubenswrapper[4754]: I0105 20:16:43.113200 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/872ccf21-f97e-4899-bc61-3531536b8afb-run-systemd\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.113252 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/872ccf21-f97e-4899-bc61-3531536b8afb-host-cni-bin\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.113277 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/872ccf21-f97e-4899-bc61-3531536b8afb-node-log\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.113308 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/872ccf21-f97e-4899-bc61-3531536b8afb-ovn-node-metrics-cert\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.113336 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/872ccf21-f97e-4899-bc61-3531536b8afb-run-ovn\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.113351 4754 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/872ccf21-f97e-4899-bc61-3531536b8afb-run-openvswitch\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.113366 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/872ccf21-f97e-4899-bc61-3531536b8afb-env-overrides\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.113380 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/872ccf21-f97e-4899-bc61-3531536b8afb-log-socket\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.113394 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/872ccf21-f97e-4899-bc61-3531536b8afb-etc-openvswitch\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.113409 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/872ccf21-f97e-4899-bc61-3531536b8afb-host-cni-netd\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.113424 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/872ccf21-f97e-4899-bc61-3531536b8afb-ovnkube-config\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.113439 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/872ccf21-f97e-4899-bc61-3531536b8afb-host-slash\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.113462 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/872ccf21-f97e-4899-bc61-3531536b8afb-ovnkube-script-lib\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.113481 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/872ccf21-f97e-4899-bc61-3531536b8afb-host-run-ovn-kubernetes\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.113498 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvgng\" (UniqueName: \"kubernetes.io/projected/872ccf21-f97e-4899-bc61-3531536b8afb-kube-api-access-wvgng\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.113514 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/872ccf21-f97e-4899-bc61-3531536b8afb-var-lib-openvswitch\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.113539 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/872ccf21-f97e-4899-bc61-3531536b8afb-host-run-netns\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.113563 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/872ccf21-f97e-4899-bc61-3531536b8afb-systemd-units\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.113582 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/872ccf21-f97e-4899-bc61-3531536b8afb-host-kubelet\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.113611 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/872ccf21-f97e-4899-bc61-3531536b8afb-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.113675 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/872ccf21-f97e-4899-bc61-3531536b8afb-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.113710 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/872ccf21-f97e-4899-bc61-3531536b8afb-run-systemd\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.113734 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/872ccf21-f97e-4899-bc61-3531536b8afb-host-cni-bin\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.113754 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/872ccf21-f97e-4899-bc61-3531536b8afb-node-log\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.114237 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/872ccf21-f97e-4899-bc61-3531536b8afb-host-slash\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.114273 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/872ccf21-f97e-4899-bc61-3531536b8afb-etc-openvswitch\") pod \"ovnkube-node-tm7l6\" (UID: 
\"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.114311 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/872ccf21-f97e-4899-bc61-3531536b8afb-host-cni-netd\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.114344 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/872ccf21-f97e-4899-bc61-3531536b8afb-log-socket\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.114441 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/872ccf21-f97e-4899-bc61-3531536b8afb-run-openvswitch\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.114546 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/872ccf21-f97e-4899-bc61-3531536b8afb-run-ovn\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.114618 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/872ccf21-f97e-4899-bc61-3531536b8afb-var-lib-openvswitch\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc 
kubenswrapper[4754]: I0105 20:16:43.114645 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/872ccf21-f97e-4899-bc61-3531536b8afb-host-run-ovn-kubernetes\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.114776 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/872ccf21-f97e-4899-bc61-3531536b8afb-systemd-units\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.114890 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/872ccf21-f97e-4899-bc61-3531536b8afb-host-run-netns\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.114956 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/872ccf21-f97e-4899-bc61-3531536b8afb-ovnkube-config\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.114954 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/872ccf21-f97e-4899-bc61-3531536b8afb-host-kubelet\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.114922 4754 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/872ccf21-f97e-4899-bc61-3531536b8afb-env-overrides\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.115222 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/872ccf21-f97e-4899-bc61-3531536b8afb-ovnkube-script-lib\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.119113 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/872ccf21-f97e-4899-bc61-3531536b8afb-ovn-node-metrics-cert\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.137622 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvgng\" (UniqueName: \"kubernetes.io/projected/872ccf21-f97e-4899-bc61-3531536b8afb-kube-api-access-wvgng\") pod \"ovnkube-node-tm7l6\" (UID: \"872ccf21-f97e-4899-bc61-3531536b8afb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.210756 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.226155 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zkfjx_fd02bbe9-6d27-434c-995a-3a2ca424d245/kube-multus/0.log" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.226216 4754 generic.go:334] "Generic (PLEG): container finished" podID="fd02bbe9-6d27-434c-995a-3a2ca424d245" containerID="dfcbc6c76187a6e9e294463df3222a9a9bbe319c59aea069f4aa2430fd18f64d" exitCode=2 Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.226281 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zkfjx" event={"ID":"fd02bbe9-6d27-434c-995a-3a2ca424d245","Type":"ContainerDied","Data":"dfcbc6c76187a6e9e294463df3222a9a9bbe319c59aea069f4aa2430fd18f64d"} Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.226910 4754 scope.go:117] "RemoveContainer" containerID="dfcbc6c76187a6e9e294463df3222a9a9bbe319c59aea069f4aa2430fd18f64d" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.232730 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nnbc8_65d4d365-a206-444c-b906-46a645aeaaf7/ovn-acl-logging/0.log" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.237702 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nnbc8_65d4d365-a206-444c-b906-46a645aeaaf7/ovn-controller/0.log" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.243886 4754 generic.go:334] "Generic (PLEG): container finished" podID="65d4d365-a206-444c-b906-46a645aeaaf7" containerID="f802d543d9afeaaa4701207a73c24c69aac64fd0ff6828db7268e91e7aa41a5f" exitCode=0 Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.243923 4754 generic.go:334] "Generic (PLEG): container finished" podID="65d4d365-a206-444c-b906-46a645aeaaf7" containerID="562018e62756e4d806e10bffd883e0d48dd06d63536c49955f69346f7330f22b" exitCode=0 
Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.243934 4754 generic.go:334] "Generic (PLEG): container finished" podID="65d4d365-a206-444c-b906-46a645aeaaf7" containerID="87a442110d6e01bbbf02c516d39048dc69b3e8f5b878020d3a3d793eec0b83e5" exitCode=0 Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.243943 4754 generic.go:334] "Generic (PLEG): container finished" podID="65d4d365-a206-444c-b906-46a645aeaaf7" containerID="fb190592654feb1d6db07b00eb24382c65016d4a33d27ea1091b6e6055a7692b" exitCode=143 Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.244119 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.244178 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" event={"ID":"65d4d365-a206-444c-b906-46a645aeaaf7","Type":"ContainerDied","Data":"f802d543d9afeaaa4701207a73c24c69aac64fd0ff6828db7268e91e7aa41a5f"} Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.244219 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" event={"ID":"65d4d365-a206-444c-b906-46a645aeaaf7","Type":"ContainerDied","Data":"562018e62756e4d806e10bffd883e0d48dd06d63536c49955f69346f7330f22b"} Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.244234 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" event={"ID":"65d4d365-a206-444c-b906-46a645aeaaf7","Type":"ContainerDied","Data":"87a442110d6e01bbbf02c516d39048dc69b3e8f5b878020d3a3d793eec0b83e5"} Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.244249 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" event={"ID":"65d4d365-a206-444c-b906-46a645aeaaf7","Type":"ContainerDied","Data":"fb190592654feb1d6db07b00eb24382c65016d4a33d27ea1091b6e6055a7692b"} Jan 05 20:16:43 crc 
kubenswrapper[4754]: I0105 20:16:43.244262 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nnbc8" event={"ID":"65d4d365-a206-444c-b906-46a645aeaaf7","Type":"ContainerDied","Data":"5f78fa066f9e3030072cebb2c5e6be0dc88a586a277f6fb4c9420dc8a5133581"} Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.244310 4754 scope.go:117] "RemoveContainer" containerID="f802d543d9afeaaa4701207a73c24c69aac64fd0ff6828db7268e91e7aa41a5f" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.328523 4754 scope.go:117] "RemoveContainer" containerID="54b9789da0f84dc7076dfad5b77d345a92d9ccd5d6712eb5d5bd603c8f12c9bd" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.394558 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nnbc8"] Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.403511 4754 scope.go:117] "RemoveContainer" containerID="6676e8e9aaa9b0227a86d3578f04b0aac66e04a01e2938fe839e34ae9ea7ad74" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.428204 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nnbc8"] Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.430885 4754 scope.go:117] "RemoveContainer" containerID="72f1375fb2f1f0ef880bdc4251982635cb8d64df6222e4446be06dc74972ac9d" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.466490 4754 scope.go:117] "RemoveContainer" containerID="562018e62756e4d806e10bffd883e0d48dd06d63536c49955f69346f7330f22b" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.505354 4754 scope.go:117] "RemoveContainer" containerID="87a442110d6e01bbbf02c516d39048dc69b3e8f5b878020d3a3d793eec0b83e5" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.571665 4754 scope.go:117] "RemoveContainer" containerID="fb190592654feb1d6db07b00eb24382c65016d4a33d27ea1091b6e6055a7692b" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.591371 4754 scope.go:117] "RemoveContainer" 
containerID="9bd17ad3481cfcbeb056db17a088d9cb6a94cbe476761bdb7c920a4d7b724ae8" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.599733 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65d4d365-a206-444c-b906-46a645aeaaf7" path="/var/lib/kubelet/pods/65d4d365-a206-444c-b906-46a645aeaaf7/volumes" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.625362 4754 scope.go:117] "RemoveContainer" containerID="c2a0e5825103650cedf2de937b5d81817e48a68dbd5f22161cdf0017bc5ffd49" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.648361 4754 scope.go:117] "RemoveContainer" containerID="f802d543d9afeaaa4701207a73c24c69aac64fd0ff6828db7268e91e7aa41a5f" Jan 05 20:16:43 crc kubenswrapper[4754]: E0105 20:16:43.649022 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f802d543d9afeaaa4701207a73c24c69aac64fd0ff6828db7268e91e7aa41a5f\": container with ID starting with f802d543d9afeaaa4701207a73c24c69aac64fd0ff6828db7268e91e7aa41a5f not found: ID does not exist" containerID="f802d543d9afeaaa4701207a73c24c69aac64fd0ff6828db7268e91e7aa41a5f" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.649069 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f802d543d9afeaaa4701207a73c24c69aac64fd0ff6828db7268e91e7aa41a5f"} err="failed to get container status \"f802d543d9afeaaa4701207a73c24c69aac64fd0ff6828db7268e91e7aa41a5f\": rpc error: code = NotFound desc = could not find container \"f802d543d9afeaaa4701207a73c24c69aac64fd0ff6828db7268e91e7aa41a5f\": container with ID starting with f802d543d9afeaaa4701207a73c24c69aac64fd0ff6828db7268e91e7aa41a5f not found: ID does not exist" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.649103 4754 scope.go:117] "RemoveContainer" containerID="54b9789da0f84dc7076dfad5b77d345a92d9ccd5d6712eb5d5bd603c8f12c9bd" Jan 05 20:16:43 crc kubenswrapper[4754]: E0105 20:16:43.649623 4754 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54b9789da0f84dc7076dfad5b77d345a92d9ccd5d6712eb5d5bd603c8f12c9bd\": container with ID starting with 54b9789da0f84dc7076dfad5b77d345a92d9ccd5d6712eb5d5bd603c8f12c9bd not found: ID does not exist" containerID="54b9789da0f84dc7076dfad5b77d345a92d9ccd5d6712eb5d5bd603c8f12c9bd" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.649663 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54b9789da0f84dc7076dfad5b77d345a92d9ccd5d6712eb5d5bd603c8f12c9bd"} err="failed to get container status \"54b9789da0f84dc7076dfad5b77d345a92d9ccd5d6712eb5d5bd603c8f12c9bd\": rpc error: code = NotFound desc = could not find container \"54b9789da0f84dc7076dfad5b77d345a92d9ccd5d6712eb5d5bd603c8f12c9bd\": container with ID starting with 54b9789da0f84dc7076dfad5b77d345a92d9ccd5d6712eb5d5bd603c8f12c9bd not found: ID does not exist" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.649694 4754 scope.go:117] "RemoveContainer" containerID="6676e8e9aaa9b0227a86d3578f04b0aac66e04a01e2938fe839e34ae9ea7ad74" Jan 05 20:16:43 crc kubenswrapper[4754]: E0105 20:16:43.650273 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6676e8e9aaa9b0227a86d3578f04b0aac66e04a01e2938fe839e34ae9ea7ad74\": container with ID starting with 6676e8e9aaa9b0227a86d3578f04b0aac66e04a01e2938fe839e34ae9ea7ad74 not found: ID does not exist" containerID="6676e8e9aaa9b0227a86d3578f04b0aac66e04a01e2938fe839e34ae9ea7ad74" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.650325 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6676e8e9aaa9b0227a86d3578f04b0aac66e04a01e2938fe839e34ae9ea7ad74"} err="failed to get container status \"6676e8e9aaa9b0227a86d3578f04b0aac66e04a01e2938fe839e34ae9ea7ad74\": rpc error: code = NotFound 
desc = could not find container \"6676e8e9aaa9b0227a86d3578f04b0aac66e04a01e2938fe839e34ae9ea7ad74\": container with ID starting with 6676e8e9aaa9b0227a86d3578f04b0aac66e04a01e2938fe839e34ae9ea7ad74 not found: ID does not exist" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.650340 4754 scope.go:117] "RemoveContainer" containerID="72f1375fb2f1f0ef880bdc4251982635cb8d64df6222e4446be06dc74972ac9d" Jan 05 20:16:43 crc kubenswrapper[4754]: E0105 20:16:43.650668 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72f1375fb2f1f0ef880bdc4251982635cb8d64df6222e4446be06dc74972ac9d\": container with ID starting with 72f1375fb2f1f0ef880bdc4251982635cb8d64df6222e4446be06dc74972ac9d not found: ID does not exist" containerID="72f1375fb2f1f0ef880bdc4251982635cb8d64df6222e4446be06dc74972ac9d" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.650713 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72f1375fb2f1f0ef880bdc4251982635cb8d64df6222e4446be06dc74972ac9d"} err="failed to get container status \"72f1375fb2f1f0ef880bdc4251982635cb8d64df6222e4446be06dc74972ac9d\": rpc error: code = NotFound desc = could not find container \"72f1375fb2f1f0ef880bdc4251982635cb8d64df6222e4446be06dc74972ac9d\": container with ID starting with 72f1375fb2f1f0ef880bdc4251982635cb8d64df6222e4446be06dc74972ac9d not found: ID does not exist" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.650744 4754 scope.go:117] "RemoveContainer" containerID="562018e62756e4d806e10bffd883e0d48dd06d63536c49955f69346f7330f22b" Jan 05 20:16:43 crc kubenswrapper[4754]: E0105 20:16:43.651087 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"562018e62756e4d806e10bffd883e0d48dd06d63536c49955f69346f7330f22b\": container with ID starting with 
562018e62756e4d806e10bffd883e0d48dd06d63536c49955f69346f7330f22b not found: ID does not exist" containerID="562018e62756e4d806e10bffd883e0d48dd06d63536c49955f69346f7330f22b" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.651115 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"562018e62756e4d806e10bffd883e0d48dd06d63536c49955f69346f7330f22b"} err="failed to get container status \"562018e62756e4d806e10bffd883e0d48dd06d63536c49955f69346f7330f22b\": rpc error: code = NotFound desc = could not find container \"562018e62756e4d806e10bffd883e0d48dd06d63536c49955f69346f7330f22b\": container with ID starting with 562018e62756e4d806e10bffd883e0d48dd06d63536c49955f69346f7330f22b not found: ID does not exist" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.651134 4754 scope.go:117] "RemoveContainer" containerID="87a442110d6e01bbbf02c516d39048dc69b3e8f5b878020d3a3d793eec0b83e5" Jan 05 20:16:43 crc kubenswrapper[4754]: E0105 20:16:43.651615 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87a442110d6e01bbbf02c516d39048dc69b3e8f5b878020d3a3d793eec0b83e5\": container with ID starting with 87a442110d6e01bbbf02c516d39048dc69b3e8f5b878020d3a3d793eec0b83e5 not found: ID does not exist" containerID="87a442110d6e01bbbf02c516d39048dc69b3e8f5b878020d3a3d793eec0b83e5" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.651698 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87a442110d6e01bbbf02c516d39048dc69b3e8f5b878020d3a3d793eec0b83e5"} err="failed to get container status \"87a442110d6e01bbbf02c516d39048dc69b3e8f5b878020d3a3d793eec0b83e5\": rpc error: code = NotFound desc = could not find container \"87a442110d6e01bbbf02c516d39048dc69b3e8f5b878020d3a3d793eec0b83e5\": container with ID starting with 87a442110d6e01bbbf02c516d39048dc69b3e8f5b878020d3a3d793eec0b83e5 not found: ID does not 
exist" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.651750 4754 scope.go:117] "RemoveContainer" containerID="fb190592654feb1d6db07b00eb24382c65016d4a33d27ea1091b6e6055a7692b" Jan 05 20:16:43 crc kubenswrapper[4754]: E0105 20:16:43.652094 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb190592654feb1d6db07b00eb24382c65016d4a33d27ea1091b6e6055a7692b\": container with ID starting with fb190592654feb1d6db07b00eb24382c65016d4a33d27ea1091b6e6055a7692b not found: ID does not exist" containerID="fb190592654feb1d6db07b00eb24382c65016d4a33d27ea1091b6e6055a7692b" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.652124 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb190592654feb1d6db07b00eb24382c65016d4a33d27ea1091b6e6055a7692b"} err="failed to get container status \"fb190592654feb1d6db07b00eb24382c65016d4a33d27ea1091b6e6055a7692b\": rpc error: code = NotFound desc = could not find container \"fb190592654feb1d6db07b00eb24382c65016d4a33d27ea1091b6e6055a7692b\": container with ID starting with fb190592654feb1d6db07b00eb24382c65016d4a33d27ea1091b6e6055a7692b not found: ID does not exist" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.652148 4754 scope.go:117] "RemoveContainer" containerID="9bd17ad3481cfcbeb056db17a088d9cb6a94cbe476761bdb7c920a4d7b724ae8" Jan 05 20:16:43 crc kubenswrapper[4754]: E0105 20:16:43.652484 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bd17ad3481cfcbeb056db17a088d9cb6a94cbe476761bdb7c920a4d7b724ae8\": container with ID starting with 9bd17ad3481cfcbeb056db17a088d9cb6a94cbe476761bdb7c920a4d7b724ae8 not found: ID does not exist" containerID="9bd17ad3481cfcbeb056db17a088d9cb6a94cbe476761bdb7c920a4d7b724ae8" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.652518 4754 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bd17ad3481cfcbeb056db17a088d9cb6a94cbe476761bdb7c920a4d7b724ae8"} err="failed to get container status \"9bd17ad3481cfcbeb056db17a088d9cb6a94cbe476761bdb7c920a4d7b724ae8\": rpc error: code = NotFound desc = could not find container \"9bd17ad3481cfcbeb056db17a088d9cb6a94cbe476761bdb7c920a4d7b724ae8\": container with ID starting with 9bd17ad3481cfcbeb056db17a088d9cb6a94cbe476761bdb7c920a4d7b724ae8 not found: ID does not exist" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.652539 4754 scope.go:117] "RemoveContainer" containerID="c2a0e5825103650cedf2de937b5d81817e48a68dbd5f22161cdf0017bc5ffd49" Jan 05 20:16:43 crc kubenswrapper[4754]: E0105 20:16:43.652832 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2a0e5825103650cedf2de937b5d81817e48a68dbd5f22161cdf0017bc5ffd49\": container with ID starting with c2a0e5825103650cedf2de937b5d81817e48a68dbd5f22161cdf0017bc5ffd49 not found: ID does not exist" containerID="c2a0e5825103650cedf2de937b5d81817e48a68dbd5f22161cdf0017bc5ffd49" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.652858 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2a0e5825103650cedf2de937b5d81817e48a68dbd5f22161cdf0017bc5ffd49"} err="failed to get container status \"c2a0e5825103650cedf2de937b5d81817e48a68dbd5f22161cdf0017bc5ffd49\": rpc error: code = NotFound desc = could not find container \"c2a0e5825103650cedf2de937b5d81817e48a68dbd5f22161cdf0017bc5ffd49\": container with ID starting with c2a0e5825103650cedf2de937b5d81817e48a68dbd5f22161cdf0017bc5ffd49 not found: ID does not exist" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.652877 4754 scope.go:117] "RemoveContainer" containerID="f802d543d9afeaaa4701207a73c24c69aac64fd0ff6828db7268e91e7aa41a5f" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.653250 4754 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f802d543d9afeaaa4701207a73c24c69aac64fd0ff6828db7268e91e7aa41a5f"} err="failed to get container status \"f802d543d9afeaaa4701207a73c24c69aac64fd0ff6828db7268e91e7aa41a5f\": rpc error: code = NotFound desc = could not find container \"f802d543d9afeaaa4701207a73c24c69aac64fd0ff6828db7268e91e7aa41a5f\": container with ID starting with f802d543d9afeaaa4701207a73c24c69aac64fd0ff6828db7268e91e7aa41a5f not found: ID does not exist" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.653317 4754 scope.go:117] "RemoveContainer" containerID="54b9789da0f84dc7076dfad5b77d345a92d9ccd5d6712eb5d5bd603c8f12c9bd" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.653592 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54b9789da0f84dc7076dfad5b77d345a92d9ccd5d6712eb5d5bd603c8f12c9bd"} err="failed to get container status \"54b9789da0f84dc7076dfad5b77d345a92d9ccd5d6712eb5d5bd603c8f12c9bd\": rpc error: code = NotFound desc = could not find container \"54b9789da0f84dc7076dfad5b77d345a92d9ccd5d6712eb5d5bd603c8f12c9bd\": container with ID starting with 54b9789da0f84dc7076dfad5b77d345a92d9ccd5d6712eb5d5bd603c8f12c9bd not found: ID does not exist" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.653620 4754 scope.go:117] "RemoveContainer" containerID="6676e8e9aaa9b0227a86d3578f04b0aac66e04a01e2938fe839e34ae9ea7ad74" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.653881 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6676e8e9aaa9b0227a86d3578f04b0aac66e04a01e2938fe839e34ae9ea7ad74"} err="failed to get container status \"6676e8e9aaa9b0227a86d3578f04b0aac66e04a01e2938fe839e34ae9ea7ad74\": rpc error: code = NotFound desc = could not find container \"6676e8e9aaa9b0227a86d3578f04b0aac66e04a01e2938fe839e34ae9ea7ad74\": container with ID starting with 
6676e8e9aaa9b0227a86d3578f04b0aac66e04a01e2938fe839e34ae9ea7ad74 not found: ID does not exist" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.653903 4754 scope.go:117] "RemoveContainer" containerID="72f1375fb2f1f0ef880bdc4251982635cb8d64df6222e4446be06dc74972ac9d" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.654170 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72f1375fb2f1f0ef880bdc4251982635cb8d64df6222e4446be06dc74972ac9d"} err="failed to get container status \"72f1375fb2f1f0ef880bdc4251982635cb8d64df6222e4446be06dc74972ac9d\": rpc error: code = NotFound desc = could not find container \"72f1375fb2f1f0ef880bdc4251982635cb8d64df6222e4446be06dc74972ac9d\": container with ID starting with 72f1375fb2f1f0ef880bdc4251982635cb8d64df6222e4446be06dc74972ac9d not found: ID does not exist" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.654190 4754 scope.go:117] "RemoveContainer" containerID="562018e62756e4d806e10bffd883e0d48dd06d63536c49955f69346f7330f22b" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.654465 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"562018e62756e4d806e10bffd883e0d48dd06d63536c49955f69346f7330f22b"} err="failed to get container status \"562018e62756e4d806e10bffd883e0d48dd06d63536c49955f69346f7330f22b\": rpc error: code = NotFound desc = could not find container \"562018e62756e4d806e10bffd883e0d48dd06d63536c49955f69346f7330f22b\": container with ID starting with 562018e62756e4d806e10bffd883e0d48dd06d63536c49955f69346f7330f22b not found: ID does not exist" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.654487 4754 scope.go:117] "RemoveContainer" containerID="87a442110d6e01bbbf02c516d39048dc69b3e8f5b878020d3a3d793eec0b83e5" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.654734 4754 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"87a442110d6e01bbbf02c516d39048dc69b3e8f5b878020d3a3d793eec0b83e5"} err="failed to get container status \"87a442110d6e01bbbf02c516d39048dc69b3e8f5b878020d3a3d793eec0b83e5\": rpc error: code = NotFound desc = could not find container \"87a442110d6e01bbbf02c516d39048dc69b3e8f5b878020d3a3d793eec0b83e5\": container with ID starting with 87a442110d6e01bbbf02c516d39048dc69b3e8f5b878020d3a3d793eec0b83e5 not found: ID does not exist" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.654759 4754 scope.go:117] "RemoveContainer" containerID="fb190592654feb1d6db07b00eb24382c65016d4a33d27ea1091b6e6055a7692b" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.655030 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb190592654feb1d6db07b00eb24382c65016d4a33d27ea1091b6e6055a7692b"} err="failed to get container status \"fb190592654feb1d6db07b00eb24382c65016d4a33d27ea1091b6e6055a7692b\": rpc error: code = NotFound desc = could not find container \"fb190592654feb1d6db07b00eb24382c65016d4a33d27ea1091b6e6055a7692b\": container with ID starting with fb190592654feb1d6db07b00eb24382c65016d4a33d27ea1091b6e6055a7692b not found: ID does not exist" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.655070 4754 scope.go:117] "RemoveContainer" containerID="9bd17ad3481cfcbeb056db17a088d9cb6a94cbe476761bdb7c920a4d7b724ae8" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.655339 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bd17ad3481cfcbeb056db17a088d9cb6a94cbe476761bdb7c920a4d7b724ae8"} err="failed to get container status \"9bd17ad3481cfcbeb056db17a088d9cb6a94cbe476761bdb7c920a4d7b724ae8\": rpc error: code = NotFound desc = could not find container \"9bd17ad3481cfcbeb056db17a088d9cb6a94cbe476761bdb7c920a4d7b724ae8\": container with ID starting with 9bd17ad3481cfcbeb056db17a088d9cb6a94cbe476761bdb7c920a4d7b724ae8 not found: ID does not 
exist" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.655362 4754 scope.go:117] "RemoveContainer" containerID="c2a0e5825103650cedf2de937b5d81817e48a68dbd5f22161cdf0017bc5ffd49" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.655629 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2a0e5825103650cedf2de937b5d81817e48a68dbd5f22161cdf0017bc5ffd49"} err="failed to get container status \"c2a0e5825103650cedf2de937b5d81817e48a68dbd5f22161cdf0017bc5ffd49\": rpc error: code = NotFound desc = could not find container \"c2a0e5825103650cedf2de937b5d81817e48a68dbd5f22161cdf0017bc5ffd49\": container with ID starting with c2a0e5825103650cedf2de937b5d81817e48a68dbd5f22161cdf0017bc5ffd49 not found: ID does not exist" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.655650 4754 scope.go:117] "RemoveContainer" containerID="f802d543d9afeaaa4701207a73c24c69aac64fd0ff6828db7268e91e7aa41a5f" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.655897 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f802d543d9afeaaa4701207a73c24c69aac64fd0ff6828db7268e91e7aa41a5f"} err="failed to get container status \"f802d543d9afeaaa4701207a73c24c69aac64fd0ff6828db7268e91e7aa41a5f\": rpc error: code = NotFound desc = could not find container \"f802d543d9afeaaa4701207a73c24c69aac64fd0ff6828db7268e91e7aa41a5f\": container with ID starting with f802d543d9afeaaa4701207a73c24c69aac64fd0ff6828db7268e91e7aa41a5f not found: ID does not exist" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.655917 4754 scope.go:117] "RemoveContainer" containerID="54b9789da0f84dc7076dfad5b77d345a92d9ccd5d6712eb5d5bd603c8f12c9bd" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.656171 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54b9789da0f84dc7076dfad5b77d345a92d9ccd5d6712eb5d5bd603c8f12c9bd"} err="failed to get container status 
\"54b9789da0f84dc7076dfad5b77d345a92d9ccd5d6712eb5d5bd603c8f12c9bd\": rpc error: code = NotFound desc = could not find container \"54b9789da0f84dc7076dfad5b77d345a92d9ccd5d6712eb5d5bd603c8f12c9bd\": container with ID starting with 54b9789da0f84dc7076dfad5b77d345a92d9ccd5d6712eb5d5bd603c8f12c9bd not found: ID does not exist" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.656210 4754 scope.go:117] "RemoveContainer" containerID="6676e8e9aaa9b0227a86d3578f04b0aac66e04a01e2938fe839e34ae9ea7ad74" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.656532 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6676e8e9aaa9b0227a86d3578f04b0aac66e04a01e2938fe839e34ae9ea7ad74"} err="failed to get container status \"6676e8e9aaa9b0227a86d3578f04b0aac66e04a01e2938fe839e34ae9ea7ad74\": rpc error: code = NotFound desc = could not find container \"6676e8e9aaa9b0227a86d3578f04b0aac66e04a01e2938fe839e34ae9ea7ad74\": container with ID starting with 6676e8e9aaa9b0227a86d3578f04b0aac66e04a01e2938fe839e34ae9ea7ad74 not found: ID does not exist" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.656553 4754 scope.go:117] "RemoveContainer" containerID="72f1375fb2f1f0ef880bdc4251982635cb8d64df6222e4446be06dc74972ac9d" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.656824 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72f1375fb2f1f0ef880bdc4251982635cb8d64df6222e4446be06dc74972ac9d"} err="failed to get container status \"72f1375fb2f1f0ef880bdc4251982635cb8d64df6222e4446be06dc74972ac9d\": rpc error: code = NotFound desc = could not find container \"72f1375fb2f1f0ef880bdc4251982635cb8d64df6222e4446be06dc74972ac9d\": container with ID starting with 72f1375fb2f1f0ef880bdc4251982635cb8d64df6222e4446be06dc74972ac9d not found: ID does not exist" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.656868 4754 scope.go:117] "RemoveContainer" 
containerID="562018e62756e4d806e10bffd883e0d48dd06d63536c49955f69346f7330f22b" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.657225 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"562018e62756e4d806e10bffd883e0d48dd06d63536c49955f69346f7330f22b"} err="failed to get container status \"562018e62756e4d806e10bffd883e0d48dd06d63536c49955f69346f7330f22b\": rpc error: code = NotFound desc = could not find container \"562018e62756e4d806e10bffd883e0d48dd06d63536c49955f69346f7330f22b\": container with ID starting with 562018e62756e4d806e10bffd883e0d48dd06d63536c49955f69346f7330f22b not found: ID does not exist" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.657248 4754 scope.go:117] "RemoveContainer" containerID="87a442110d6e01bbbf02c516d39048dc69b3e8f5b878020d3a3d793eec0b83e5" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.657562 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87a442110d6e01bbbf02c516d39048dc69b3e8f5b878020d3a3d793eec0b83e5"} err="failed to get container status \"87a442110d6e01bbbf02c516d39048dc69b3e8f5b878020d3a3d793eec0b83e5\": rpc error: code = NotFound desc = could not find container \"87a442110d6e01bbbf02c516d39048dc69b3e8f5b878020d3a3d793eec0b83e5\": container with ID starting with 87a442110d6e01bbbf02c516d39048dc69b3e8f5b878020d3a3d793eec0b83e5 not found: ID does not exist" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.657588 4754 scope.go:117] "RemoveContainer" containerID="fb190592654feb1d6db07b00eb24382c65016d4a33d27ea1091b6e6055a7692b" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.657845 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb190592654feb1d6db07b00eb24382c65016d4a33d27ea1091b6e6055a7692b"} err="failed to get container status \"fb190592654feb1d6db07b00eb24382c65016d4a33d27ea1091b6e6055a7692b\": rpc error: code = NotFound desc = could 
not find container \"fb190592654feb1d6db07b00eb24382c65016d4a33d27ea1091b6e6055a7692b\": container with ID starting with fb190592654feb1d6db07b00eb24382c65016d4a33d27ea1091b6e6055a7692b not found: ID does not exist" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.657867 4754 scope.go:117] "RemoveContainer" containerID="9bd17ad3481cfcbeb056db17a088d9cb6a94cbe476761bdb7c920a4d7b724ae8" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.658255 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bd17ad3481cfcbeb056db17a088d9cb6a94cbe476761bdb7c920a4d7b724ae8"} err="failed to get container status \"9bd17ad3481cfcbeb056db17a088d9cb6a94cbe476761bdb7c920a4d7b724ae8\": rpc error: code = NotFound desc = could not find container \"9bd17ad3481cfcbeb056db17a088d9cb6a94cbe476761bdb7c920a4d7b724ae8\": container with ID starting with 9bd17ad3481cfcbeb056db17a088d9cb6a94cbe476761bdb7c920a4d7b724ae8 not found: ID does not exist" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.658346 4754 scope.go:117] "RemoveContainer" containerID="c2a0e5825103650cedf2de937b5d81817e48a68dbd5f22161cdf0017bc5ffd49" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.660403 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2a0e5825103650cedf2de937b5d81817e48a68dbd5f22161cdf0017bc5ffd49"} err="failed to get container status \"c2a0e5825103650cedf2de937b5d81817e48a68dbd5f22161cdf0017bc5ffd49\": rpc error: code = NotFound desc = could not find container \"c2a0e5825103650cedf2de937b5d81817e48a68dbd5f22161cdf0017bc5ffd49\": container with ID starting with c2a0e5825103650cedf2de937b5d81817e48a68dbd5f22161cdf0017bc5ffd49 not found: ID does not exist" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.660439 4754 scope.go:117] "RemoveContainer" containerID="f802d543d9afeaaa4701207a73c24c69aac64fd0ff6828db7268e91e7aa41a5f" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 
20:16:43.660750 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f802d543d9afeaaa4701207a73c24c69aac64fd0ff6828db7268e91e7aa41a5f"} err="failed to get container status \"f802d543d9afeaaa4701207a73c24c69aac64fd0ff6828db7268e91e7aa41a5f\": rpc error: code = NotFound desc = could not find container \"f802d543d9afeaaa4701207a73c24c69aac64fd0ff6828db7268e91e7aa41a5f\": container with ID starting with f802d543d9afeaaa4701207a73c24c69aac64fd0ff6828db7268e91e7aa41a5f not found: ID does not exist" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.660786 4754 scope.go:117] "RemoveContainer" containerID="54b9789da0f84dc7076dfad5b77d345a92d9ccd5d6712eb5d5bd603c8f12c9bd" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.661076 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54b9789da0f84dc7076dfad5b77d345a92d9ccd5d6712eb5d5bd603c8f12c9bd"} err="failed to get container status \"54b9789da0f84dc7076dfad5b77d345a92d9ccd5d6712eb5d5bd603c8f12c9bd\": rpc error: code = NotFound desc = could not find container \"54b9789da0f84dc7076dfad5b77d345a92d9ccd5d6712eb5d5bd603c8f12c9bd\": container with ID starting with 54b9789da0f84dc7076dfad5b77d345a92d9ccd5d6712eb5d5bd603c8f12c9bd not found: ID does not exist" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.661154 4754 scope.go:117] "RemoveContainer" containerID="6676e8e9aaa9b0227a86d3578f04b0aac66e04a01e2938fe839e34ae9ea7ad74" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.661617 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6676e8e9aaa9b0227a86d3578f04b0aac66e04a01e2938fe839e34ae9ea7ad74"} err="failed to get container status \"6676e8e9aaa9b0227a86d3578f04b0aac66e04a01e2938fe839e34ae9ea7ad74\": rpc error: code = NotFound desc = could not find container \"6676e8e9aaa9b0227a86d3578f04b0aac66e04a01e2938fe839e34ae9ea7ad74\": container with ID starting with 
6676e8e9aaa9b0227a86d3578f04b0aac66e04a01e2938fe839e34ae9ea7ad74 not found: ID does not exist" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.661649 4754 scope.go:117] "RemoveContainer" containerID="72f1375fb2f1f0ef880bdc4251982635cb8d64df6222e4446be06dc74972ac9d" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.661943 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72f1375fb2f1f0ef880bdc4251982635cb8d64df6222e4446be06dc74972ac9d"} err="failed to get container status \"72f1375fb2f1f0ef880bdc4251982635cb8d64df6222e4446be06dc74972ac9d\": rpc error: code = NotFound desc = could not find container \"72f1375fb2f1f0ef880bdc4251982635cb8d64df6222e4446be06dc74972ac9d\": container with ID starting with 72f1375fb2f1f0ef880bdc4251982635cb8d64df6222e4446be06dc74972ac9d not found: ID does not exist" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.661965 4754 scope.go:117] "RemoveContainer" containerID="562018e62756e4d806e10bffd883e0d48dd06d63536c49955f69346f7330f22b" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.662309 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"562018e62756e4d806e10bffd883e0d48dd06d63536c49955f69346f7330f22b"} err="failed to get container status \"562018e62756e4d806e10bffd883e0d48dd06d63536c49955f69346f7330f22b\": rpc error: code = NotFound desc = could not find container \"562018e62756e4d806e10bffd883e0d48dd06d63536c49955f69346f7330f22b\": container with ID starting with 562018e62756e4d806e10bffd883e0d48dd06d63536c49955f69346f7330f22b not found: ID does not exist" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.662342 4754 scope.go:117] "RemoveContainer" containerID="87a442110d6e01bbbf02c516d39048dc69b3e8f5b878020d3a3d793eec0b83e5" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.662657 4754 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"87a442110d6e01bbbf02c516d39048dc69b3e8f5b878020d3a3d793eec0b83e5"} err="failed to get container status \"87a442110d6e01bbbf02c516d39048dc69b3e8f5b878020d3a3d793eec0b83e5\": rpc error: code = NotFound desc = could not find container \"87a442110d6e01bbbf02c516d39048dc69b3e8f5b878020d3a3d793eec0b83e5\": container with ID starting with 87a442110d6e01bbbf02c516d39048dc69b3e8f5b878020d3a3d793eec0b83e5 not found: ID does not exist" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.662682 4754 scope.go:117] "RemoveContainer" containerID="fb190592654feb1d6db07b00eb24382c65016d4a33d27ea1091b6e6055a7692b" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.663015 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb190592654feb1d6db07b00eb24382c65016d4a33d27ea1091b6e6055a7692b"} err="failed to get container status \"fb190592654feb1d6db07b00eb24382c65016d4a33d27ea1091b6e6055a7692b\": rpc error: code = NotFound desc = could not find container \"fb190592654feb1d6db07b00eb24382c65016d4a33d27ea1091b6e6055a7692b\": container with ID starting with fb190592654feb1d6db07b00eb24382c65016d4a33d27ea1091b6e6055a7692b not found: ID does not exist" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.663041 4754 scope.go:117] "RemoveContainer" containerID="9bd17ad3481cfcbeb056db17a088d9cb6a94cbe476761bdb7c920a4d7b724ae8" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.663280 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bd17ad3481cfcbeb056db17a088d9cb6a94cbe476761bdb7c920a4d7b724ae8"} err="failed to get container status \"9bd17ad3481cfcbeb056db17a088d9cb6a94cbe476761bdb7c920a4d7b724ae8\": rpc error: code = NotFound desc = could not find container \"9bd17ad3481cfcbeb056db17a088d9cb6a94cbe476761bdb7c920a4d7b724ae8\": container with ID starting with 9bd17ad3481cfcbeb056db17a088d9cb6a94cbe476761bdb7c920a4d7b724ae8 not found: ID does not 
exist" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.663319 4754 scope.go:117] "RemoveContainer" containerID="c2a0e5825103650cedf2de937b5d81817e48a68dbd5f22161cdf0017bc5ffd49" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.663602 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2a0e5825103650cedf2de937b5d81817e48a68dbd5f22161cdf0017bc5ffd49"} err="failed to get container status \"c2a0e5825103650cedf2de937b5d81817e48a68dbd5f22161cdf0017bc5ffd49\": rpc error: code = NotFound desc = could not find container \"c2a0e5825103650cedf2de937b5d81817e48a68dbd5f22161cdf0017bc5ffd49\": container with ID starting with c2a0e5825103650cedf2de937b5d81817e48a68dbd5f22161cdf0017bc5ffd49 not found: ID does not exist" Jan 05 20:16:43 crc kubenswrapper[4754]: I0105 20:16:43.850920 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cjpgv" podUID="a3388bf9-4056-4946-a696-c357f23ab2e8" containerName="registry-server" probeResult="failure" output=< Jan 05 20:16:43 crc kubenswrapper[4754]: timeout: failed to connect service ":50051" within 1s Jan 05 20:16:43 crc kubenswrapper[4754]: > Jan 05 20:16:44 crc kubenswrapper[4754]: I0105 20:16:44.253413 4754 generic.go:334] "Generic (PLEG): container finished" podID="872ccf21-f97e-4899-bc61-3531536b8afb" containerID="00d8a785d1dc0d5b500b43910cf10b55c71753d81fdb6cb1d7941741be1c88de" exitCode=0 Jan 05 20:16:44 crc kubenswrapper[4754]: I0105 20:16:44.253484 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" event={"ID":"872ccf21-f97e-4899-bc61-3531536b8afb","Type":"ContainerDied","Data":"00d8a785d1dc0d5b500b43910cf10b55c71753d81fdb6cb1d7941741be1c88de"} Jan 05 20:16:44 crc kubenswrapper[4754]: I0105 20:16:44.253548 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" 
event={"ID":"872ccf21-f97e-4899-bc61-3531536b8afb","Type":"ContainerStarted","Data":"9f8f86ecc2f8287a9164b80a7d5d49644d670211a4b1e302230a3a0f40698df3"} Jan 05 20:16:44 crc kubenswrapper[4754]: I0105 20:16:44.256653 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zkfjx_fd02bbe9-6d27-434c-995a-3a2ca424d245/kube-multus/0.log" Jan 05 20:16:44 crc kubenswrapper[4754]: I0105 20:16:44.256746 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zkfjx" event={"ID":"fd02bbe9-6d27-434c-995a-3a2ca424d245","Type":"ContainerStarted","Data":"e58f68bd74cbe38620393315080bcbd3fa758d84ad5356e29fb66c3d94f2f0ef"} Jan 05 20:16:45 crc kubenswrapper[4754]: I0105 20:16:45.275824 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" event={"ID":"872ccf21-f97e-4899-bc61-3531536b8afb","Type":"ContainerStarted","Data":"1180c339e1c01f3a830981c7e89684107df3c89409bc659bc187e789a990d389"} Jan 05 20:16:45 crc kubenswrapper[4754]: I0105 20:16:45.276802 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" event={"ID":"872ccf21-f97e-4899-bc61-3531536b8afb","Type":"ContainerStarted","Data":"3f1633bccc1ed70413097427b32af07b9231959524af07badab4539502ee964d"} Jan 05 20:16:45 crc kubenswrapper[4754]: I0105 20:16:45.276816 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" event={"ID":"872ccf21-f97e-4899-bc61-3531536b8afb","Type":"ContainerStarted","Data":"efc4253bf78a63fc98ed91aec2bb0dc06c46fbfb2852482e372b1dbd2a3088eb"} Jan 05 20:16:45 crc kubenswrapper[4754]: I0105 20:16:45.276853 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" event={"ID":"872ccf21-f97e-4899-bc61-3531536b8afb","Type":"ContainerStarted","Data":"9ce3cdeedddfd3aeee3fc7b55004899196136ec14b99de58bdf10b8ef0e0c8c9"} Jan 05 20:16:45 crc kubenswrapper[4754]: I0105 
20:16:45.276862 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" event={"ID":"872ccf21-f97e-4899-bc61-3531536b8afb","Type":"ContainerStarted","Data":"5d21007337dc70641c6c47cdd2b59f9405af8a8081edb119e27ab01fc04f003c"} Jan 05 20:16:45 crc kubenswrapper[4754]: I0105 20:16:45.276871 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" event={"ID":"872ccf21-f97e-4899-bc61-3531536b8afb","Type":"ContainerStarted","Data":"4b735186465058a2174a9f33aee0dc64931d93e7ec36d973d04aa3798c628133"} Jan 05 20:16:47 crc kubenswrapper[4754]: I0105 20:16:47.577143 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-2v5pd"] Jan 05 20:16:47 crc kubenswrapper[4754]: I0105 20:16:47.578135 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-2v5pd" Jan 05 20:16:47 crc kubenswrapper[4754]: I0105 20:16:47.580487 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 05 20:16:47 crc kubenswrapper[4754]: I0105 20:16:47.581964 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 05 20:16:47 crc kubenswrapper[4754]: I0105 20:16:47.582502 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-cqps2" Jan 05 20:16:47 crc kubenswrapper[4754]: I0105 20:16:47.693125 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vch2j\" (UniqueName: \"kubernetes.io/projected/d67e65cb-cb5a-4721-ba27-3b40ce273ee5-kube-api-access-vch2j\") pod \"obo-prometheus-operator-68bc856cb9-2v5pd\" (UID: \"d67e65cb-cb5a-4721-ba27-3b40ce273ee5\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-2v5pd" Jan 05 
20:16:47 crc kubenswrapper[4754]: I0105 20:16:47.795837 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vch2j\" (UniqueName: \"kubernetes.io/projected/d67e65cb-cb5a-4721-ba27-3b40ce273ee5-kube-api-access-vch2j\") pod \"obo-prometheus-operator-68bc856cb9-2v5pd\" (UID: \"d67e65cb-cb5a-4721-ba27-3b40ce273ee5\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-2v5pd" Jan 05 20:16:47 crc kubenswrapper[4754]: I0105 20:16:47.827770 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vch2j\" (UniqueName: \"kubernetes.io/projected/d67e65cb-cb5a-4721-ba27-3b40ce273ee5-kube-api-access-vch2j\") pod \"obo-prometheus-operator-68bc856cb9-2v5pd\" (UID: \"d67e65cb-cb5a-4721-ba27-3b40ce273ee5\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-2v5pd" Jan 05 20:16:47 crc kubenswrapper[4754]: I0105 20:16:47.894598 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-2v5pd" Jan 05 20:16:47 crc kubenswrapper[4754]: E0105 20:16:47.934676 4754 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-2v5pd_openshift-operators_d67e65cb-cb5a-4721-ba27-3b40ce273ee5_0(3671d8e48ae1293afe3034e0d1d393f8dd79215a0eefb2c37e01d467f2ae32d9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 05 20:16:47 crc kubenswrapper[4754]: E0105 20:16:47.934769 4754 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-2v5pd_openshift-operators_d67e65cb-cb5a-4721-ba27-3b40ce273ee5_0(3671d8e48ae1293afe3034e0d1d393f8dd79215a0eefb2c37e01d467f2ae32d9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-2v5pd" Jan 05 20:16:47 crc kubenswrapper[4754]: E0105 20:16:47.934790 4754 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-2v5pd_openshift-operators_d67e65cb-cb5a-4721-ba27-3b40ce273ee5_0(3671d8e48ae1293afe3034e0d1d393f8dd79215a0eefb2c37e01d467f2ae32d9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-2v5pd" Jan 05 20:16:47 crc kubenswrapper[4754]: E0105 20:16:47.934842 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-2v5pd_openshift-operators(d67e65cb-cb5a-4721-ba27-3b40ce273ee5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-2v5pd_openshift-operators(d67e65cb-cb5a-4721-ba27-3b40ce273ee5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-2v5pd_openshift-operators_d67e65cb-cb5a-4721-ba27-3b40ce273ee5_0(3671d8e48ae1293afe3034e0d1d393f8dd79215a0eefb2c37e01d467f2ae32d9): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-2v5pd" podUID="d67e65cb-cb5a-4721-ba27-3b40ce273ee5" Jan 05 20:16:48 crc kubenswrapper[4754]: I0105 20:16:48.109090 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 20:16:48 crc kubenswrapper[4754]: I0105 20:16:48.109175 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 20:16:48 crc kubenswrapper[4754]: I0105 20:16:48.764480 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-78b6f954c6-42sdd"] Jan 05 20:16:48 crc kubenswrapper[4754]: I0105 20:16:48.775008 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78b6f954c6-42sdd" Jan 05 20:16:48 crc kubenswrapper[4754]: I0105 20:16:48.778311 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 05 20:16:48 crc kubenswrapper[4754]: I0105 20:16:48.778741 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-f2k95" Jan 05 20:16:48 crc kubenswrapper[4754]: I0105 20:16:48.782713 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-78b6f954c6-2m5zf"] Jan 05 20:16:48 crc kubenswrapper[4754]: I0105 20:16:48.783766 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78b6f954c6-2m5zf" Jan 05 20:16:48 crc kubenswrapper[4754]: I0105 20:16:48.869610 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-27lzx"] Jan 05 20:16:48 crc kubenswrapper[4754]: I0105 20:16:48.870338 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-27lzx" Jan 05 20:16:48 crc kubenswrapper[4754]: I0105 20:16:48.872240 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-2hct2" Jan 05 20:16:48 crc kubenswrapper[4754]: I0105 20:16:48.873135 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 05 20:16:48 crc kubenswrapper[4754]: I0105 20:16:48.911877 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a56084cf-6295-48a4-baec-a0fe4f306658-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-78b6f954c6-42sdd\" (UID: \"a56084cf-6295-48a4-baec-a0fe4f306658\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-78b6f954c6-42sdd" Jan 05 20:16:48 crc kubenswrapper[4754]: I0105 20:16:48.912134 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/29717236-39b9-4568-8df9-028b84d46be8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-78b6f954c6-2m5zf\" (UID: \"29717236-39b9-4568-8df9-028b84d46be8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-78b6f954c6-2m5zf" Jan 05 20:16:48 crc kubenswrapper[4754]: I0105 20:16:48.912418 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/29717236-39b9-4568-8df9-028b84d46be8-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-78b6f954c6-2m5zf\" (UID: \"29717236-39b9-4568-8df9-028b84d46be8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-78b6f954c6-2m5zf" Jan 05 20:16:48 crc kubenswrapper[4754]: I0105 20:16:48.912573 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a56084cf-6295-48a4-baec-a0fe4f306658-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-78b6f954c6-42sdd\" (UID: \"a56084cf-6295-48a4-baec-a0fe4f306658\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-78b6f954c6-42sdd" Jan 05 20:16:48 crc kubenswrapper[4754]: I0105 20:16:48.973156 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-9bgtx"] Jan 05 20:16:48 crc kubenswrapper[4754]: I0105 20:16:48.973991 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-9bgtx" Jan 05 20:16:48 crc kubenswrapper[4754]: I0105 20:16:48.976135 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-w5fdj" Jan 05 20:16:49 crc kubenswrapper[4754]: I0105 20:16:49.014081 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/29717236-39b9-4568-8df9-028b84d46be8-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-78b6f954c6-2m5zf\" (UID: \"29717236-39b9-4568-8df9-028b84d46be8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-78b6f954c6-2m5zf" Jan 05 20:16:49 crc kubenswrapper[4754]: I0105 20:16:49.014191 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a56084cf-6295-48a4-baec-a0fe4f306658-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-78b6f954c6-42sdd\" (UID: \"a56084cf-6295-48a4-baec-a0fe4f306658\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-78b6f954c6-42sdd" Jan 05 20:16:49 crc kubenswrapper[4754]: I0105 20:16:49.014222 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a56084cf-6295-48a4-baec-a0fe4f306658-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-78b6f954c6-42sdd\" (UID: \"a56084cf-6295-48a4-baec-a0fe4f306658\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-78b6f954c6-42sdd" Jan 05 20:16:49 crc kubenswrapper[4754]: I0105 20:16:49.014250 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/29717236-39b9-4568-8df9-028b84d46be8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-78b6f954c6-2m5zf\" (UID: \"29717236-39b9-4568-8df9-028b84d46be8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-78b6f954c6-2m5zf" Jan 05 20:16:49 crc kubenswrapper[4754]: I0105 20:16:49.014285 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qdd2\" (UniqueName: \"kubernetes.io/projected/96d98210-f390-413e-8fe4-96ec610d2071-kube-api-access-9qdd2\") pod \"observability-operator-59bdc8b94-27lzx\" (UID: \"96d98210-f390-413e-8fe4-96ec610d2071\") " pod="openshift-operators/observability-operator-59bdc8b94-27lzx" Jan 05 20:16:49 crc kubenswrapper[4754]: I0105 20:16:49.014351 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/96d98210-f390-413e-8fe4-96ec610d2071-observability-operator-tls\") pod \"observability-operator-59bdc8b94-27lzx\" (UID: \"96d98210-f390-413e-8fe4-96ec610d2071\") " pod="openshift-operators/observability-operator-59bdc8b94-27lzx" Jan 05 20:16:49 crc kubenswrapper[4754]: I0105 20:16:49.022969 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/29717236-39b9-4568-8df9-028b84d46be8-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-78b6f954c6-2m5zf\" (UID: 
\"29717236-39b9-4568-8df9-028b84d46be8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-78b6f954c6-2m5zf" Jan 05 20:16:49 crc kubenswrapper[4754]: I0105 20:16:49.022969 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a56084cf-6295-48a4-baec-a0fe4f306658-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-78b6f954c6-42sdd\" (UID: \"a56084cf-6295-48a4-baec-a0fe4f306658\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-78b6f954c6-42sdd" Jan 05 20:16:49 crc kubenswrapper[4754]: I0105 20:16:49.026010 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/29717236-39b9-4568-8df9-028b84d46be8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-78b6f954c6-2m5zf\" (UID: \"29717236-39b9-4568-8df9-028b84d46be8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-78b6f954c6-2m5zf" Jan 05 20:16:49 crc kubenswrapper[4754]: I0105 20:16:49.037374 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a56084cf-6295-48a4-baec-a0fe4f306658-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-78b6f954c6-42sdd\" (UID: \"a56084cf-6295-48a4-baec-a0fe4f306658\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-78b6f954c6-42sdd" Jan 05 20:16:49 crc kubenswrapper[4754]: I0105 20:16:49.093824 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78b6f954c6-42sdd" Jan 05 20:16:49 crc kubenswrapper[4754]: I0105 20:16:49.110653 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78b6f954c6-2m5zf" Jan 05 20:16:49 crc kubenswrapper[4754]: I0105 20:16:49.116116 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5l2l\" (UniqueName: \"kubernetes.io/projected/f834cfd3-fa41-4790-92c1-3b80d98241af-kube-api-access-d5l2l\") pod \"perses-operator-5bf474d74f-9bgtx\" (UID: \"f834cfd3-fa41-4790-92c1-3b80d98241af\") " pod="openshift-operators/perses-operator-5bf474d74f-9bgtx" Jan 05 20:16:49 crc kubenswrapper[4754]: I0105 20:16:49.116422 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/f834cfd3-fa41-4790-92c1-3b80d98241af-openshift-service-ca\") pod \"perses-operator-5bf474d74f-9bgtx\" (UID: \"f834cfd3-fa41-4790-92c1-3b80d98241af\") " pod="openshift-operators/perses-operator-5bf474d74f-9bgtx" Jan 05 20:16:49 crc kubenswrapper[4754]: I0105 20:16:49.116594 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qdd2\" (UniqueName: \"kubernetes.io/projected/96d98210-f390-413e-8fe4-96ec610d2071-kube-api-access-9qdd2\") pod \"observability-operator-59bdc8b94-27lzx\" (UID: \"96d98210-f390-413e-8fe4-96ec610d2071\") " pod="openshift-operators/observability-operator-59bdc8b94-27lzx" Jan 05 20:16:49 crc kubenswrapper[4754]: I0105 20:16:49.116788 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/96d98210-f390-413e-8fe4-96ec610d2071-observability-operator-tls\") pod \"observability-operator-59bdc8b94-27lzx\" (UID: \"96d98210-f390-413e-8fe4-96ec610d2071\") " pod="openshift-operators/observability-operator-59bdc8b94-27lzx" Jan 05 20:16:49 crc kubenswrapper[4754]: I0105 20:16:49.121894 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/96d98210-f390-413e-8fe4-96ec610d2071-observability-operator-tls\") pod \"observability-operator-59bdc8b94-27lzx\" (UID: \"96d98210-f390-413e-8fe4-96ec610d2071\") " pod="openshift-operators/observability-operator-59bdc8b94-27lzx" Jan 05 20:16:49 crc kubenswrapper[4754]: I0105 20:16:49.141120 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qdd2\" (UniqueName: \"kubernetes.io/projected/96d98210-f390-413e-8fe4-96ec610d2071-kube-api-access-9qdd2\") pod \"observability-operator-59bdc8b94-27lzx\" (UID: \"96d98210-f390-413e-8fe4-96ec610d2071\") " pod="openshift-operators/observability-operator-59bdc8b94-27lzx" Jan 05 20:16:49 crc kubenswrapper[4754]: E0105 20:16:49.143557 4754 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-78b6f954c6-42sdd_openshift-operators_a56084cf-6295-48a4-baec-a0fe4f306658_0(6fcb77f912a854ace8e9328f6cf8e78c75f3d16232af32fc1559e4c9a5726df8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 05 20:16:49 crc kubenswrapper[4754]: E0105 20:16:49.143620 4754 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-78b6f954c6-42sdd_openshift-operators_a56084cf-6295-48a4-baec-a0fe4f306658_0(6fcb77f912a854ace8e9328f6cf8e78c75f3d16232af32fc1559e4c9a5726df8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-78b6f954c6-42sdd" Jan 05 20:16:49 crc kubenswrapper[4754]: E0105 20:16:49.143653 4754 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-78b6f954c6-42sdd_openshift-operators_a56084cf-6295-48a4-baec-a0fe4f306658_0(6fcb77f912a854ace8e9328f6cf8e78c75f3d16232af32fc1559e4c9a5726df8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78b6f954c6-42sdd" Jan 05 20:16:49 crc kubenswrapper[4754]: E0105 20:16:49.143705 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-78b6f954c6-42sdd_openshift-operators(a56084cf-6295-48a4-baec-a0fe4f306658)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-78b6f954c6-42sdd_openshift-operators(a56084cf-6295-48a4-baec-a0fe4f306658)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-78b6f954c6-42sdd_openshift-operators_a56084cf-6295-48a4-baec-a0fe4f306658_0(6fcb77f912a854ace8e9328f6cf8e78c75f3d16232af32fc1559e4c9a5726df8): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78b6f954c6-42sdd" podUID="a56084cf-6295-48a4-baec-a0fe4f306658" Jan 05 20:16:49 crc kubenswrapper[4754]: E0105 20:16:49.164513 4754 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-78b6f954c6-2m5zf_openshift-operators_29717236-39b9-4568-8df9-028b84d46be8_0(c3fa044c974f1d9f2ed291f6ca5c38672367274999493f5a783ba88a7813dc7a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 05 20:16:49 crc kubenswrapper[4754]: E0105 20:16:49.164619 4754 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-78b6f954c6-2m5zf_openshift-operators_29717236-39b9-4568-8df9-028b84d46be8_0(c3fa044c974f1d9f2ed291f6ca5c38672367274999493f5a783ba88a7813dc7a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78b6f954c6-2m5zf" Jan 05 20:16:49 crc kubenswrapper[4754]: E0105 20:16:49.164646 4754 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-78b6f954c6-2m5zf_openshift-operators_29717236-39b9-4568-8df9-028b84d46be8_0(c3fa044c974f1d9f2ed291f6ca5c38672367274999493f5a783ba88a7813dc7a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-78b6f954c6-2m5zf" Jan 05 20:16:49 crc kubenswrapper[4754]: E0105 20:16:49.164722 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-78b6f954c6-2m5zf_openshift-operators(29717236-39b9-4568-8df9-028b84d46be8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-78b6f954c6-2m5zf_openshift-operators(29717236-39b9-4568-8df9-028b84d46be8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-78b6f954c6-2m5zf_openshift-operators_29717236-39b9-4568-8df9-028b84d46be8_0(c3fa044c974f1d9f2ed291f6ca5c38672367274999493f5a783ba88a7813dc7a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78b6f954c6-2m5zf" podUID="29717236-39b9-4568-8df9-028b84d46be8" Jan 05 20:16:49 crc kubenswrapper[4754]: I0105 20:16:49.185317 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-27lzx" Jan 05 20:16:49 crc kubenswrapper[4754]: E0105 20:16:49.206493 4754 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-27lzx_openshift-operators_96d98210-f390-413e-8fe4-96ec610d2071_0(cc463141914a6829028087b6155ecbefa9c9a8922d55484554bb8b53048ca284): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 05 20:16:49 crc kubenswrapper[4754]: E0105 20:16:49.206567 4754 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-27lzx_openshift-operators_96d98210-f390-413e-8fe4-96ec610d2071_0(cc463141914a6829028087b6155ecbefa9c9a8922d55484554bb8b53048ca284): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-27lzx" Jan 05 20:16:49 crc kubenswrapper[4754]: E0105 20:16:49.206596 4754 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-27lzx_openshift-operators_96d98210-f390-413e-8fe4-96ec610d2071_0(cc463141914a6829028087b6155ecbefa9c9a8922d55484554bb8b53048ca284): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-27lzx" Jan 05 20:16:49 crc kubenswrapper[4754]: E0105 20:16:49.206658 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-27lzx_openshift-operators(96d98210-f390-413e-8fe4-96ec610d2071)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-27lzx_openshift-operators(96d98210-f390-413e-8fe4-96ec610d2071)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-27lzx_openshift-operators_96d98210-f390-413e-8fe4-96ec610d2071_0(cc463141914a6829028087b6155ecbefa9c9a8922d55484554bb8b53048ca284): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-27lzx" podUID="96d98210-f390-413e-8fe4-96ec610d2071" Jan 05 20:16:49 crc kubenswrapper[4754]: I0105 20:16:49.218937 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/f834cfd3-fa41-4790-92c1-3b80d98241af-openshift-service-ca\") pod \"perses-operator-5bf474d74f-9bgtx\" (UID: \"f834cfd3-fa41-4790-92c1-3b80d98241af\") " pod="openshift-operators/perses-operator-5bf474d74f-9bgtx" Jan 05 20:16:49 crc kubenswrapper[4754]: I0105 20:16:49.219049 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5l2l\" (UniqueName: \"kubernetes.io/projected/f834cfd3-fa41-4790-92c1-3b80d98241af-kube-api-access-d5l2l\") pod \"perses-operator-5bf474d74f-9bgtx\" (UID: \"f834cfd3-fa41-4790-92c1-3b80d98241af\") " pod="openshift-operators/perses-operator-5bf474d74f-9bgtx" Jan 05 20:16:49 crc kubenswrapper[4754]: I0105 20:16:49.220101 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/f834cfd3-fa41-4790-92c1-3b80d98241af-openshift-service-ca\") pod \"perses-operator-5bf474d74f-9bgtx\" (UID: \"f834cfd3-fa41-4790-92c1-3b80d98241af\") " pod="openshift-operators/perses-operator-5bf474d74f-9bgtx" Jan 05 20:16:49 crc kubenswrapper[4754]: I0105 20:16:49.254919 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5l2l\" (UniqueName: \"kubernetes.io/projected/f834cfd3-fa41-4790-92c1-3b80d98241af-kube-api-access-d5l2l\") pod \"perses-operator-5bf474d74f-9bgtx\" (UID: \"f834cfd3-fa41-4790-92c1-3b80d98241af\") " pod="openshift-operators/perses-operator-5bf474d74f-9bgtx" Jan 05 20:16:49 crc kubenswrapper[4754]: I0105 20:16:49.289464 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-9bgtx" Jan 05 20:16:49 crc kubenswrapper[4754]: I0105 20:16:49.321862 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" event={"ID":"872ccf21-f97e-4899-bc61-3531536b8afb","Type":"ContainerStarted","Data":"a2f85a900549e788a63bb52fb7ff43b657b27030aa3be0e337d66295961d96dc"} Jan 05 20:16:49 crc kubenswrapper[4754]: E0105 20:16:49.330537 4754 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-9bgtx_openshift-operators_f834cfd3-fa41-4790-92c1-3b80d98241af_0(f422e9fde85d1a3d8c835f3893705effeaecfef389f9d448d81f50c5c7ddd7a8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 05 20:16:49 crc kubenswrapper[4754]: E0105 20:16:49.330604 4754 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-9bgtx_openshift-operators_f834cfd3-fa41-4790-92c1-3b80d98241af_0(f422e9fde85d1a3d8c835f3893705effeaecfef389f9d448d81f50c5c7ddd7a8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-9bgtx" Jan 05 20:16:49 crc kubenswrapper[4754]: E0105 20:16:49.330629 4754 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-9bgtx_openshift-operators_f834cfd3-fa41-4790-92c1-3b80d98241af_0(f422e9fde85d1a3d8c835f3893705effeaecfef389f9d448d81f50c5c7ddd7a8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-9bgtx" Jan 05 20:16:49 crc kubenswrapper[4754]: E0105 20:16:49.330671 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-9bgtx_openshift-operators(f834cfd3-fa41-4790-92c1-3b80d98241af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-9bgtx_openshift-operators(f834cfd3-fa41-4790-92c1-3b80d98241af)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-9bgtx_openshift-operators_f834cfd3-fa41-4790-92c1-3b80d98241af_0(f422e9fde85d1a3d8c835f3893705effeaecfef389f9d448d81f50c5c7ddd7a8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-9bgtx" podUID="f834cfd3-fa41-4790-92c1-3b80d98241af" Jan 05 20:16:52 crc kubenswrapper[4754]: I0105 20:16:52.816593 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cjpgv" Jan 05 20:16:52 crc kubenswrapper[4754]: I0105 20:16:52.860832 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cjpgv" Jan 05 20:16:53 crc kubenswrapper[4754]: I0105 20:16:53.059216 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cjpgv"] Jan 05 20:16:53 crc kubenswrapper[4754]: I0105 20:16:53.375743 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" event={"ID":"872ccf21-f97e-4899-bc61-3531536b8afb","Type":"ContainerStarted","Data":"c3e55c96183cceabc167eba8219298b2aff863ad514c5a0bf5d04e841b9f8c42"} Jan 05 20:16:53 crc kubenswrapper[4754]: I0105 20:16:53.376224 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:53 crc kubenswrapper[4754]: 
I0105 20:16:53.376244 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:53 crc kubenswrapper[4754]: I0105 20:16:53.408424 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-2v5pd"] Jan 05 20:16:53 crc kubenswrapper[4754]: I0105 20:16:53.408614 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-9bgtx"] Jan 05 20:16:53 crc kubenswrapper[4754]: I0105 20:16:53.408684 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-2v5pd" Jan 05 20:16:53 crc kubenswrapper[4754]: I0105 20:16:53.408727 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-9bgtx" Jan 05 20:16:53 crc kubenswrapper[4754]: I0105 20:16:53.409974 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-2v5pd" Jan 05 20:16:53 crc kubenswrapper[4754]: I0105 20:16:53.410273 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-9bgtx" Jan 05 20:16:53 crc kubenswrapper[4754]: I0105 20:16:53.426601 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-78b6f954c6-2m5zf"] Jan 05 20:16:53 crc kubenswrapper[4754]: I0105 20:16:53.426755 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78b6f954c6-2m5zf" Jan 05 20:16:53 crc kubenswrapper[4754]: I0105 20:16:53.427429 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78b6f954c6-2m5zf" Jan 05 20:16:53 crc kubenswrapper[4754]: I0105 20:16:53.437990 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" podStartSLOduration=11.437966993 podStartE2EDuration="11.437966993s" podCreationTimestamp="2026-01-05 20:16:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:16:53.420657501 +0000 UTC m=+700.129841375" watchObservedRunningTime="2026-01-05 20:16:53.437966993 +0000 UTC m=+700.147150867" Jan 05 20:16:53 crc kubenswrapper[4754]: I0105 20:16:53.456931 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-27lzx"] Jan 05 20:16:53 crc kubenswrapper[4754]: I0105 20:16:53.457066 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-27lzx" Jan 05 20:16:53 crc kubenswrapper[4754]: I0105 20:16:53.457553 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-27lzx" Jan 05 20:16:53 crc kubenswrapper[4754]: I0105 20:16:53.459013 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-78b6f954c6-42sdd"] Jan 05 20:16:53 crc kubenswrapper[4754]: I0105 20:16:53.459101 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78b6f954c6-42sdd" Jan 05 20:16:53 crc kubenswrapper[4754]: I0105 20:16:53.459335 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78b6f954c6-42sdd" Jan 05 20:16:53 crc kubenswrapper[4754]: I0105 20:16:53.461709 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:53 crc kubenswrapper[4754]: E0105 20:16:53.489426 4754 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-2v5pd_openshift-operators_d67e65cb-cb5a-4721-ba27-3b40ce273ee5_0(0ac5ef0984254bdc84fb469a64b62d175527987952dba65d2dbb69e7b5d74ee4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 05 20:16:53 crc kubenswrapper[4754]: E0105 20:16:53.489523 4754 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-2v5pd_openshift-operators_d67e65cb-cb5a-4721-ba27-3b40ce273ee5_0(0ac5ef0984254bdc84fb469a64b62d175527987952dba65d2dbb69e7b5d74ee4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-2v5pd" Jan 05 20:16:53 crc kubenswrapper[4754]: E0105 20:16:53.489555 4754 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-2v5pd_openshift-operators_d67e65cb-cb5a-4721-ba27-3b40ce273ee5_0(0ac5ef0984254bdc84fb469a64b62d175527987952dba65d2dbb69e7b5d74ee4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-2v5pd" Jan 05 20:16:53 crc kubenswrapper[4754]: E0105 20:16:53.489620 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-2v5pd_openshift-operators(d67e65cb-cb5a-4721-ba27-3b40ce273ee5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-2v5pd_openshift-operators(d67e65cb-cb5a-4721-ba27-3b40ce273ee5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-2v5pd_openshift-operators_d67e65cb-cb5a-4721-ba27-3b40ce273ee5_0(0ac5ef0984254bdc84fb469a64b62d175527987952dba65d2dbb69e7b5d74ee4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-2v5pd" podUID="d67e65cb-cb5a-4721-ba27-3b40ce273ee5" Jan 05 20:16:53 crc kubenswrapper[4754]: E0105 20:16:53.522194 4754 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-9bgtx_openshift-operators_f834cfd3-fa41-4790-92c1-3b80d98241af_0(9dd92d4b59660fd95561c6c84be17a97a95a7f5b57869346594b9334afa77cb7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 05 20:16:53 crc kubenswrapper[4754]: E0105 20:16:53.522336 4754 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-9bgtx_openshift-operators_f834cfd3-fa41-4790-92c1-3b80d98241af_0(9dd92d4b59660fd95561c6c84be17a97a95a7f5b57869346594b9334afa77cb7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-9bgtx" Jan 05 20:16:53 crc kubenswrapper[4754]: E0105 20:16:53.522372 4754 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-9bgtx_openshift-operators_f834cfd3-fa41-4790-92c1-3b80d98241af_0(9dd92d4b59660fd95561c6c84be17a97a95a7f5b57869346594b9334afa77cb7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-9bgtx" Jan 05 20:16:53 crc kubenswrapper[4754]: E0105 20:16:53.522503 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-9bgtx_openshift-operators(f834cfd3-fa41-4790-92c1-3b80d98241af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-9bgtx_openshift-operators(f834cfd3-fa41-4790-92c1-3b80d98241af)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-9bgtx_openshift-operators_f834cfd3-fa41-4790-92c1-3b80d98241af_0(9dd92d4b59660fd95561c6c84be17a97a95a7f5b57869346594b9334afa77cb7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-9bgtx" podUID="f834cfd3-fa41-4790-92c1-3b80d98241af" Jan 05 20:16:53 crc kubenswrapper[4754]: E0105 20:16:53.528423 4754 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-78b6f954c6-2m5zf_openshift-operators_29717236-39b9-4568-8df9-028b84d46be8_0(f2495cda15e7b99c9ecc14c4bdff0e3c84e83d5e3d736d2d9af209735098d665): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 05 20:16:53 crc kubenswrapper[4754]: E0105 20:16:53.528499 4754 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-78b6f954c6-2m5zf_openshift-operators_29717236-39b9-4568-8df9-028b84d46be8_0(f2495cda15e7b99c9ecc14c4bdff0e3c84e83d5e3d736d2d9af209735098d665): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78b6f954c6-2m5zf" Jan 05 20:16:53 crc kubenswrapper[4754]: E0105 20:16:53.528527 4754 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-78b6f954c6-2m5zf_openshift-operators_29717236-39b9-4568-8df9-028b84d46be8_0(f2495cda15e7b99c9ecc14c4bdff0e3c84e83d5e3d736d2d9af209735098d665): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78b6f954c6-2m5zf" Jan 05 20:16:53 crc kubenswrapper[4754]: E0105 20:16:53.528585 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-78b6f954c6-2m5zf_openshift-operators(29717236-39b9-4568-8df9-028b84d46be8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-78b6f954c6-2m5zf_openshift-operators(29717236-39b9-4568-8df9-028b84d46be8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-78b6f954c6-2m5zf_openshift-operators_29717236-39b9-4568-8df9-028b84d46be8_0(f2495cda15e7b99c9ecc14c4bdff0e3c84e83d5e3d736d2d9af209735098d665): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78b6f954c6-2m5zf" podUID="29717236-39b9-4568-8df9-028b84d46be8" Jan 05 20:16:53 crc kubenswrapper[4754]: E0105 20:16:53.544536 4754 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-78b6f954c6-42sdd_openshift-operators_a56084cf-6295-48a4-baec-a0fe4f306658_0(ecfe9e05ee502881ab96f3ed397a679faea92401b9de3c8021a65ec21970967d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 05 20:16:53 crc kubenswrapper[4754]: E0105 20:16:53.544604 4754 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-78b6f954c6-42sdd_openshift-operators_a56084cf-6295-48a4-baec-a0fe4f306658_0(ecfe9e05ee502881ab96f3ed397a679faea92401b9de3c8021a65ec21970967d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78b6f954c6-42sdd" Jan 05 20:16:53 crc kubenswrapper[4754]: E0105 20:16:53.544630 4754 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-78b6f954c6-42sdd_openshift-operators_a56084cf-6295-48a4-baec-a0fe4f306658_0(ecfe9e05ee502881ab96f3ed397a679faea92401b9de3c8021a65ec21970967d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-78b6f954c6-42sdd" Jan 05 20:16:53 crc kubenswrapper[4754]: E0105 20:16:53.544689 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-78b6f954c6-42sdd_openshift-operators(a56084cf-6295-48a4-baec-a0fe4f306658)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-78b6f954c6-42sdd_openshift-operators(a56084cf-6295-48a4-baec-a0fe4f306658)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-78b6f954c6-42sdd_openshift-operators_a56084cf-6295-48a4-baec-a0fe4f306658_0(ecfe9e05ee502881ab96f3ed397a679faea92401b9de3c8021a65ec21970967d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78b6f954c6-42sdd" podUID="a56084cf-6295-48a4-baec-a0fe4f306658" Jan 05 20:16:53 crc kubenswrapper[4754]: E0105 20:16:53.549911 4754 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-27lzx_openshift-operators_96d98210-f390-413e-8fe4-96ec610d2071_0(47ce94f2b350609729939cadfeff08af29fae5be8074d292452b615bbbf4385b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 05 20:16:53 crc kubenswrapper[4754]: E0105 20:16:53.549960 4754 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-27lzx_openshift-operators_96d98210-f390-413e-8fe4-96ec610d2071_0(47ce94f2b350609729939cadfeff08af29fae5be8074d292452b615bbbf4385b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-27lzx" Jan 05 20:16:53 crc kubenswrapper[4754]: E0105 20:16:53.549977 4754 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-27lzx_openshift-operators_96d98210-f390-413e-8fe4-96ec610d2071_0(47ce94f2b350609729939cadfeff08af29fae5be8074d292452b615bbbf4385b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-27lzx" Jan 05 20:16:53 crc kubenswrapper[4754]: E0105 20:16:53.550010 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-27lzx_openshift-operators(96d98210-f390-413e-8fe4-96ec610d2071)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-27lzx_openshift-operators(96d98210-f390-413e-8fe4-96ec610d2071)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-27lzx_openshift-operators_96d98210-f390-413e-8fe4-96ec610d2071_0(47ce94f2b350609729939cadfeff08af29fae5be8074d292452b615bbbf4385b): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-27lzx" podUID="96d98210-f390-413e-8fe4-96ec610d2071" Jan 05 20:16:54 crc kubenswrapper[4754]: I0105 20:16:54.381273 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cjpgv" podUID="a3388bf9-4056-4946-a696-c357f23ab2e8" containerName="registry-server" containerID="cri-o://8e94381c1eabf004b8048a3311762f7e6abb6aa056d832bee7a718090aaeea54" gracePeriod=2 Jan 05 20:16:54 crc kubenswrapper[4754]: I0105 20:16:54.381408 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:54 crc kubenswrapper[4754]: I0105 20:16:54.430394 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:16:55 crc kubenswrapper[4754]: I0105 20:16:55.103403 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cjpgv" Jan 05 20:16:55 crc kubenswrapper[4754]: I0105 20:16:55.216886 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3388bf9-4056-4946-a696-c357f23ab2e8-catalog-content\") pod \"a3388bf9-4056-4946-a696-c357f23ab2e8\" (UID: \"a3388bf9-4056-4946-a696-c357f23ab2e8\") " Jan 05 20:16:55 crc kubenswrapper[4754]: I0105 20:16:55.217642 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkkbs\" (UniqueName: \"kubernetes.io/projected/a3388bf9-4056-4946-a696-c357f23ab2e8-kube-api-access-rkkbs\") pod \"a3388bf9-4056-4946-a696-c357f23ab2e8\" (UID: \"a3388bf9-4056-4946-a696-c357f23ab2e8\") " Jan 05 20:16:55 crc kubenswrapper[4754]: I0105 20:16:55.217784 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3388bf9-4056-4946-a696-c357f23ab2e8-utilities\") pod \"a3388bf9-4056-4946-a696-c357f23ab2e8\" (UID: \"a3388bf9-4056-4946-a696-c357f23ab2e8\") " Jan 05 20:16:55 crc kubenswrapper[4754]: I0105 20:16:55.219447 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3388bf9-4056-4946-a696-c357f23ab2e8-utilities" (OuterVolumeSpecName: "utilities") pod "a3388bf9-4056-4946-a696-c357f23ab2e8" (UID: "a3388bf9-4056-4946-a696-c357f23ab2e8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:16:55 crc kubenswrapper[4754]: I0105 20:16:55.225786 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3388bf9-4056-4946-a696-c357f23ab2e8-kube-api-access-rkkbs" (OuterVolumeSpecName: "kube-api-access-rkkbs") pod "a3388bf9-4056-4946-a696-c357f23ab2e8" (UID: "a3388bf9-4056-4946-a696-c357f23ab2e8"). InnerVolumeSpecName "kube-api-access-rkkbs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:16:55 crc kubenswrapper[4754]: I0105 20:16:55.319738 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkkbs\" (UniqueName: \"kubernetes.io/projected/a3388bf9-4056-4946-a696-c357f23ab2e8-kube-api-access-rkkbs\") on node \"crc\" DevicePath \"\"" Jan 05 20:16:55 crc kubenswrapper[4754]: I0105 20:16:55.319787 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3388bf9-4056-4946-a696-c357f23ab2e8-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 20:16:55 crc kubenswrapper[4754]: I0105 20:16:55.344688 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3388bf9-4056-4946-a696-c357f23ab2e8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a3388bf9-4056-4946-a696-c357f23ab2e8" (UID: "a3388bf9-4056-4946-a696-c357f23ab2e8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:16:55 crc kubenswrapper[4754]: I0105 20:16:55.389575 4754 generic.go:334] "Generic (PLEG): container finished" podID="a3388bf9-4056-4946-a696-c357f23ab2e8" containerID="8e94381c1eabf004b8048a3311762f7e6abb6aa056d832bee7a718090aaeea54" exitCode=0 Jan 05 20:16:55 crc kubenswrapper[4754]: I0105 20:16:55.389648 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cjpgv" Jan 05 20:16:55 crc kubenswrapper[4754]: I0105 20:16:55.389821 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cjpgv" event={"ID":"a3388bf9-4056-4946-a696-c357f23ab2e8","Type":"ContainerDied","Data":"8e94381c1eabf004b8048a3311762f7e6abb6aa056d832bee7a718090aaeea54"} Jan 05 20:16:55 crc kubenswrapper[4754]: I0105 20:16:55.389856 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cjpgv" event={"ID":"a3388bf9-4056-4946-a696-c357f23ab2e8","Type":"ContainerDied","Data":"66699a19897d3d4d5903b072837e4ff16f03038594acb1ccb2260f85d0219bcf"} Jan 05 20:16:55 crc kubenswrapper[4754]: I0105 20:16:55.389900 4754 scope.go:117] "RemoveContainer" containerID="8e94381c1eabf004b8048a3311762f7e6abb6aa056d832bee7a718090aaeea54" Jan 05 20:16:55 crc kubenswrapper[4754]: I0105 20:16:55.411099 4754 scope.go:117] "RemoveContainer" containerID="8a469ac649e6744614be77b011375bcde87fb239341e0894161f3cb08a02a50a" Jan 05 20:16:55 crc kubenswrapper[4754]: I0105 20:16:55.422660 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3388bf9-4056-4946-a696-c357f23ab2e8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 20:16:55 crc kubenswrapper[4754]: I0105 20:16:55.427030 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cjpgv"] Jan 05 20:16:55 crc kubenswrapper[4754]: I0105 20:16:55.435165 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cjpgv"] Jan 05 20:16:55 crc kubenswrapper[4754]: I0105 20:16:55.445589 4754 scope.go:117] "RemoveContainer" containerID="543fa4772fb0af8a85a525436427911071980095db0a5407b22297e5972dfb53" Jan 05 20:16:55 crc kubenswrapper[4754]: I0105 20:16:55.470328 4754 scope.go:117] "RemoveContainer" 
containerID="8e94381c1eabf004b8048a3311762f7e6abb6aa056d832bee7a718090aaeea54" Jan 05 20:16:55 crc kubenswrapper[4754]: E0105 20:16:55.471005 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e94381c1eabf004b8048a3311762f7e6abb6aa056d832bee7a718090aaeea54\": container with ID starting with 8e94381c1eabf004b8048a3311762f7e6abb6aa056d832bee7a718090aaeea54 not found: ID does not exist" containerID="8e94381c1eabf004b8048a3311762f7e6abb6aa056d832bee7a718090aaeea54" Jan 05 20:16:55 crc kubenswrapper[4754]: I0105 20:16:55.471064 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e94381c1eabf004b8048a3311762f7e6abb6aa056d832bee7a718090aaeea54"} err="failed to get container status \"8e94381c1eabf004b8048a3311762f7e6abb6aa056d832bee7a718090aaeea54\": rpc error: code = NotFound desc = could not find container \"8e94381c1eabf004b8048a3311762f7e6abb6aa056d832bee7a718090aaeea54\": container with ID starting with 8e94381c1eabf004b8048a3311762f7e6abb6aa056d832bee7a718090aaeea54 not found: ID does not exist" Jan 05 20:16:55 crc kubenswrapper[4754]: I0105 20:16:55.471105 4754 scope.go:117] "RemoveContainer" containerID="8a469ac649e6744614be77b011375bcde87fb239341e0894161f3cb08a02a50a" Jan 05 20:16:55 crc kubenswrapper[4754]: E0105 20:16:55.471722 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a469ac649e6744614be77b011375bcde87fb239341e0894161f3cb08a02a50a\": container with ID starting with 8a469ac649e6744614be77b011375bcde87fb239341e0894161f3cb08a02a50a not found: ID does not exist" containerID="8a469ac649e6744614be77b011375bcde87fb239341e0894161f3cb08a02a50a" Jan 05 20:16:55 crc kubenswrapper[4754]: I0105 20:16:55.471767 4754 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8a469ac649e6744614be77b011375bcde87fb239341e0894161f3cb08a02a50a"} err="failed to get container status \"8a469ac649e6744614be77b011375bcde87fb239341e0894161f3cb08a02a50a\": rpc error: code = NotFound desc = could not find container \"8a469ac649e6744614be77b011375bcde87fb239341e0894161f3cb08a02a50a\": container with ID starting with 8a469ac649e6744614be77b011375bcde87fb239341e0894161f3cb08a02a50a not found: ID does not exist" Jan 05 20:16:55 crc kubenswrapper[4754]: I0105 20:16:55.471822 4754 scope.go:117] "RemoveContainer" containerID="543fa4772fb0af8a85a525436427911071980095db0a5407b22297e5972dfb53" Jan 05 20:16:55 crc kubenswrapper[4754]: E0105 20:16:55.472124 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"543fa4772fb0af8a85a525436427911071980095db0a5407b22297e5972dfb53\": container with ID starting with 543fa4772fb0af8a85a525436427911071980095db0a5407b22297e5972dfb53 not found: ID does not exist" containerID="543fa4772fb0af8a85a525436427911071980095db0a5407b22297e5972dfb53" Jan 05 20:16:55 crc kubenswrapper[4754]: I0105 20:16:55.472181 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"543fa4772fb0af8a85a525436427911071980095db0a5407b22297e5972dfb53"} err="failed to get container status \"543fa4772fb0af8a85a525436427911071980095db0a5407b22297e5972dfb53\": rpc error: code = NotFound desc = could not find container \"543fa4772fb0af8a85a525436427911071980095db0a5407b22297e5972dfb53\": container with ID starting with 543fa4772fb0af8a85a525436427911071980095db0a5407b22297e5972dfb53 not found: ID does not exist" Jan 05 20:16:55 crc kubenswrapper[4754]: I0105 20:16:55.602450 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3388bf9-4056-4946-a696-c357f23ab2e8" path="/var/lib/kubelet/pods/a3388bf9-4056-4946-a696-c357f23ab2e8/volumes" Jan 05 20:17:04 crc kubenswrapper[4754]: I0105 
20:17:04.588160 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-9bgtx" Jan 05 20:17:04 crc kubenswrapper[4754]: I0105 20:17:04.589455 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-9bgtx" Jan 05 20:17:04 crc kubenswrapper[4754]: I0105 20:17:04.822152 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-9bgtx"] Jan 05 20:17:04 crc kubenswrapper[4754]: W0105 20:17:04.829481 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf834cfd3_fa41_4790_92c1_3b80d98241af.slice/crio-a2d8713470d7c5b9cc3952d61f62d9ff55d727eb1743bce636406edb77327d0d WatchSource:0}: Error finding container a2d8713470d7c5b9cc3952d61f62d9ff55d727eb1743bce636406edb77327d0d: Status 404 returned error can't find the container with id a2d8713470d7c5b9cc3952d61f62d9ff55d727eb1743bce636406edb77327d0d Jan 05 20:17:05 crc kubenswrapper[4754]: I0105 20:17:05.463773 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-9bgtx" event={"ID":"f834cfd3-fa41-4790-92c1-3b80d98241af","Type":"ContainerStarted","Data":"a2d8713470d7c5b9cc3952d61f62d9ff55d727eb1743bce636406edb77327d0d"} Jan 05 20:17:06 crc kubenswrapper[4754]: I0105 20:17:06.588206 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78b6f954c6-2m5zf" Jan 05 20:17:06 crc kubenswrapper[4754]: I0105 20:17:06.588455 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78b6f954c6-42sdd" Jan 05 20:17:06 crc kubenswrapper[4754]: I0105 20:17:06.588423 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-2v5pd" Jan 05 20:17:06 crc kubenswrapper[4754]: I0105 20:17:06.588676 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78b6f954c6-2m5zf" Jan 05 20:17:06 crc kubenswrapper[4754]: I0105 20:17:06.588962 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-2v5pd" Jan 05 20:17:06 crc kubenswrapper[4754]: I0105 20:17:06.589352 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78b6f954c6-42sdd" Jan 05 20:17:06 crc kubenswrapper[4754]: I0105 20:17:06.931113 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-2v5pd"] Jan 05 20:17:07 crc kubenswrapper[4754]: I0105 20:17:07.050969 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-78b6f954c6-2m5zf"] Jan 05 20:17:07 crc kubenswrapper[4754]: W0105 20:17:07.060059 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29717236_39b9_4568_8df9_028b84d46be8.slice/crio-036f23db3c3a48405d3f9f3c01063799236902538a6d4b94969e7e8511ed3073 WatchSource:0}: Error finding container 036f23db3c3a48405d3f9f3c01063799236902538a6d4b94969e7e8511ed3073: Status 404 returned error can't find the container with id 036f23db3c3a48405d3f9f3c01063799236902538a6d4b94969e7e8511ed3073 Jan 05 20:17:07 crc kubenswrapper[4754]: I0105 20:17:07.061340 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-78b6f954c6-42sdd"] Jan 05 20:17:07 crc kubenswrapper[4754]: W0105 20:17:07.065819 4754 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda56084cf_6295_48a4_baec_a0fe4f306658.slice/crio-633db19a396b5a57941da3d67134996fb08117876b1bd3cd1e1e3fbd997532aa WatchSource:0}: Error finding container 633db19a396b5a57941da3d67134996fb08117876b1bd3cd1e1e3fbd997532aa: Status 404 returned error can't find the container with id 633db19a396b5a57941da3d67134996fb08117876b1bd3cd1e1e3fbd997532aa Jan 05 20:17:07 crc kubenswrapper[4754]: I0105 20:17:07.478252 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78b6f954c6-42sdd" event={"ID":"a56084cf-6295-48a4-baec-a0fe4f306658","Type":"ContainerStarted","Data":"633db19a396b5a57941da3d67134996fb08117876b1bd3cd1e1e3fbd997532aa"} Jan 05 20:17:07 crc kubenswrapper[4754]: I0105 20:17:07.480506 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-2v5pd" event={"ID":"d67e65cb-cb5a-4721-ba27-3b40ce273ee5","Type":"ContainerStarted","Data":"2b0f7812adc675055993ac1cc54b0d9b433624998f01a6d2c15a37f9917be308"} Jan 05 20:17:07 crc kubenswrapper[4754]: I0105 20:17:07.481641 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78b6f954c6-2m5zf" event={"ID":"29717236-39b9-4568-8df9-028b84d46be8","Type":"ContainerStarted","Data":"036f23db3c3a48405d3f9f3c01063799236902538a6d4b94969e7e8511ed3073"} Jan 05 20:17:07 crc kubenswrapper[4754]: I0105 20:17:07.588455 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-27lzx" Jan 05 20:17:07 crc kubenswrapper[4754]: I0105 20:17:07.589013 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-27lzx" Jan 05 20:17:10 crc kubenswrapper[4754]: I0105 20:17:10.083627 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-27lzx"] Jan 05 20:17:10 crc kubenswrapper[4754]: I0105 20:17:10.502078 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-27lzx" event={"ID":"96d98210-f390-413e-8fe4-96ec610d2071","Type":"ContainerStarted","Data":"499a85c619499f9b8db4ea8faf262a799d37518bb52060e4c3060cf236b3f122"} Jan 05 20:17:12 crc kubenswrapper[4754]: I0105 20:17:12.557012 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78b6f954c6-2m5zf" event={"ID":"29717236-39b9-4568-8df9-028b84d46be8","Type":"ContainerStarted","Data":"940f0e00e0a8398bc9b195ab2979cc743102eb9fe8b91ba5cbddf17fdcec1ad8"} Jan 05 20:17:12 crc kubenswrapper[4754]: I0105 20:17:12.559602 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78b6f954c6-42sdd" event={"ID":"a56084cf-6295-48a4-baec-a0fe4f306658","Type":"ContainerStarted","Data":"de697c0f744bd1f7f9774c94e788af77b2fc24cae22cc2fe62ae53fae855bc79"} Jan 05 20:17:12 crc kubenswrapper[4754]: I0105 20:17:12.562526 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-9bgtx" event={"ID":"f834cfd3-fa41-4790-92c1-3b80d98241af","Type":"ContainerStarted","Data":"d085eec9b0a9e7d5e3d4d3fd171f24f041c36f6dc6c65d9ff41bed7d90cf2124"} Jan 05 20:17:12 crc kubenswrapper[4754]: I0105 20:17:12.562672 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-9bgtx" Jan 05 20:17:12 crc kubenswrapper[4754]: I0105 20:17:12.565606 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-2v5pd" event={"ID":"d67e65cb-cb5a-4721-ba27-3b40ce273ee5","Type":"ContainerStarted","Data":"b976401a5e8eecebcc7ea3ee5bf7a7a9b0f2f7069579dbc369f302e0b9e17bd1"} Jan 05 20:17:12 crc kubenswrapper[4754]: I0105 20:17:12.615769 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-2v5pd" podStartSLOduration=21.179996976 podStartE2EDuration="25.615743095s" podCreationTimestamp="2026-01-05 20:16:47 +0000 UTC" firstStartedPulling="2026-01-05 20:17:06.958453668 +0000 UTC m=+713.667637542" lastFinishedPulling="2026-01-05 20:17:11.394199777 +0000 UTC m=+718.103383661" observedRunningTime="2026-01-05 20:17:12.613369913 +0000 UTC m=+719.322553787" watchObservedRunningTime="2026-01-05 20:17:12.615743095 +0000 UTC m=+719.324926969" Jan 05 20:17:12 crc kubenswrapper[4754]: I0105 20:17:12.617080 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78b6f954c6-2m5zf" podStartSLOduration=20.278168263 podStartE2EDuration="24.61707422s" podCreationTimestamp="2026-01-05 20:16:48 +0000 UTC" firstStartedPulling="2026-01-05 20:17:07.062650783 +0000 UTC m=+713.771834657" lastFinishedPulling="2026-01-05 20:17:11.40155674 +0000 UTC m=+718.110740614" observedRunningTime="2026-01-05 20:17:12.578770095 +0000 UTC m=+719.287954049" watchObservedRunningTime="2026-01-05 20:17:12.61707422 +0000 UTC m=+719.326258094" Jan 05 20:17:12 crc kubenswrapper[4754]: I0105 20:17:12.649501 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-9bgtx" podStartSLOduration=18.119234783 podStartE2EDuration="24.64948077s" podCreationTimestamp="2026-01-05 20:16:48 +0000 UTC" firstStartedPulling="2026-01-05 20:17:04.834362923 +0000 UTC m=+711.543546847" lastFinishedPulling="2026-01-05 20:17:11.36460894 +0000 UTC m=+718.073792834" 
observedRunningTime="2026-01-05 20:17:12.642176849 +0000 UTC m=+719.351360723" watchObservedRunningTime="2026-01-05 20:17:12.64948077 +0000 UTC m=+719.358664644" Jan 05 20:17:12 crc kubenswrapper[4754]: I0105 20:17:12.682091 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78b6f954c6-42sdd" podStartSLOduration=20.354600619 podStartE2EDuration="24.682058395s" podCreationTimestamp="2026-01-05 20:16:48 +0000 UTC" firstStartedPulling="2026-01-05 20:17:07.068872616 +0000 UTC m=+713.778056490" lastFinishedPulling="2026-01-05 20:17:11.396330382 +0000 UTC m=+718.105514266" observedRunningTime="2026-01-05 20:17:12.673898391 +0000 UTC m=+719.383082295" watchObservedRunningTime="2026-01-05 20:17:12.682058395 +0000 UTC m=+719.391242269" Jan 05 20:17:13 crc kubenswrapper[4754]: I0105 20:17:13.252360 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tm7l6" Jan 05 20:17:17 crc kubenswrapper[4754]: I0105 20:17:17.612759 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-27lzx" event={"ID":"96d98210-f390-413e-8fe4-96ec610d2071","Type":"ContainerStarted","Data":"ca900783c9511939c7dc81b3bcb4f6a8a3f0c9c6adcd1c0a555ffd079e0c4188"} Jan 05 20:17:17 crc kubenswrapper[4754]: I0105 20:17:17.613468 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-27lzx" Jan 05 20:17:17 crc kubenswrapper[4754]: I0105 20:17:17.615565 4754 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-27lzx container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.16:8081/healthz\": dial tcp 10.217.0.16:8081: connect: connection refused" start-of-body= Jan 05 20:17:17 crc kubenswrapper[4754]: I0105 20:17:17.615699 4754 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-27lzx" podUID="96d98210-f390-413e-8fe4-96ec610d2071" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.16:8081/healthz\": dial tcp 10.217.0.16:8081: connect: connection refused" Jan 05 20:17:17 crc kubenswrapper[4754]: I0105 20:17:17.651457 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-27lzx" podStartSLOduration=22.982385891 podStartE2EDuration="29.651434461s" podCreationTimestamp="2026-01-05 20:16:48 +0000 UTC" firstStartedPulling="2026-01-05 20:17:10.483634907 +0000 UTC m=+717.192818791" lastFinishedPulling="2026-01-05 20:17:17.152683447 +0000 UTC m=+723.861867361" observedRunningTime="2026-01-05 20:17:17.644332375 +0000 UTC m=+724.353516249" watchObservedRunningTime="2026-01-05 20:17:17.651434461 +0000 UTC m=+724.360618335" Jan 05 20:17:18 crc kubenswrapper[4754]: I0105 20:17:18.109500 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 20:17:18 crc kubenswrapper[4754]: I0105 20:17:18.109566 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 20:17:18 crc kubenswrapper[4754]: I0105 20:17:18.651445 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-27lzx" Jan 05 20:17:19 crc kubenswrapper[4754]: I0105 20:17:19.293780 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-operators/perses-operator-5bf474d74f-9bgtx" Jan 05 20:17:29 crc kubenswrapper[4754]: I0105 20:17:29.776924 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-jjd7h"] Jan 05 20:17:29 crc kubenswrapper[4754]: E0105 20:17:29.777837 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3388bf9-4056-4946-a696-c357f23ab2e8" containerName="registry-server" Jan 05 20:17:29 crc kubenswrapper[4754]: I0105 20:17:29.777853 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3388bf9-4056-4946-a696-c357f23ab2e8" containerName="registry-server" Jan 05 20:17:29 crc kubenswrapper[4754]: E0105 20:17:29.777866 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3388bf9-4056-4946-a696-c357f23ab2e8" containerName="extract-content" Jan 05 20:17:29 crc kubenswrapper[4754]: I0105 20:17:29.777875 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3388bf9-4056-4946-a696-c357f23ab2e8" containerName="extract-content" Jan 05 20:17:29 crc kubenswrapper[4754]: E0105 20:17:29.777886 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3388bf9-4056-4946-a696-c357f23ab2e8" containerName="extract-utilities" Jan 05 20:17:29 crc kubenswrapper[4754]: I0105 20:17:29.777895 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3388bf9-4056-4946-a696-c357f23ab2e8" containerName="extract-utilities" Jan 05 20:17:29 crc kubenswrapper[4754]: I0105 20:17:29.778049 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3388bf9-4056-4946-a696-c357f23ab2e8" containerName="registry-server" Jan 05 20:17:29 crc kubenswrapper[4754]: I0105 20:17:29.778675 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-jjd7h" Jan 05 20:17:29 crc kubenswrapper[4754]: I0105 20:17:29.780533 4754 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-rcdqv" Jan 05 20:17:29 crc kubenswrapper[4754]: I0105 20:17:29.782218 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 05 20:17:29 crc kubenswrapper[4754]: I0105 20:17:29.783384 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 05 20:17:29 crc kubenswrapper[4754]: I0105 20:17:29.791882 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-jjd7h"] Jan 05 20:17:29 crc kubenswrapper[4754]: I0105 20:17:29.798434 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-nqnfz"] Jan 05 20:17:29 crc kubenswrapper[4754]: I0105 20:17:29.799198 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-nqnfz" Jan 05 20:17:29 crc kubenswrapper[4754]: I0105 20:17:29.801602 4754 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-cfnff" Jan 05 20:17:29 crc kubenswrapper[4754]: I0105 20:17:29.819025 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-nqnfz"] Jan 05 20:17:29 crc kubenswrapper[4754]: I0105 20:17:29.838457 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-k59mk"] Jan 05 20:17:29 crc kubenswrapper[4754]: I0105 20:17:29.839448 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-k59mk" Jan 05 20:17:29 crc kubenswrapper[4754]: I0105 20:17:29.843715 4754 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-q6k9j" Jan 05 20:17:29 crc kubenswrapper[4754]: I0105 20:17:29.847132 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-k59mk"] Jan 05 20:17:29 crc kubenswrapper[4754]: I0105 20:17:29.914373 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfgjs\" (UniqueName: \"kubernetes.io/projected/616c6f2a-f08e-450d-9ff1-cad7a75e25b2-kube-api-access-rfgjs\") pod \"cert-manager-webhook-687f57d79b-k59mk\" (UID: \"616c6f2a-f08e-450d-9ff1-cad7a75e25b2\") " pod="cert-manager/cert-manager-webhook-687f57d79b-k59mk" Jan 05 20:17:29 crc kubenswrapper[4754]: I0105 20:17:29.914452 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-595fm\" (UniqueName: \"kubernetes.io/projected/ff71a71f-4340-4712-a3ec-606c3bc81013-kube-api-access-595fm\") pod \"cert-manager-cainjector-cf98fcc89-jjd7h\" (UID: \"ff71a71f-4340-4712-a3ec-606c3bc81013\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-jjd7h" Jan 05 20:17:29 crc kubenswrapper[4754]: I0105 20:17:29.914484 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlwjp\" (UniqueName: \"kubernetes.io/projected/52d348a3-de69-45cc-a625-18fdac495103-kube-api-access-jlwjp\") pod \"cert-manager-858654f9db-nqnfz\" (UID: \"52d348a3-de69-45cc-a625-18fdac495103\") " pod="cert-manager/cert-manager-858654f9db-nqnfz" Jan 05 20:17:30 crc kubenswrapper[4754]: I0105 20:17:30.016438 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-595fm\" (UniqueName: 
\"kubernetes.io/projected/ff71a71f-4340-4712-a3ec-606c3bc81013-kube-api-access-595fm\") pod \"cert-manager-cainjector-cf98fcc89-jjd7h\" (UID: \"ff71a71f-4340-4712-a3ec-606c3bc81013\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-jjd7h" Jan 05 20:17:30 crc kubenswrapper[4754]: I0105 20:17:30.016529 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlwjp\" (UniqueName: \"kubernetes.io/projected/52d348a3-de69-45cc-a625-18fdac495103-kube-api-access-jlwjp\") pod \"cert-manager-858654f9db-nqnfz\" (UID: \"52d348a3-de69-45cc-a625-18fdac495103\") " pod="cert-manager/cert-manager-858654f9db-nqnfz" Jan 05 20:17:30 crc kubenswrapper[4754]: I0105 20:17:30.016670 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfgjs\" (UniqueName: \"kubernetes.io/projected/616c6f2a-f08e-450d-9ff1-cad7a75e25b2-kube-api-access-rfgjs\") pod \"cert-manager-webhook-687f57d79b-k59mk\" (UID: \"616c6f2a-f08e-450d-9ff1-cad7a75e25b2\") " pod="cert-manager/cert-manager-webhook-687f57d79b-k59mk" Jan 05 20:17:30 crc kubenswrapper[4754]: I0105 20:17:30.037832 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-595fm\" (UniqueName: \"kubernetes.io/projected/ff71a71f-4340-4712-a3ec-606c3bc81013-kube-api-access-595fm\") pod \"cert-manager-cainjector-cf98fcc89-jjd7h\" (UID: \"ff71a71f-4340-4712-a3ec-606c3bc81013\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-jjd7h" Jan 05 20:17:30 crc kubenswrapper[4754]: I0105 20:17:30.042489 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfgjs\" (UniqueName: \"kubernetes.io/projected/616c6f2a-f08e-450d-9ff1-cad7a75e25b2-kube-api-access-rfgjs\") pod \"cert-manager-webhook-687f57d79b-k59mk\" (UID: \"616c6f2a-f08e-450d-9ff1-cad7a75e25b2\") " pod="cert-manager/cert-manager-webhook-687f57d79b-k59mk" Jan 05 20:17:30 crc kubenswrapper[4754]: I0105 20:17:30.049016 4754 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlwjp\" (UniqueName: \"kubernetes.io/projected/52d348a3-de69-45cc-a625-18fdac495103-kube-api-access-jlwjp\") pod \"cert-manager-858654f9db-nqnfz\" (UID: \"52d348a3-de69-45cc-a625-18fdac495103\") " pod="cert-manager/cert-manager-858654f9db-nqnfz" Jan 05 20:17:30 crc kubenswrapper[4754]: I0105 20:17:30.104781 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-jjd7h" Jan 05 20:17:30 crc kubenswrapper[4754]: I0105 20:17:30.124167 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-nqnfz" Jan 05 20:17:30 crc kubenswrapper[4754]: I0105 20:17:30.152943 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-k59mk" Jan 05 20:17:30 crc kubenswrapper[4754]: I0105 20:17:30.519161 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-jjd7h"] Jan 05 20:17:30 crc kubenswrapper[4754]: W0105 20:17:30.521989 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff71a71f_4340_4712_a3ec_606c3bc81013.slice/crio-4b418262e3feffb35c91096f00ac3f5321105e36c58eec7965afb17ecd4117dc WatchSource:0}: Error finding container 4b418262e3feffb35c91096f00ac3f5321105e36c58eec7965afb17ecd4117dc: Status 404 returned error can't find the container with id 4b418262e3feffb35c91096f00ac3f5321105e36c58eec7965afb17ecd4117dc Jan 05 20:17:30 crc kubenswrapper[4754]: I0105 20:17:30.720198 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-jjd7h" event={"ID":"ff71a71f-4340-4712-a3ec-606c3bc81013","Type":"ContainerStarted","Data":"4b418262e3feffb35c91096f00ac3f5321105e36c58eec7965afb17ecd4117dc"} Jan 05 20:17:30 crc 
kubenswrapper[4754]: I0105 20:17:30.788494 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-nqnfz"] Jan 05 20:17:30 crc kubenswrapper[4754]: W0105 20:17:30.794650 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52d348a3_de69_45cc_a625_18fdac495103.slice/crio-d5ef3204df5c598da6f9975e515b9f026dbfef224edad6b73346d0aceb4970a9 WatchSource:0}: Error finding container d5ef3204df5c598da6f9975e515b9f026dbfef224edad6b73346d0aceb4970a9: Status 404 returned error can't find the container with id d5ef3204df5c598da6f9975e515b9f026dbfef224edad6b73346d0aceb4970a9 Jan 05 20:17:30 crc kubenswrapper[4754]: I0105 20:17:30.810891 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-k59mk"] Jan 05 20:17:30 crc kubenswrapper[4754]: W0105 20:17:30.813845 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod616c6f2a_f08e_450d_9ff1_cad7a75e25b2.slice/crio-7479fe62cf176c512f0d37d44c05f23d7203af166a003844d0d1a8205faf63dc WatchSource:0}: Error finding container 7479fe62cf176c512f0d37d44c05f23d7203af166a003844d0d1a8205faf63dc: Status 404 returned error can't find the container with id 7479fe62cf176c512f0d37d44c05f23d7203af166a003844d0d1a8205faf63dc Jan 05 20:17:31 crc kubenswrapper[4754]: I0105 20:17:31.735587 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-k59mk" event={"ID":"616c6f2a-f08e-450d-9ff1-cad7a75e25b2","Type":"ContainerStarted","Data":"7479fe62cf176c512f0d37d44c05f23d7203af166a003844d0d1a8205faf63dc"} Jan 05 20:17:31 crc kubenswrapper[4754]: I0105 20:17:31.737833 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-nqnfz" 
event={"ID":"52d348a3-de69-45cc-a625-18fdac495103","Type":"ContainerStarted","Data":"d5ef3204df5c598da6f9975e515b9f026dbfef224edad6b73346d0aceb4970a9"} Jan 05 20:17:34 crc kubenswrapper[4754]: I0105 20:17:34.791890 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-jjd7h" event={"ID":"ff71a71f-4340-4712-a3ec-606c3bc81013","Type":"ContainerStarted","Data":"bc4ca7a69721085b45fd50124f209659b2fb539d6368c91981fa6c51b76f5a16"} Jan 05 20:17:34 crc kubenswrapper[4754]: I0105 20:17:34.810135 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-jjd7h" podStartSLOduration=2.342575763 podStartE2EDuration="5.810112217s" podCreationTimestamp="2026-01-05 20:17:29 +0000 UTC" firstStartedPulling="2026-01-05 20:17:30.52411663 +0000 UTC m=+737.233300504" lastFinishedPulling="2026-01-05 20:17:33.991653084 +0000 UTC m=+740.700836958" observedRunningTime="2026-01-05 20:17:34.808851974 +0000 UTC m=+741.518035848" watchObservedRunningTime="2026-01-05 20:17:34.810112217 +0000 UTC m=+741.519296091" Jan 05 20:17:36 crc kubenswrapper[4754]: I0105 20:17:36.806544 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-k59mk" event={"ID":"616c6f2a-f08e-450d-9ff1-cad7a75e25b2","Type":"ContainerStarted","Data":"dfceb1869d650eec017e6c7fdb99cd75be8cffa19185b02ffdbae6ae19307929"} Jan 05 20:17:36 crc kubenswrapper[4754]: I0105 20:17:36.807051 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-k59mk" Jan 05 20:17:36 crc kubenswrapper[4754]: I0105 20:17:36.808968 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-nqnfz" event={"ID":"52d348a3-de69-45cc-a625-18fdac495103","Type":"ContainerStarted","Data":"cc58e144429b4e66567abb98413822f9c5fddcb1db7855daf0793d9e07b6cab8"} Jan 05 20:17:36 crc kubenswrapper[4754]: I0105 
20:17:36.823372 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-k59mk" podStartSLOduration=2.374005587 podStartE2EDuration="7.823347076s" podCreationTimestamp="2026-01-05 20:17:29 +0000 UTC" firstStartedPulling="2026-01-05 20:17:30.81761165 +0000 UTC m=+737.526795554" lastFinishedPulling="2026-01-05 20:17:36.266953169 +0000 UTC m=+742.976137043" observedRunningTime="2026-01-05 20:17:36.819801063 +0000 UTC m=+743.528984937" watchObservedRunningTime="2026-01-05 20:17:36.823347076 +0000 UTC m=+743.532530970" Jan 05 20:17:36 crc kubenswrapper[4754]: I0105 20:17:36.844205 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-nqnfz" podStartSLOduration=2.373525355 podStartE2EDuration="7.844177493s" podCreationTimestamp="2026-01-05 20:17:29 +0000 UTC" firstStartedPulling="2026-01-05 20:17:30.800542532 +0000 UTC m=+737.509726446" lastFinishedPulling="2026-01-05 20:17:36.27119471 +0000 UTC m=+742.980378584" observedRunningTime="2026-01-05 20:17:36.843246958 +0000 UTC m=+743.552430822" watchObservedRunningTime="2026-01-05 20:17:36.844177493 +0000 UTC m=+743.553361387" Jan 05 20:17:45 crc kubenswrapper[4754]: I0105 20:17:45.158341 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-k59mk" Jan 05 20:17:48 crc kubenswrapper[4754]: I0105 20:17:48.110042 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 20:17:48 crc kubenswrapper[4754]: I0105 20:17:48.110475 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 20:17:48 crc kubenswrapper[4754]: I0105 20:17:48.110545 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" Jan 05 20:17:48 crc kubenswrapper[4754]: I0105 20:17:48.111358 4754 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1e98f521ad5f4e6f85b963d7fa920ca62316ae5085f0399160b8942987ac9d7a"} pod="openshift-machine-config-operator/machine-config-daemon-pkzls" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 05 20:17:48 crc kubenswrapper[4754]: I0105 20:17:48.111446 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" containerID="cri-o://1e98f521ad5f4e6f85b963d7fa920ca62316ae5085f0399160b8942987ac9d7a" gracePeriod=600 Jan 05 20:17:49 crc kubenswrapper[4754]: I0105 20:17:49.956766 4754 generic.go:334] "Generic (PLEG): container finished" podID="1ce145f2-f010-4086-963c-23e68ff9e280" containerID="1e98f521ad5f4e6f85b963d7fa920ca62316ae5085f0399160b8942987ac9d7a" exitCode=0 Jan 05 20:17:49 crc kubenswrapper[4754]: I0105 20:17:49.956901 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" event={"ID":"1ce145f2-f010-4086-963c-23e68ff9e280","Type":"ContainerDied","Data":"1e98f521ad5f4e6f85b963d7fa920ca62316ae5085f0399160b8942987ac9d7a"} Jan 05 20:17:49 crc kubenswrapper[4754]: I0105 20:17:49.957381 4754 scope.go:117] "RemoveContainer" containerID="195c8d8797ae7467acd40c604546092b92a46f48a1381375300062c8b853d320" Jan 05 20:17:50 crc 
kubenswrapper[4754]: I0105 20:17:50.969580 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" event={"ID":"1ce145f2-f010-4086-963c-23e68ff9e280","Type":"ContainerStarted","Data":"d2a44ceb70c9b418a71e277628ae418adb7db249088112b846b1cbe05a8c0760"} Jan 05 20:18:16 crc kubenswrapper[4754]: I0105 20:18:16.759750 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkmlrf"] Jan 05 20:18:16 crc kubenswrapper[4754]: I0105 20:18:16.761590 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkmlrf" Jan 05 20:18:16 crc kubenswrapper[4754]: I0105 20:18:16.766320 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 05 20:18:16 crc kubenswrapper[4754]: I0105 20:18:16.778589 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkmlrf"] Jan 05 20:18:16 crc kubenswrapper[4754]: I0105 20:18:16.789173 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e56f1313-f154-41eb-b7ce-95aba2d55d7d-bundle\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkmlrf\" (UID: \"e56f1313-f154-41eb-b7ce-95aba2d55d7d\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkmlrf" Jan 05 20:18:16 crc kubenswrapper[4754]: I0105 20:18:16.789322 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e56f1313-f154-41eb-b7ce-95aba2d55d7d-util\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkmlrf\" (UID: \"e56f1313-f154-41eb-b7ce-95aba2d55d7d\") " 
pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkmlrf" Jan 05 20:18:16 crc kubenswrapper[4754]: I0105 20:18:16.789511 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxr69\" (UniqueName: \"kubernetes.io/projected/e56f1313-f154-41eb-b7ce-95aba2d55d7d-kube-api-access-wxr69\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkmlrf\" (UID: \"e56f1313-f154-41eb-b7ce-95aba2d55d7d\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkmlrf" Jan 05 20:18:16 crc kubenswrapper[4754]: I0105 20:18:16.891412 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e56f1313-f154-41eb-b7ce-95aba2d55d7d-bundle\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkmlrf\" (UID: \"e56f1313-f154-41eb-b7ce-95aba2d55d7d\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkmlrf" Jan 05 20:18:16 crc kubenswrapper[4754]: I0105 20:18:16.891503 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e56f1313-f154-41eb-b7ce-95aba2d55d7d-util\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkmlrf\" (UID: \"e56f1313-f154-41eb-b7ce-95aba2d55d7d\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkmlrf" Jan 05 20:18:16 crc kubenswrapper[4754]: I0105 20:18:16.891576 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxr69\" (UniqueName: \"kubernetes.io/projected/e56f1313-f154-41eb-b7ce-95aba2d55d7d-kube-api-access-wxr69\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkmlrf\" (UID: \"e56f1313-f154-41eb-b7ce-95aba2d55d7d\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkmlrf" Jan 05 
20:18:16 crc kubenswrapper[4754]: I0105 20:18:16.892183 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e56f1313-f154-41eb-b7ce-95aba2d55d7d-bundle\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkmlrf\" (UID: \"e56f1313-f154-41eb-b7ce-95aba2d55d7d\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkmlrf" Jan 05 20:18:16 crc kubenswrapper[4754]: I0105 20:18:16.892208 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e56f1313-f154-41eb-b7ce-95aba2d55d7d-util\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkmlrf\" (UID: \"e56f1313-f154-41eb-b7ce-95aba2d55d7d\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkmlrf" Jan 05 20:18:16 crc kubenswrapper[4754]: I0105 20:18:16.922210 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxr69\" (UniqueName: \"kubernetes.io/projected/e56f1313-f154-41eb-b7ce-95aba2d55d7d-kube-api-access-wxr69\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkmlrf\" (UID: \"e56f1313-f154-41eb-b7ce-95aba2d55d7d\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkmlrf" Jan 05 20:18:17 crc kubenswrapper[4754]: I0105 20:18:17.079989 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkmlrf" Jan 05 20:18:17 crc kubenswrapper[4754]: I0105 20:18:17.120691 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2x6gfw"] Jan 05 20:18:17 crc kubenswrapper[4754]: I0105 20:18:17.122333 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2x6gfw" Jan 05 20:18:17 crc kubenswrapper[4754]: I0105 20:18:17.145843 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2x6gfw"] Jan 05 20:18:17 crc kubenswrapper[4754]: I0105 20:18:17.298382 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/025b0b79-3f3c-45ec-bfda-fc89123813af-bundle\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2x6gfw\" (UID: \"025b0b79-3f3c-45ec-bfda-fc89123813af\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2x6gfw" Jan 05 20:18:17 crc kubenswrapper[4754]: I0105 20:18:17.298698 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crlh8\" (UniqueName: \"kubernetes.io/projected/025b0b79-3f3c-45ec-bfda-fc89123813af-kube-api-access-crlh8\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2x6gfw\" (UID: \"025b0b79-3f3c-45ec-bfda-fc89123813af\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2x6gfw" Jan 05 20:18:17 crc kubenswrapper[4754]: I0105 20:18:17.298881 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/025b0b79-3f3c-45ec-bfda-fc89123813af-util\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2x6gfw\" (UID: \"025b0b79-3f3c-45ec-bfda-fc89123813af\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2x6gfw" Jan 05 20:18:17 crc kubenswrapper[4754]: I0105 20:18:17.311367 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkmlrf"] Jan 05 
20:18:17 crc kubenswrapper[4754]: W0105 20:18:17.312121 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode56f1313_f154_41eb_b7ce_95aba2d55d7d.slice/crio-82b361e9a687413f3bbbd05e54ecc5f5d84853fcd0f208def80249454b0e61c9 WatchSource:0}: Error finding container 82b361e9a687413f3bbbd05e54ecc5f5d84853fcd0f208def80249454b0e61c9: Status 404 returned error can't find the container with id 82b361e9a687413f3bbbd05e54ecc5f5d84853fcd0f208def80249454b0e61c9 Jan 05 20:18:17 crc kubenswrapper[4754]: I0105 20:18:17.400405 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/025b0b79-3f3c-45ec-bfda-fc89123813af-util\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2x6gfw\" (UID: \"025b0b79-3f3c-45ec-bfda-fc89123813af\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2x6gfw" Jan 05 20:18:17 crc kubenswrapper[4754]: I0105 20:18:17.400529 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/025b0b79-3f3c-45ec-bfda-fc89123813af-bundle\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2x6gfw\" (UID: \"025b0b79-3f3c-45ec-bfda-fc89123813af\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2x6gfw" Jan 05 20:18:17 crc kubenswrapper[4754]: I0105 20:18:17.400565 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crlh8\" (UniqueName: \"kubernetes.io/projected/025b0b79-3f3c-45ec-bfda-fc89123813af-kube-api-access-crlh8\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2x6gfw\" (UID: \"025b0b79-3f3c-45ec-bfda-fc89123813af\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2x6gfw" Jan 05 20:18:17 crc kubenswrapper[4754]: I0105 20:18:17.401193 4754 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/025b0b79-3f3c-45ec-bfda-fc89123813af-util\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2x6gfw\" (UID: \"025b0b79-3f3c-45ec-bfda-fc89123813af\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2x6gfw" Jan 05 20:18:17 crc kubenswrapper[4754]: I0105 20:18:17.401267 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/025b0b79-3f3c-45ec-bfda-fc89123813af-bundle\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2x6gfw\" (UID: \"025b0b79-3f3c-45ec-bfda-fc89123813af\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2x6gfw" Jan 05 20:18:17 crc kubenswrapper[4754]: I0105 20:18:17.423202 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crlh8\" (UniqueName: \"kubernetes.io/projected/025b0b79-3f3c-45ec-bfda-fc89123813af-kube-api-access-crlh8\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2x6gfw\" (UID: \"025b0b79-3f3c-45ec-bfda-fc89123813af\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2x6gfw" Jan 05 20:18:17 crc kubenswrapper[4754]: I0105 20:18:17.460838 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2x6gfw" Jan 05 20:18:17 crc kubenswrapper[4754]: I0105 20:18:17.710410 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2x6gfw"] Jan 05 20:18:17 crc kubenswrapper[4754]: W0105 20:18:17.718733 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod025b0b79_3f3c_45ec_bfda_fc89123813af.slice/crio-57b21ee535880c12efa27530a8a6623916d2fe30f03ecfb3d82be7dd69567c34 WatchSource:0}: Error finding container 57b21ee535880c12efa27530a8a6623916d2fe30f03ecfb3d82be7dd69567c34: Status 404 returned error can't find the container with id 57b21ee535880c12efa27530a8a6623916d2fe30f03ecfb3d82be7dd69567c34 Jan 05 20:18:18 crc kubenswrapper[4754]: I0105 20:18:18.186075 4754 generic.go:334] "Generic (PLEG): container finished" podID="025b0b79-3f3c-45ec-bfda-fc89123813af" containerID="323ba3e10ca385805178f7c3e29692fa3d6487cd55c4aec78d7b87a6540eaaf8" exitCode=0 Jan 05 20:18:18 crc kubenswrapper[4754]: I0105 20:18:18.187441 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2x6gfw" event={"ID":"025b0b79-3f3c-45ec-bfda-fc89123813af","Type":"ContainerDied","Data":"323ba3e10ca385805178f7c3e29692fa3d6487cd55c4aec78d7b87a6540eaaf8"} Jan 05 20:18:18 crc kubenswrapper[4754]: I0105 20:18:18.187507 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2x6gfw" event={"ID":"025b0b79-3f3c-45ec-bfda-fc89123813af","Type":"ContainerStarted","Data":"57b21ee535880c12efa27530a8a6623916d2fe30f03ecfb3d82be7dd69567c34"} Jan 05 20:18:18 crc kubenswrapper[4754]: I0105 20:18:18.190067 4754 generic.go:334] "Generic (PLEG): container finished" 
podID="e56f1313-f154-41eb-b7ce-95aba2d55d7d" containerID="379c910b94f090695d9f1eb08092f356e9815ed5d0f0d8fbe8515c9db0a5e8dc" exitCode=0 Jan 05 20:18:18 crc kubenswrapper[4754]: I0105 20:18:18.190115 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkmlrf" event={"ID":"e56f1313-f154-41eb-b7ce-95aba2d55d7d","Type":"ContainerDied","Data":"379c910b94f090695d9f1eb08092f356e9815ed5d0f0d8fbe8515c9db0a5e8dc"} Jan 05 20:18:18 crc kubenswrapper[4754]: I0105 20:18:18.190143 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkmlrf" event={"ID":"e56f1313-f154-41eb-b7ce-95aba2d55d7d","Type":"ContainerStarted","Data":"82b361e9a687413f3bbbd05e54ecc5f5d84853fcd0f208def80249454b0e61c9"} Jan 05 20:18:20 crc kubenswrapper[4754]: I0105 20:18:20.209121 4754 generic.go:334] "Generic (PLEG): container finished" podID="025b0b79-3f3c-45ec-bfda-fc89123813af" containerID="23c9fe92c8d4cd99689a16308b346683e26fdb5e0fbc025be11f7deae51b5e7c" exitCode=0 Jan 05 20:18:20 crc kubenswrapper[4754]: I0105 20:18:20.209168 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2x6gfw" event={"ID":"025b0b79-3f3c-45ec-bfda-fc89123813af","Type":"ContainerDied","Data":"23c9fe92c8d4cd99689a16308b346683e26fdb5e0fbc025be11f7deae51b5e7c"} Jan 05 20:18:20 crc kubenswrapper[4754]: I0105 20:18:20.213550 4754 generic.go:334] "Generic (PLEG): container finished" podID="e56f1313-f154-41eb-b7ce-95aba2d55d7d" containerID="f997033e950b747cd0abd05b34bf5e28b3c3f0301b5f2bda53fbde986daf7580" exitCode=0 Jan 05 20:18:20 crc kubenswrapper[4754]: I0105 20:18:20.213643 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkmlrf" 
event={"ID":"e56f1313-f154-41eb-b7ce-95aba2d55d7d","Type":"ContainerDied","Data":"f997033e950b747cd0abd05b34bf5e28b3c3f0301b5f2bda53fbde986daf7580"} Jan 05 20:18:21 crc kubenswrapper[4754]: I0105 20:18:21.230220 4754 generic.go:334] "Generic (PLEG): container finished" podID="025b0b79-3f3c-45ec-bfda-fc89123813af" containerID="c9a1c741a8ab62ff022b2e4c4258d77cc69761cbd8d0b29029d57290a46aa871" exitCode=0 Jan 05 20:18:21 crc kubenswrapper[4754]: I0105 20:18:21.230698 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2x6gfw" event={"ID":"025b0b79-3f3c-45ec-bfda-fc89123813af","Type":"ContainerDied","Data":"c9a1c741a8ab62ff022b2e4c4258d77cc69761cbd8d0b29029d57290a46aa871"} Jan 05 20:18:21 crc kubenswrapper[4754]: I0105 20:18:21.236692 4754 generic.go:334] "Generic (PLEG): container finished" podID="e56f1313-f154-41eb-b7ce-95aba2d55d7d" containerID="d7bb84475abd73096e00540b6d2e7904d7cbf83f07479a3f1a07fb03978ac4ac" exitCode=0 Jan 05 20:18:21 crc kubenswrapper[4754]: I0105 20:18:21.236750 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkmlrf" event={"ID":"e56f1313-f154-41eb-b7ce-95aba2d55d7d","Type":"ContainerDied","Data":"d7bb84475abd73096e00540b6d2e7904d7cbf83f07479a3f1a07fb03978ac4ac"} Jan 05 20:18:22 crc kubenswrapper[4754]: I0105 20:18:22.609529 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkmlrf" Jan 05 20:18:22 crc kubenswrapper[4754]: I0105 20:18:22.614176 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2x6gfw" Jan 05 20:18:22 crc kubenswrapper[4754]: I0105 20:18:22.700280 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e56f1313-f154-41eb-b7ce-95aba2d55d7d-bundle\") pod \"e56f1313-f154-41eb-b7ce-95aba2d55d7d\" (UID: \"e56f1313-f154-41eb-b7ce-95aba2d55d7d\") " Jan 05 20:18:22 crc kubenswrapper[4754]: I0105 20:18:22.700425 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxr69\" (UniqueName: \"kubernetes.io/projected/e56f1313-f154-41eb-b7ce-95aba2d55d7d-kube-api-access-wxr69\") pod \"e56f1313-f154-41eb-b7ce-95aba2d55d7d\" (UID: \"e56f1313-f154-41eb-b7ce-95aba2d55d7d\") " Jan 05 20:18:22 crc kubenswrapper[4754]: I0105 20:18:22.700610 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e56f1313-f154-41eb-b7ce-95aba2d55d7d-util\") pod \"e56f1313-f154-41eb-b7ce-95aba2d55d7d\" (UID: \"e56f1313-f154-41eb-b7ce-95aba2d55d7d\") " Jan 05 20:18:22 crc kubenswrapper[4754]: I0105 20:18:22.701672 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e56f1313-f154-41eb-b7ce-95aba2d55d7d-bundle" (OuterVolumeSpecName: "bundle") pod "e56f1313-f154-41eb-b7ce-95aba2d55d7d" (UID: "e56f1313-f154-41eb-b7ce-95aba2d55d7d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:18:22 crc kubenswrapper[4754]: I0105 20:18:22.710625 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e56f1313-f154-41eb-b7ce-95aba2d55d7d-kube-api-access-wxr69" (OuterVolumeSpecName: "kube-api-access-wxr69") pod "e56f1313-f154-41eb-b7ce-95aba2d55d7d" (UID: "e56f1313-f154-41eb-b7ce-95aba2d55d7d"). InnerVolumeSpecName "kube-api-access-wxr69". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:18:22 crc kubenswrapper[4754]: I0105 20:18:22.735192 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e56f1313-f154-41eb-b7ce-95aba2d55d7d-util" (OuterVolumeSpecName: "util") pod "e56f1313-f154-41eb-b7ce-95aba2d55d7d" (UID: "e56f1313-f154-41eb-b7ce-95aba2d55d7d"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:18:22 crc kubenswrapper[4754]: I0105 20:18:22.802841 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crlh8\" (UniqueName: \"kubernetes.io/projected/025b0b79-3f3c-45ec-bfda-fc89123813af-kube-api-access-crlh8\") pod \"025b0b79-3f3c-45ec-bfda-fc89123813af\" (UID: \"025b0b79-3f3c-45ec-bfda-fc89123813af\") " Jan 05 20:18:22 crc kubenswrapper[4754]: I0105 20:18:22.802982 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/025b0b79-3f3c-45ec-bfda-fc89123813af-bundle\") pod \"025b0b79-3f3c-45ec-bfda-fc89123813af\" (UID: \"025b0b79-3f3c-45ec-bfda-fc89123813af\") " Jan 05 20:18:22 crc kubenswrapper[4754]: I0105 20:18:22.803231 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/025b0b79-3f3c-45ec-bfda-fc89123813af-util\") pod \"025b0b79-3f3c-45ec-bfda-fc89123813af\" (UID: \"025b0b79-3f3c-45ec-bfda-fc89123813af\") " Jan 05 20:18:22 crc kubenswrapper[4754]: I0105 20:18:22.803824 4754 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e56f1313-f154-41eb-b7ce-95aba2d55d7d-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:18:22 crc kubenswrapper[4754]: I0105 20:18:22.803859 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxr69\" (UniqueName: 
\"kubernetes.io/projected/e56f1313-f154-41eb-b7ce-95aba2d55d7d-kube-api-access-wxr69\") on node \"crc\" DevicePath \"\"" Jan 05 20:18:22 crc kubenswrapper[4754]: I0105 20:18:22.803883 4754 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e56f1313-f154-41eb-b7ce-95aba2d55d7d-util\") on node \"crc\" DevicePath \"\"" Jan 05 20:18:22 crc kubenswrapper[4754]: I0105 20:18:22.804467 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/025b0b79-3f3c-45ec-bfda-fc89123813af-bundle" (OuterVolumeSpecName: "bundle") pod "025b0b79-3f3c-45ec-bfda-fc89123813af" (UID: "025b0b79-3f3c-45ec-bfda-fc89123813af"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:18:22 crc kubenswrapper[4754]: I0105 20:18:22.807928 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/025b0b79-3f3c-45ec-bfda-fc89123813af-kube-api-access-crlh8" (OuterVolumeSpecName: "kube-api-access-crlh8") pod "025b0b79-3f3c-45ec-bfda-fc89123813af" (UID: "025b0b79-3f3c-45ec-bfda-fc89123813af"). InnerVolumeSpecName "kube-api-access-crlh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:18:22 crc kubenswrapper[4754]: I0105 20:18:22.826833 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/025b0b79-3f3c-45ec-bfda-fc89123813af-util" (OuterVolumeSpecName: "util") pod "025b0b79-3f3c-45ec-bfda-fc89123813af" (UID: "025b0b79-3f3c-45ec-bfda-fc89123813af"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:18:22 crc kubenswrapper[4754]: I0105 20:18:22.906079 4754 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/025b0b79-3f3c-45ec-bfda-fc89123813af-util\") on node \"crc\" DevicePath \"\"" Jan 05 20:18:22 crc kubenswrapper[4754]: I0105 20:18:22.906147 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crlh8\" (UniqueName: \"kubernetes.io/projected/025b0b79-3f3c-45ec-bfda-fc89123813af-kube-api-access-crlh8\") on node \"crc\" DevicePath \"\"" Jan 05 20:18:22 crc kubenswrapper[4754]: I0105 20:18:22.906170 4754 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/025b0b79-3f3c-45ec-bfda-fc89123813af-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:18:23 crc kubenswrapper[4754]: I0105 20:18:23.260010 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2x6gfw" event={"ID":"025b0b79-3f3c-45ec-bfda-fc89123813af","Type":"ContainerDied","Data":"57b21ee535880c12efa27530a8a6623916d2fe30f03ecfb3d82be7dd69567c34"} Jan 05 20:18:23 crc kubenswrapper[4754]: I0105 20:18:23.260455 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57b21ee535880c12efa27530a8a6623916d2fe30f03ecfb3d82be7dd69567c34" Jan 05 20:18:23 crc kubenswrapper[4754]: I0105 20:18:23.260098 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2x6gfw" Jan 05 20:18:23 crc kubenswrapper[4754]: I0105 20:18:23.264060 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkmlrf" event={"ID":"e56f1313-f154-41eb-b7ce-95aba2d55d7d","Type":"ContainerDied","Data":"82b361e9a687413f3bbbd05e54ecc5f5d84853fcd0f208def80249454b0e61c9"} Jan 05 20:18:23 crc kubenswrapper[4754]: I0105 20:18:23.264105 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkmlrf" Jan 05 20:18:23 crc kubenswrapper[4754]: I0105 20:18:23.264120 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82b361e9a687413f3bbbd05e54ecc5f5d84853fcd0f208def80249454b0e61c9" Jan 05 20:18:32 crc kubenswrapper[4754]: I0105 20:18:32.497808 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5c7d94bdc4-k9968"] Jan 05 20:18:32 crc kubenswrapper[4754]: E0105 20:18:32.498598 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e56f1313-f154-41eb-b7ce-95aba2d55d7d" containerName="pull" Jan 05 20:18:32 crc kubenswrapper[4754]: I0105 20:18:32.498610 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="e56f1313-f154-41eb-b7ce-95aba2d55d7d" containerName="pull" Jan 05 20:18:32 crc kubenswrapper[4754]: E0105 20:18:32.498623 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="025b0b79-3f3c-45ec-bfda-fc89123813af" containerName="extract" Jan 05 20:18:32 crc kubenswrapper[4754]: I0105 20:18:32.498629 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="025b0b79-3f3c-45ec-bfda-fc89123813af" containerName="extract" Jan 05 20:18:32 crc kubenswrapper[4754]: E0105 20:18:32.498640 4754 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="025b0b79-3f3c-45ec-bfda-fc89123813af" containerName="util" Jan 05 20:18:32 crc kubenswrapper[4754]: I0105 20:18:32.498646 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="025b0b79-3f3c-45ec-bfda-fc89123813af" containerName="util" Jan 05 20:18:32 crc kubenswrapper[4754]: E0105 20:18:32.498660 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e56f1313-f154-41eb-b7ce-95aba2d55d7d" containerName="extract" Jan 05 20:18:32 crc kubenswrapper[4754]: I0105 20:18:32.498665 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="e56f1313-f154-41eb-b7ce-95aba2d55d7d" containerName="extract" Jan 05 20:18:32 crc kubenswrapper[4754]: E0105 20:18:32.498675 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e56f1313-f154-41eb-b7ce-95aba2d55d7d" containerName="util" Jan 05 20:18:32 crc kubenswrapper[4754]: I0105 20:18:32.498680 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="e56f1313-f154-41eb-b7ce-95aba2d55d7d" containerName="util" Jan 05 20:18:32 crc kubenswrapper[4754]: E0105 20:18:32.498689 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="025b0b79-3f3c-45ec-bfda-fc89123813af" containerName="pull" Jan 05 20:18:32 crc kubenswrapper[4754]: I0105 20:18:32.498694 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="025b0b79-3f3c-45ec-bfda-fc89123813af" containerName="pull" Jan 05 20:18:32 crc kubenswrapper[4754]: I0105 20:18:32.498804 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="e56f1313-f154-41eb-b7ce-95aba2d55d7d" containerName="extract" Jan 05 20:18:32 crc kubenswrapper[4754]: I0105 20:18:32.498811 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="025b0b79-3f3c-45ec-bfda-fc89123813af" containerName="extract" Jan 05 20:18:32 crc kubenswrapper[4754]: I0105 20:18:32.499469 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-5c7d94bdc4-k9968" Jan 05 20:18:32 crc kubenswrapper[4754]: I0105 20:18:32.503137 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Jan 05 20:18:32 crc kubenswrapper[4754]: I0105 20:18:32.503424 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-td2mx" Jan 05 20:18:32 crc kubenswrapper[4754]: I0105 20:18:32.503993 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Jan 05 20:18:32 crc kubenswrapper[4754]: I0105 20:18:32.504078 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Jan 05 20:18:32 crc kubenswrapper[4754]: I0105 20:18:32.505682 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Jan 05 20:18:32 crc kubenswrapper[4754]: I0105 20:18:32.509397 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Jan 05 20:18:32 crc kubenswrapper[4754]: I0105 20:18:32.561949 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5c7d94bdc4-k9968"] Jan 05 20:18:32 crc kubenswrapper[4754]: I0105 20:18:32.576223 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nglgv\" (UniqueName: \"kubernetes.io/projected/163085a0-0b43-4d21-aefc-ec28ba9c6e3f-kube-api-access-nglgv\") pod \"loki-operator-controller-manager-5c7d94bdc4-k9968\" (UID: \"163085a0-0b43-4d21-aefc-ec28ba9c6e3f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c7d94bdc4-k9968" Jan 05 20:18:32 crc kubenswrapper[4754]: 
I0105 20:18:32.576278 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/163085a0-0b43-4d21-aefc-ec28ba9c6e3f-apiservice-cert\") pod \"loki-operator-controller-manager-5c7d94bdc4-k9968\" (UID: \"163085a0-0b43-4d21-aefc-ec28ba9c6e3f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c7d94bdc4-k9968" Jan 05 20:18:32 crc kubenswrapper[4754]: I0105 20:18:32.576339 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/163085a0-0b43-4d21-aefc-ec28ba9c6e3f-manager-config\") pod \"loki-operator-controller-manager-5c7d94bdc4-k9968\" (UID: \"163085a0-0b43-4d21-aefc-ec28ba9c6e3f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c7d94bdc4-k9968" Jan 05 20:18:32 crc kubenswrapper[4754]: I0105 20:18:32.576360 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/163085a0-0b43-4d21-aefc-ec28ba9c6e3f-webhook-cert\") pod \"loki-operator-controller-manager-5c7d94bdc4-k9968\" (UID: \"163085a0-0b43-4d21-aefc-ec28ba9c6e3f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c7d94bdc4-k9968" Jan 05 20:18:32 crc kubenswrapper[4754]: I0105 20:18:32.576415 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/163085a0-0b43-4d21-aefc-ec28ba9c6e3f-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5c7d94bdc4-k9968\" (UID: \"163085a0-0b43-4d21-aefc-ec28ba9c6e3f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c7d94bdc4-k9968" Jan 05 20:18:32 crc kubenswrapper[4754]: I0105 20:18:32.677947 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"manager-config\" (UniqueName: \"kubernetes.io/configmap/163085a0-0b43-4d21-aefc-ec28ba9c6e3f-manager-config\") pod \"loki-operator-controller-manager-5c7d94bdc4-k9968\" (UID: \"163085a0-0b43-4d21-aefc-ec28ba9c6e3f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c7d94bdc4-k9968" Jan 05 20:18:32 crc kubenswrapper[4754]: I0105 20:18:32.678022 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/163085a0-0b43-4d21-aefc-ec28ba9c6e3f-webhook-cert\") pod \"loki-operator-controller-manager-5c7d94bdc4-k9968\" (UID: \"163085a0-0b43-4d21-aefc-ec28ba9c6e3f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c7d94bdc4-k9968" Jan 05 20:18:32 crc kubenswrapper[4754]: I0105 20:18:32.678118 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/163085a0-0b43-4d21-aefc-ec28ba9c6e3f-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5c7d94bdc4-k9968\" (UID: \"163085a0-0b43-4d21-aefc-ec28ba9c6e3f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c7d94bdc4-k9968" Jan 05 20:18:32 crc kubenswrapper[4754]: I0105 20:18:32.678182 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nglgv\" (UniqueName: \"kubernetes.io/projected/163085a0-0b43-4d21-aefc-ec28ba9c6e3f-kube-api-access-nglgv\") pod \"loki-operator-controller-manager-5c7d94bdc4-k9968\" (UID: \"163085a0-0b43-4d21-aefc-ec28ba9c6e3f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c7d94bdc4-k9968" Jan 05 20:18:32 crc kubenswrapper[4754]: I0105 20:18:32.678216 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/163085a0-0b43-4d21-aefc-ec28ba9c6e3f-apiservice-cert\") pod \"loki-operator-controller-manager-5c7d94bdc4-k9968\" (UID: 
\"163085a0-0b43-4d21-aefc-ec28ba9c6e3f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c7d94bdc4-k9968" Jan 05 20:18:32 crc kubenswrapper[4754]: I0105 20:18:32.679583 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/163085a0-0b43-4d21-aefc-ec28ba9c6e3f-manager-config\") pod \"loki-operator-controller-manager-5c7d94bdc4-k9968\" (UID: \"163085a0-0b43-4d21-aefc-ec28ba9c6e3f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c7d94bdc4-k9968" Jan 05 20:18:32 crc kubenswrapper[4754]: I0105 20:18:32.685015 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/163085a0-0b43-4d21-aefc-ec28ba9c6e3f-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5c7d94bdc4-k9968\" (UID: \"163085a0-0b43-4d21-aefc-ec28ba9c6e3f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c7d94bdc4-k9968" Jan 05 20:18:32 crc kubenswrapper[4754]: I0105 20:18:32.693141 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/163085a0-0b43-4d21-aefc-ec28ba9c6e3f-apiservice-cert\") pod \"loki-operator-controller-manager-5c7d94bdc4-k9968\" (UID: \"163085a0-0b43-4d21-aefc-ec28ba9c6e3f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c7d94bdc4-k9968" Jan 05 20:18:32 crc kubenswrapper[4754]: I0105 20:18:32.697741 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/163085a0-0b43-4d21-aefc-ec28ba9c6e3f-webhook-cert\") pod \"loki-operator-controller-manager-5c7d94bdc4-k9968\" (UID: \"163085a0-0b43-4d21-aefc-ec28ba9c6e3f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c7d94bdc4-k9968" Jan 05 20:18:32 crc kubenswrapper[4754]: I0105 20:18:32.699777 4754 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-nglgv\" (UniqueName: \"kubernetes.io/projected/163085a0-0b43-4d21-aefc-ec28ba9c6e3f-kube-api-access-nglgv\") pod \"loki-operator-controller-manager-5c7d94bdc4-k9968\" (UID: \"163085a0-0b43-4d21-aefc-ec28ba9c6e3f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c7d94bdc4-k9968" Jan 05 20:18:32 crc kubenswrapper[4754]: I0105 20:18:32.820340 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-5c7d94bdc4-k9968" Jan 05 20:18:33 crc kubenswrapper[4754]: I0105 20:18:33.392793 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5c7d94bdc4-k9968"] Jan 05 20:18:34 crc kubenswrapper[4754]: I0105 20:18:34.360814 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5c7d94bdc4-k9968" event={"ID":"163085a0-0b43-4d21-aefc-ec28ba9c6e3f","Type":"ContainerStarted","Data":"7a87e5f5b243ade46cfae7ac945892b57d1ccc0a88f762ed291e3ce0d32dbcca"} Jan 05 20:18:37 crc kubenswrapper[4754]: I0105 20:18:37.688924 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-79cf69ddc8-hj2gf"] Jan 05 20:18:37 crc kubenswrapper[4754]: I0105 20:18:37.690546 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-hj2gf" Jan 05 20:18:37 crc kubenswrapper[4754]: I0105 20:18:37.693155 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Jan 05 20:18:37 crc kubenswrapper[4754]: I0105 20:18:37.693696 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Jan 05 20:18:37 crc kubenswrapper[4754]: I0105 20:18:37.694717 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-4jdwl" Jan 05 20:18:37 crc kubenswrapper[4754]: I0105 20:18:37.711747 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-79cf69ddc8-hj2gf"] Jan 05 20:18:37 crc kubenswrapper[4754]: I0105 20:18:37.790528 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t9hf\" (UniqueName: \"kubernetes.io/projected/e58a5f7b-2ae6-44ca-a299-b99e1dc283fe-kube-api-access-6t9hf\") pod \"cluster-logging-operator-79cf69ddc8-hj2gf\" (UID: \"e58a5f7b-2ae6-44ca-a299-b99e1dc283fe\") " pod="openshift-logging/cluster-logging-operator-79cf69ddc8-hj2gf" Jan 05 20:18:37 crc kubenswrapper[4754]: I0105 20:18:37.891730 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6t9hf\" (UniqueName: \"kubernetes.io/projected/e58a5f7b-2ae6-44ca-a299-b99e1dc283fe-kube-api-access-6t9hf\") pod \"cluster-logging-operator-79cf69ddc8-hj2gf\" (UID: \"e58a5f7b-2ae6-44ca-a299-b99e1dc283fe\") " pod="openshift-logging/cluster-logging-operator-79cf69ddc8-hj2gf" Jan 05 20:18:37 crc kubenswrapper[4754]: I0105 20:18:37.928962 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t9hf\" (UniqueName: \"kubernetes.io/projected/e58a5f7b-2ae6-44ca-a299-b99e1dc283fe-kube-api-access-6t9hf\") pod 
\"cluster-logging-operator-79cf69ddc8-hj2gf\" (UID: \"e58a5f7b-2ae6-44ca-a299-b99e1dc283fe\") " pod="openshift-logging/cluster-logging-operator-79cf69ddc8-hj2gf" Jan 05 20:18:38 crc kubenswrapper[4754]: I0105 20:18:38.018074 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-hj2gf" Jan 05 20:18:39 crc kubenswrapper[4754]: I0105 20:18:39.263715 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-79cf69ddc8-hj2gf"] Jan 05 20:18:39 crc kubenswrapper[4754]: W0105 20:18:39.274680 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode58a5f7b_2ae6_44ca_a299_b99e1dc283fe.slice/crio-cdb3689595d4772d6a89fa65e29f758bdd30cc022efb937e28b8a62dcb82c744 WatchSource:0}: Error finding container cdb3689595d4772d6a89fa65e29f758bdd30cc022efb937e28b8a62dcb82c744: Status 404 returned error can't find the container with id cdb3689595d4772d6a89fa65e29f758bdd30cc022efb937e28b8a62dcb82c744 Jan 05 20:18:39 crc kubenswrapper[4754]: I0105 20:18:39.397917 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5c7d94bdc4-k9968" event={"ID":"163085a0-0b43-4d21-aefc-ec28ba9c6e3f","Type":"ContainerStarted","Data":"10e115373b34e09eeb566f287e46ef6ec8653509471c731713d9402b9a1de224"} Jan 05 20:18:39 crc kubenswrapper[4754]: I0105 20:18:39.399128 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-hj2gf" event={"ID":"e58a5f7b-2ae6-44ca-a299-b99e1dc283fe","Type":"ContainerStarted","Data":"cdb3689595d4772d6a89fa65e29f758bdd30cc022efb937e28b8a62dcb82c744"} Jan 05 20:18:44 crc kubenswrapper[4754]: I0105 20:18:44.103755 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qhnx6"] Jan 05 20:18:44 crc kubenswrapper[4754]: I0105 
20:18:44.105353 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qhnx6" Jan 05 20:18:44 crc kubenswrapper[4754]: I0105 20:18:44.120888 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qhnx6"] Jan 05 20:18:44 crc kubenswrapper[4754]: I0105 20:18:44.221862 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc7e879c-fcf1-4574-ae78-222ad322e725-utilities\") pod \"certified-operators-qhnx6\" (UID: \"dc7e879c-fcf1-4574-ae78-222ad322e725\") " pod="openshift-marketplace/certified-operators-qhnx6" Jan 05 20:18:44 crc kubenswrapper[4754]: I0105 20:18:44.222161 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq9l6\" (UniqueName: \"kubernetes.io/projected/dc7e879c-fcf1-4574-ae78-222ad322e725-kube-api-access-bq9l6\") pod \"certified-operators-qhnx6\" (UID: \"dc7e879c-fcf1-4574-ae78-222ad322e725\") " pod="openshift-marketplace/certified-operators-qhnx6" Jan 05 20:18:44 crc kubenswrapper[4754]: I0105 20:18:44.222188 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc7e879c-fcf1-4574-ae78-222ad322e725-catalog-content\") pod \"certified-operators-qhnx6\" (UID: \"dc7e879c-fcf1-4574-ae78-222ad322e725\") " pod="openshift-marketplace/certified-operators-qhnx6" Jan 05 20:18:44 crc kubenswrapper[4754]: I0105 20:18:44.323573 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bq9l6\" (UniqueName: \"kubernetes.io/projected/dc7e879c-fcf1-4574-ae78-222ad322e725-kube-api-access-bq9l6\") pod \"certified-operators-qhnx6\" (UID: \"dc7e879c-fcf1-4574-ae78-222ad322e725\") " pod="openshift-marketplace/certified-operators-qhnx6" Jan 05 20:18:44 crc 
kubenswrapper[4754]: I0105 20:18:44.323916 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc7e879c-fcf1-4574-ae78-222ad322e725-catalog-content\") pod \"certified-operators-qhnx6\" (UID: \"dc7e879c-fcf1-4574-ae78-222ad322e725\") " pod="openshift-marketplace/certified-operators-qhnx6" Jan 05 20:18:44 crc kubenswrapper[4754]: I0105 20:18:44.323978 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc7e879c-fcf1-4574-ae78-222ad322e725-utilities\") pod \"certified-operators-qhnx6\" (UID: \"dc7e879c-fcf1-4574-ae78-222ad322e725\") " pod="openshift-marketplace/certified-operators-qhnx6" Jan 05 20:18:44 crc kubenswrapper[4754]: I0105 20:18:44.324421 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc7e879c-fcf1-4574-ae78-222ad322e725-utilities\") pod \"certified-operators-qhnx6\" (UID: \"dc7e879c-fcf1-4574-ae78-222ad322e725\") " pod="openshift-marketplace/certified-operators-qhnx6" Jan 05 20:18:44 crc kubenswrapper[4754]: I0105 20:18:44.324624 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc7e879c-fcf1-4574-ae78-222ad322e725-catalog-content\") pod \"certified-operators-qhnx6\" (UID: \"dc7e879c-fcf1-4574-ae78-222ad322e725\") " pod="openshift-marketplace/certified-operators-qhnx6" Jan 05 20:18:44 crc kubenswrapper[4754]: I0105 20:18:44.343020 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq9l6\" (UniqueName: \"kubernetes.io/projected/dc7e879c-fcf1-4574-ae78-222ad322e725-kube-api-access-bq9l6\") pod \"certified-operators-qhnx6\" (UID: \"dc7e879c-fcf1-4574-ae78-222ad322e725\") " pod="openshift-marketplace/certified-operators-qhnx6" Jan 05 20:18:44 crc kubenswrapper[4754]: I0105 20:18:44.427502 4754 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qhnx6" Jan 05 20:18:48 crc kubenswrapper[4754]: I0105 20:18:48.684730 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pb6x5"] Jan 05 20:18:48 crc kubenswrapper[4754]: I0105 20:18:48.695115 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pb6x5" Jan 05 20:18:48 crc kubenswrapper[4754]: I0105 20:18:48.701467 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pb6x5"] Jan 05 20:18:48 crc kubenswrapper[4754]: I0105 20:18:48.790869 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8kk8\" (UniqueName: \"kubernetes.io/projected/cf45fc46-ce2c-4415-884a-8b64733632ed-kube-api-access-c8kk8\") pod \"redhat-marketplace-pb6x5\" (UID: \"cf45fc46-ce2c-4415-884a-8b64733632ed\") " pod="openshift-marketplace/redhat-marketplace-pb6x5" Jan 05 20:18:48 crc kubenswrapper[4754]: I0105 20:18:48.790961 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf45fc46-ce2c-4415-884a-8b64733632ed-catalog-content\") pod \"redhat-marketplace-pb6x5\" (UID: \"cf45fc46-ce2c-4415-884a-8b64733632ed\") " pod="openshift-marketplace/redhat-marketplace-pb6x5" Jan 05 20:18:48 crc kubenswrapper[4754]: I0105 20:18:48.791192 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf45fc46-ce2c-4415-884a-8b64733632ed-utilities\") pod \"redhat-marketplace-pb6x5\" (UID: \"cf45fc46-ce2c-4415-884a-8b64733632ed\") " pod="openshift-marketplace/redhat-marketplace-pb6x5" Jan 05 20:18:48 crc kubenswrapper[4754]: I0105 20:18:48.893853 4754 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf45fc46-ce2c-4415-884a-8b64733632ed-catalog-content\") pod \"redhat-marketplace-pb6x5\" (UID: \"cf45fc46-ce2c-4415-884a-8b64733632ed\") " pod="openshift-marketplace/redhat-marketplace-pb6x5" Jan 05 20:18:48 crc kubenswrapper[4754]: I0105 20:18:48.893972 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf45fc46-ce2c-4415-884a-8b64733632ed-utilities\") pod \"redhat-marketplace-pb6x5\" (UID: \"cf45fc46-ce2c-4415-884a-8b64733632ed\") " pod="openshift-marketplace/redhat-marketplace-pb6x5" Jan 05 20:18:48 crc kubenswrapper[4754]: I0105 20:18:48.894093 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8kk8\" (UniqueName: \"kubernetes.io/projected/cf45fc46-ce2c-4415-884a-8b64733632ed-kube-api-access-c8kk8\") pod \"redhat-marketplace-pb6x5\" (UID: \"cf45fc46-ce2c-4415-884a-8b64733632ed\") " pod="openshift-marketplace/redhat-marketplace-pb6x5" Jan 05 20:18:48 crc kubenswrapper[4754]: I0105 20:18:48.894407 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf45fc46-ce2c-4415-884a-8b64733632ed-catalog-content\") pod \"redhat-marketplace-pb6x5\" (UID: \"cf45fc46-ce2c-4415-884a-8b64733632ed\") " pod="openshift-marketplace/redhat-marketplace-pb6x5" Jan 05 20:18:48 crc kubenswrapper[4754]: I0105 20:18:48.894465 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf45fc46-ce2c-4415-884a-8b64733632ed-utilities\") pod \"redhat-marketplace-pb6x5\" (UID: \"cf45fc46-ce2c-4415-884a-8b64733632ed\") " pod="openshift-marketplace/redhat-marketplace-pb6x5" Jan 05 20:18:48 crc kubenswrapper[4754]: I0105 20:18:48.929224 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-c8kk8\" (UniqueName: \"kubernetes.io/projected/cf45fc46-ce2c-4415-884a-8b64733632ed-kube-api-access-c8kk8\") pod \"redhat-marketplace-pb6x5\" (UID: \"cf45fc46-ce2c-4415-884a-8b64733632ed\") " pod="openshift-marketplace/redhat-marketplace-pb6x5" Jan 05 20:18:49 crc kubenswrapper[4754]: I0105 20:18:49.049008 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pb6x5" Jan 05 20:18:49 crc kubenswrapper[4754]: I0105 20:18:49.823041 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pb6x5"] Jan 05 20:18:49 crc kubenswrapper[4754]: I0105 20:18:49.957370 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qhnx6"] Jan 05 20:18:50 crc kubenswrapper[4754]: I0105 20:18:50.480331 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-hj2gf" event={"ID":"e58a5f7b-2ae6-44ca-a299-b99e1dc283fe","Type":"ContainerStarted","Data":"62b476fbbeec5ddb3b4ad67ef007e3995817a4dbb5303d6de42efbe51792c1e4"} Jan 05 20:18:50 crc kubenswrapper[4754]: I0105 20:18:50.483587 4754 generic.go:334] "Generic (PLEG): container finished" podID="cf45fc46-ce2c-4415-884a-8b64733632ed" containerID="93aa303e669388a0614216a877cda1646bcd649cbe8f93668fd6fec0dcb17733" exitCode=0 Jan 05 20:18:50 crc kubenswrapper[4754]: I0105 20:18:50.483686 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pb6x5" event={"ID":"cf45fc46-ce2c-4415-884a-8b64733632ed","Type":"ContainerDied","Data":"93aa303e669388a0614216a877cda1646bcd649cbe8f93668fd6fec0dcb17733"} Jan 05 20:18:50 crc kubenswrapper[4754]: I0105 20:18:50.483719 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pb6x5" 
event={"ID":"cf45fc46-ce2c-4415-884a-8b64733632ed","Type":"ContainerStarted","Data":"7a9ea4c19c3db971c935c85fe6cc5d51a0b154b4356823e8abecf525986d69ac"} Jan 05 20:18:50 crc kubenswrapper[4754]: I0105 20:18:50.487225 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5c7d94bdc4-k9968" event={"ID":"163085a0-0b43-4d21-aefc-ec28ba9c6e3f","Type":"ContainerStarted","Data":"ac5470d86a84ce1445c3f13e85a85f80c731028cf640f49be53bd7978c678603"} Jan 05 20:18:50 crc kubenswrapper[4754]: I0105 20:18:50.487462 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-5c7d94bdc4-k9968" Jan 05 20:18:50 crc kubenswrapper[4754]: I0105 20:18:50.489769 4754 generic.go:334] "Generic (PLEG): container finished" podID="dc7e879c-fcf1-4574-ae78-222ad322e725" containerID="02089294e4bd9e828ffd6949fb0b72cf0fa87ede7e178a935e0df3a17733c62e" exitCode=0 Jan 05 20:18:50 crc kubenswrapper[4754]: I0105 20:18:50.489802 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qhnx6" event={"ID":"dc7e879c-fcf1-4574-ae78-222ad322e725","Type":"ContainerDied","Data":"02089294e4bd9e828ffd6949fb0b72cf0fa87ede7e178a935e0df3a17733c62e"} Jan 05 20:18:50 crc kubenswrapper[4754]: I0105 20:18:50.489823 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qhnx6" event={"ID":"dc7e879c-fcf1-4574-ae78-222ad322e725","Type":"ContainerStarted","Data":"f004c69451eea8070a4fb791f176dba2d60956a49d7b0fc551acd29e40ce48fe"} Jan 05 20:18:50 crc kubenswrapper[4754]: I0105 20:18:50.494165 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-5c7d94bdc4-k9968" Jan 05 20:18:50 crc kubenswrapper[4754]: I0105 20:18:50.515552 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-logging/cluster-logging-operator-79cf69ddc8-hj2gf" podStartSLOduration=3.302059377 podStartE2EDuration="13.515530567s" podCreationTimestamp="2026-01-05 20:18:37 +0000 UTC" firstStartedPulling="2026-01-05 20:18:39.278359489 +0000 UTC m=+805.987543373" lastFinishedPulling="2026-01-05 20:18:49.491830689 +0000 UTC m=+816.201014563" observedRunningTime="2026-01-05 20:18:50.511480912 +0000 UTC m=+817.220664796" watchObservedRunningTime="2026-01-05 20:18:50.515530567 +0000 UTC m=+817.224714451" Jan 05 20:18:50 crc kubenswrapper[4754]: I0105 20:18:50.599451 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-5c7d94bdc4-k9968" podStartSLOduration=2.518412064 podStartE2EDuration="18.599419124s" podCreationTimestamp="2026-01-05 20:18:32 +0000 UTC" firstStartedPulling="2026-01-05 20:18:33.411672741 +0000 UTC m=+800.120856655" lastFinishedPulling="2026-01-05 20:18:49.492679841 +0000 UTC m=+816.201863715" observedRunningTime="2026-01-05 20:18:50.587702169 +0000 UTC m=+817.296886043" watchObservedRunningTime="2026-01-05 20:18:50.599419124 +0000 UTC m=+817.308602998" Jan 05 20:18:51 crc kubenswrapper[4754]: I0105 20:18:51.498117 4754 generic.go:334] "Generic (PLEG): container finished" podID="cf45fc46-ce2c-4415-884a-8b64733632ed" containerID="58b2a5b128dbc8f6bd0deab72cec612d0efd9d3c7b80a9cd8ea7fb21b8dc4d53" exitCode=0 Jan 05 20:18:51 crc kubenswrapper[4754]: I0105 20:18:51.498303 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pb6x5" event={"ID":"cf45fc46-ce2c-4415-884a-8b64733632ed","Type":"ContainerDied","Data":"58b2a5b128dbc8f6bd0deab72cec612d0efd9d3c7b80a9cd8ea7fb21b8dc4d53"} Jan 05 20:18:51 crc kubenswrapper[4754]: I0105 20:18:51.500503 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qhnx6" 
event={"ID":"dc7e879c-fcf1-4574-ae78-222ad322e725","Type":"ContainerStarted","Data":"ba6b642ee4b4cf4dadad02348026fafcff3b204200787ea9bce2b252b271e526"} Jan 05 20:18:52 crc kubenswrapper[4754]: I0105 20:18:52.525395 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pb6x5" event={"ID":"cf45fc46-ce2c-4415-884a-8b64733632ed","Type":"ContainerStarted","Data":"eef9eed7254d5e66ac516c79db84db74f118b283205a665bfafb7e95a92142ed"} Jan 05 20:18:52 crc kubenswrapper[4754]: I0105 20:18:52.529132 4754 generic.go:334] "Generic (PLEG): container finished" podID="dc7e879c-fcf1-4574-ae78-222ad322e725" containerID="ba6b642ee4b4cf4dadad02348026fafcff3b204200787ea9bce2b252b271e526" exitCode=0 Jan 05 20:18:52 crc kubenswrapper[4754]: I0105 20:18:52.529282 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qhnx6" event={"ID":"dc7e879c-fcf1-4574-ae78-222ad322e725","Type":"ContainerDied","Data":"ba6b642ee4b4cf4dadad02348026fafcff3b204200787ea9bce2b252b271e526"} Jan 05 20:18:52 crc kubenswrapper[4754]: I0105 20:18:52.545646 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pb6x5" podStartSLOduration=3.138565299 podStartE2EDuration="4.545631442s" podCreationTimestamp="2026-01-05 20:18:48 +0000 UTC" firstStartedPulling="2026-01-05 20:18:50.485542655 +0000 UTC m=+817.194726539" lastFinishedPulling="2026-01-05 20:18:51.892608808 +0000 UTC m=+818.601792682" observedRunningTime="2026-01-05 20:18:52.54211249 +0000 UTC m=+819.251296364" watchObservedRunningTime="2026-01-05 20:18:52.545631442 +0000 UTC m=+819.254815316" Jan 05 20:18:53 crc kubenswrapper[4754]: I0105 20:18:53.538380 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qhnx6" 
event={"ID":"dc7e879c-fcf1-4574-ae78-222ad322e725","Type":"ContainerStarted","Data":"29877748cfbbf9f26c60b795be6d470055bbeb566fce798673ae2a5aa97e1b3b"} Jan 05 20:18:53 crc kubenswrapper[4754]: I0105 20:18:53.569201 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qhnx6" podStartSLOduration=7.107781538 podStartE2EDuration="9.569181977s" podCreationTimestamp="2026-01-05 20:18:44 +0000 UTC" firstStartedPulling="2026-01-05 20:18:50.491274085 +0000 UTC m=+817.200457969" lastFinishedPulling="2026-01-05 20:18:52.952674524 +0000 UTC m=+819.661858408" observedRunningTime="2026-01-05 20:18:53.565390668 +0000 UTC m=+820.274574552" watchObservedRunningTime="2026-01-05 20:18:53.569181977 +0000 UTC m=+820.278365851" Jan 05 20:18:54 crc kubenswrapper[4754]: I0105 20:18:54.427921 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qhnx6" Jan 05 20:18:54 crc kubenswrapper[4754]: I0105 20:18:54.428084 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qhnx6" Jan 05 20:18:54 crc kubenswrapper[4754]: I0105 20:18:54.589596 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Jan 05 20:18:54 crc kubenswrapper[4754]: I0105 20:18:54.590885 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="minio-dev/minio" Jan 05 20:18:54 crc kubenswrapper[4754]: I0105 20:18:54.594937 4754 reflector.go:368] Caches populated for *v1.Secret from object-"minio-dev"/"default-dockercfg-9jt6h" Jan 05 20:18:54 crc kubenswrapper[4754]: I0105 20:18:54.595069 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Jan 05 20:18:54 crc kubenswrapper[4754]: I0105 20:18:54.595491 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Jan 05 20:18:54 crc kubenswrapper[4754]: I0105 20:18:54.608654 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Jan 05 20:18:54 crc kubenswrapper[4754]: I0105 20:18:54.692919 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-fb917535-b85b-411d-bf84-71901d1b38cb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fb917535-b85b-411d-bf84-71901d1b38cb\") pod \"minio\" (UID: \"97fb8172-be9b-4d06-a662-9664c7fe4ba6\") " pod="minio-dev/minio" Jan 05 20:18:54 crc kubenswrapper[4754]: I0105 20:18:54.692989 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mczrb\" (UniqueName: \"kubernetes.io/projected/97fb8172-be9b-4d06-a662-9664c7fe4ba6-kube-api-access-mczrb\") pod \"minio\" (UID: \"97fb8172-be9b-4d06-a662-9664c7fe4ba6\") " pod="minio-dev/minio" Jan 05 20:18:54 crc kubenswrapper[4754]: I0105 20:18:54.795099 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-fb917535-b85b-411d-bf84-71901d1b38cb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fb917535-b85b-411d-bf84-71901d1b38cb\") pod \"minio\" (UID: \"97fb8172-be9b-4d06-a662-9664c7fe4ba6\") " pod="minio-dev/minio" Jan 05 20:18:54 crc kubenswrapper[4754]: I0105 20:18:54.795172 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-mczrb\" (UniqueName: \"kubernetes.io/projected/97fb8172-be9b-4d06-a662-9664c7fe4ba6-kube-api-access-mczrb\") pod \"minio\" (UID: \"97fb8172-be9b-4d06-a662-9664c7fe4ba6\") " pod="minio-dev/minio" Jan 05 20:18:54 crc kubenswrapper[4754]: I0105 20:18:54.798369 4754 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 05 20:18:54 crc kubenswrapper[4754]: I0105 20:18:54.798400 4754 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-fb917535-b85b-411d-bf84-71901d1b38cb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fb917535-b85b-411d-bf84-71901d1b38cb\") pod \"minio\" (UID: \"97fb8172-be9b-4d06-a662-9664c7fe4ba6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6dc1512c7905f14f9b2c9e15b8703bec569a0e8371f7633d007270e8f004e9cc/globalmount\"" pod="minio-dev/minio" Jan 05 20:18:54 crc kubenswrapper[4754]: I0105 20:18:54.831022 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-fb917535-b85b-411d-bf84-71901d1b38cb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fb917535-b85b-411d-bf84-71901d1b38cb\") pod \"minio\" (UID: \"97fb8172-be9b-4d06-a662-9664c7fe4ba6\") " pod="minio-dev/minio" Jan 05 20:18:54 crc kubenswrapper[4754]: I0105 20:18:54.831905 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mczrb\" (UniqueName: \"kubernetes.io/projected/97fb8172-be9b-4d06-a662-9664c7fe4ba6-kube-api-access-mczrb\") pod \"minio\" (UID: \"97fb8172-be9b-4d06-a662-9664c7fe4ba6\") " pod="minio-dev/minio" Jan 05 20:18:54 crc kubenswrapper[4754]: I0105 20:18:54.911903 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="minio-dev/minio" Jan 05 20:18:55 crc kubenswrapper[4754]: I0105 20:18:55.400596 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Jan 05 20:18:55 crc kubenswrapper[4754]: W0105 20:18:55.412589 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97fb8172_be9b_4d06_a662_9664c7fe4ba6.slice/crio-2cd11b6c702308ff25a8f02c54e09fead697eba76087ad45e7e6f28ae8bfc19f WatchSource:0}: Error finding container 2cd11b6c702308ff25a8f02c54e09fead697eba76087ad45e7e6f28ae8bfc19f: Status 404 returned error can't find the container with id 2cd11b6c702308ff25a8f02c54e09fead697eba76087ad45e7e6f28ae8bfc19f Jan 05 20:18:55 crc kubenswrapper[4754]: I0105 20:18:55.469562 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-qhnx6" podUID="dc7e879c-fcf1-4574-ae78-222ad322e725" containerName="registry-server" probeResult="failure" output=< Jan 05 20:18:55 crc kubenswrapper[4754]: timeout: failed to connect service ":50051" within 1s Jan 05 20:18:55 crc kubenswrapper[4754]: > Jan 05 20:18:55 crc kubenswrapper[4754]: I0105 20:18:55.551276 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"97fb8172-be9b-4d06-a662-9664c7fe4ba6","Type":"ContainerStarted","Data":"2cd11b6c702308ff25a8f02c54e09fead697eba76087ad45e7e6f28ae8bfc19f"} Jan 05 20:18:59 crc kubenswrapper[4754]: I0105 20:18:59.049829 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pb6x5" Jan 05 20:18:59 crc kubenswrapper[4754]: I0105 20:18:59.050371 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pb6x5" Jan 05 20:18:59 crc kubenswrapper[4754]: I0105 20:18:59.102748 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-pb6x5" Jan 05 20:18:59 crc kubenswrapper[4754]: I0105 20:18:59.683148 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pb6x5" Jan 05 20:19:00 crc kubenswrapper[4754]: I0105 20:19:00.606005 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"97fb8172-be9b-4d06-a662-9664c7fe4ba6","Type":"ContainerStarted","Data":"b6d2f90753b2b5365762957cc7252080fdf9fd84751ba330cff02e63beef3f44"} Jan 05 20:19:00 crc kubenswrapper[4754]: I0105 20:19:00.625714 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=3.699954777 podStartE2EDuration="8.625694123s" podCreationTimestamp="2026-01-05 20:18:52 +0000 UTC" firstStartedPulling="2026-01-05 20:18:55.41470491 +0000 UTC m=+822.123888784" lastFinishedPulling="2026-01-05 20:19:00.340444246 +0000 UTC m=+827.049628130" observedRunningTime="2026-01-05 20:19:00.624672746 +0000 UTC m=+827.333856630" watchObservedRunningTime="2026-01-05 20:19:00.625694123 +0000 UTC m=+827.334878007" Jan 05 20:19:02 crc kubenswrapper[4754]: I0105 20:19:02.083364 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pb6x5"] Jan 05 20:19:02 crc kubenswrapper[4754]: I0105 20:19:02.083868 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pb6x5" podUID="cf45fc46-ce2c-4415-884a-8b64733632ed" containerName="registry-server" containerID="cri-o://eef9eed7254d5e66ac516c79db84db74f118b283205a665bfafb7e95a92142ed" gracePeriod=2 Jan 05 20:19:03 crc kubenswrapper[4754]: I0105 20:19:03.615896 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pb6x5" Jan 05 20:19:03 crc kubenswrapper[4754]: I0105 20:19:03.644224 4754 generic.go:334] "Generic (PLEG): container finished" podID="cf45fc46-ce2c-4415-884a-8b64733632ed" containerID="eef9eed7254d5e66ac516c79db84db74f118b283205a665bfafb7e95a92142ed" exitCode=0 Jan 05 20:19:03 crc kubenswrapper[4754]: I0105 20:19:03.644313 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pb6x5" event={"ID":"cf45fc46-ce2c-4415-884a-8b64733632ed","Type":"ContainerDied","Data":"eef9eed7254d5e66ac516c79db84db74f118b283205a665bfafb7e95a92142ed"} Jan 05 20:19:03 crc kubenswrapper[4754]: I0105 20:19:03.644390 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pb6x5" event={"ID":"cf45fc46-ce2c-4415-884a-8b64733632ed","Type":"ContainerDied","Data":"7a9ea4c19c3db971c935c85fe6cc5d51a0b154b4356823e8abecf525986d69ac"} Jan 05 20:19:03 crc kubenswrapper[4754]: I0105 20:19:03.644416 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pb6x5" Jan 05 20:19:03 crc kubenswrapper[4754]: I0105 20:19:03.644445 4754 scope.go:117] "RemoveContainer" containerID="eef9eed7254d5e66ac516c79db84db74f118b283205a665bfafb7e95a92142ed" Jan 05 20:19:03 crc kubenswrapper[4754]: I0105 20:19:03.694504 4754 scope.go:117] "RemoveContainer" containerID="58b2a5b128dbc8f6bd0deab72cec612d0efd9d3c7b80a9cd8ea7fb21b8dc4d53" Jan 05 20:19:03 crc kubenswrapper[4754]: I0105 20:19:03.712017 4754 scope.go:117] "RemoveContainer" containerID="93aa303e669388a0614216a877cda1646bcd649cbe8f93668fd6fec0dcb17733" Jan 05 20:19:03 crc kubenswrapper[4754]: I0105 20:19:03.749842 4754 scope.go:117] "RemoveContainer" containerID="eef9eed7254d5e66ac516c79db84db74f118b283205a665bfafb7e95a92142ed" Jan 05 20:19:03 crc kubenswrapper[4754]: E0105 20:19:03.750733 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eef9eed7254d5e66ac516c79db84db74f118b283205a665bfafb7e95a92142ed\": container with ID starting with eef9eed7254d5e66ac516c79db84db74f118b283205a665bfafb7e95a92142ed not found: ID does not exist" containerID="eef9eed7254d5e66ac516c79db84db74f118b283205a665bfafb7e95a92142ed" Jan 05 20:19:03 crc kubenswrapper[4754]: I0105 20:19:03.750792 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eef9eed7254d5e66ac516c79db84db74f118b283205a665bfafb7e95a92142ed"} err="failed to get container status \"eef9eed7254d5e66ac516c79db84db74f118b283205a665bfafb7e95a92142ed\": rpc error: code = NotFound desc = could not find container \"eef9eed7254d5e66ac516c79db84db74f118b283205a665bfafb7e95a92142ed\": container with ID starting with eef9eed7254d5e66ac516c79db84db74f118b283205a665bfafb7e95a92142ed not found: ID does not exist" Jan 05 20:19:03 crc kubenswrapper[4754]: I0105 20:19:03.750820 4754 scope.go:117] "RemoveContainer" 
containerID="58b2a5b128dbc8f6bd0deab72cec612d0efd9d3c7b80a9cd8ea7fb21b8dc4d53" Jan 05 20:19:03 crc kubenswrapper[4754]: E0105 20:19:03.751388 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58b2a5b128dbc8f6bd0deab72cec612d0efd9d3c7b80a9cd8ea7fb21b8dc4d53\": container with ID starting with 58b2a5b128dbc8f6bd0deab72cec612d0efd9d3c7b80a9cd8ea7fb21b8dc4d53 not found: ID does not exist" containerID="58b2a5b128dbc8f6bd0deab72cec612d0efd9d3c7b80a9cd8ea7fb21b8dc4d53" Jan 05 20:19:03 crc kubenswrapper[4754]: I0105 20:19:03.751456 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58b2a5b128dbc8f6bd0deab72cec612d0efd9d3c7b80a9cd8ea7fb21b8dc4d53"} err="failed to get container status \"58b2a5b128dbc8f6bd0deab72cec612d0efd9d3c7b80a9cd8ea7fb21b8dc4d53\": rpc error: code = NotFound desc = could not find container \"58b2a5b128dbc8f6bd0deab72cec612d0efd9d3c7b80a9cd8ea7fb21b8dc4d53\": container with ID starting with 58b2a5b128dbc8f6bd0deab72cec612d0efd9d3c7b80a9cd8ea7fb21b8dc4d53 not found: ID does not exist" Jan 05 20:19:03 crc kubenswrapper[4754]: I0105 20:19:03.751504 4754 scope.go:117] "RemoveContainer" containerID="93aa303e669388a0614216a877cda1646bcd649cbe8f93668fd6fec0dcb17733" Jan 05 20:19:03 crc kubenswrapper[4754]: E0105 20:19:03.751968 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93aa303e669388a0614216a877cda1646bcd649cbe8f93668fd6fec0dcb17733\": container with ID starting with 93aa303e669388a0614216a877cda1646bcd649cbe8f93668fd6fec0dcb17733 not found: ID does not exist" containerID="93aa303e669388a0614216a877cda1646bcd649cbe8f93668fd6fec0dcb17733" Jan 05 20:19:03 crc kubenswrapper[4754]: I0105 20:19:03.752136 4754 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"93aa303e669388a0614216a877cda1646bcd649cbe8f93668fd6fec0dcb17733"} err="failed to get container status \"93aa303e669388a0614216a877cda1646bcd649cbe8f93668fd6fec0dcb17733\": rpc error: code = NotFound desc = could not find container \"93aa303e669388a0614216a877cda1646bcd649cbe8f93668fd6fec0dcb17733\": container with ID starting with 93aa303e669388a0614216a877cda1646bcd649cbe8f93668fd6fec0dcb17733 not found: ID does not exist" Jan 05 20:19:03 crc kubenswrapper[4754]: I0105 20:19:03.768849 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8kk8\" (UniqueName: \"kubernetes.io/projected/cf45fc46-ce2c-4415-884a-8b64733632ed-kube-api-access-c8kk8\") pod \"cf45fc46-ce2c-4415-884a-8b64733632ed\" (UID: \"cf45fc46-ce2c-4415-884a-8b64733632ed\") " Jan 05 20:19:03 crc kubenswrapper[4754]: I0105 20:19:03.768963 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf45fc46-ce2c-4415-884a-8b64733632ed-utilities\") pod \"cf45fc46-ce2c-4415-884a-8b64733632ed\" (UID: \"cf45fc46-ce2c-4415-884a-8b64733632ed\") " Jan 05 20:19:03 crc kubenswrapper[4754]: I0105 20:19:03.769118 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf45fc46-ce2c-4415-884a-8b64733632ed-catalog-content\") pod \"cf45fc46-ce2c-4415-884a-8b64733632ed\" (UID: \"cf45fc46-ce2c-4415-884a-8b64733632ed\") " Jan 05 20:19:03 crc kubenswrapper[4754]: I0105 20:19:03.771390 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf45fc46-ce2c-4415-884a-8b64733632ed-utilities" (OuterVolumeSpecName: "utilities") pod "cf45fc46-ce2c-4415-884a-8b64733632ed" (UID: "cf45fc46-ce2c-4415-884a-8b64733632ed"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:19:03 crc kubenswrapper[4754]: I0105 20:19:03.776167 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf45fc46-ce2c-4415-884a-8b64733632ed-kube-api-access-c8kk8" (OuterVolumeSpecName: "kube-api-access-c8kk8") pod "cf45fc46-ce2c-4415-884a-8b64733632ed" (UID: "cf45fc46-ce2c-4415-884a-8b64733632ed"). InnerVolumeSpecName "kube-api-access-c8kk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:19:03 crc kubenswrapper[4754]: I0105 20:19:03.817828 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf45fc46-ce2c-4415-884a-8b64733632ed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cf45fc46-ce2c-4415-884a-8b64733632ed" (UID: "cf45fc46-ce2c-4415-884a-8b64733632ed"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:19:03 crc kubenswrapper[4754]: I0105 20:19:03.871267 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8kk8\" (UniqueName: \"kubernetes.io/projected/cf45fc46-ce2c-4415-884a-8b64733632ed-kube-api-access-c8kk8\") on node \"crc\" DevicePath \"\"" Jan 05 20:19:03 crc kubenswrapper[4754]: I0105 20:19:03.871360 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf45fc46-ce2c-4415-884a-8b64733632ed-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 20:19:03 crc kubenswrapper[4754]: I0105 20:19:03.871403 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf45fc46-ce2c-4415-884a-8b64733632ed-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 20:19:04 crc kubenswrapper[4754]: I0105 20:19:04.003376 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pb6x5"] Jan 05 20:19:04 crc kubenswrapper[4754]: I0105 
20:19:04.011269 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pb6x5"] Jan 05 20:19:04 crc kubenswrapper[4754]: I0105 20:19:04.471955 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qhnx6" Jan 05 20:19:04 crc kubenswrapper[4754]: I0105 20:19:04.508474 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qhnx6" Jan 05 20:19:05 crc kubenswrapper[4754]: I0105 20:19:05.607523 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf45fc46-ce2c-4415-884a-8b64733632ed" path="/var/lib/kubelet/pods/cf45fc46-ce2c-4415-884a-8b64733632ed/volumes" Jan 05 20:19:05 crc kubenswrapper[4754]: I0105 20:19:05.608673 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-5f678c8dd6-pd7zx"] Jan 05 20:19:05 crc kubenswrapper[4754]: E0105 20:19:05.609039 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf45fc46-ce2c-4415-884a-8b64733632ed" containerName="registry-server" Jan 05 20:19:05 crc kubenswrapper[4754]: I0105 20:19:05.609063 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf45fc46-ce2c-4415-884a-8b64733632ed" containerName="registry-server" Jan 05 20:19:05 crc kubenswrapper[4754]: E0105 20:19:05.609089 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf45fc46-ce2c-4415-884a-8b64733632ed" containerName="extract-utilities" Jan 05 20:19:05 crc kubenswrapper[4754]: I0105 20:19:05.609103 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf45fc46-ce2c-4415-884a-8b64733632ed" containerName="extract-utilities" Jan 05 20:19:05 crc kubenswrapper[4754]: E0105 20:19:05.609129 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf45fc46-ce2c-4415-884a-8b64733632ed" containerName="extract-content" Jan 05 20:19:05 crc kubenswrapper[4754]: I0105 
20:19:05.609143 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf45fc46-ce2c-4415-884a-8b64733632ed" containerName="extract-content" Jan 05 20:19:05 crc kubenswrapper[4754]: I0105 20:19:05.609445 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf45fc46-ce2c-4415-884a-8b64733632ed" containerName="registry-server" Jan 05 20:19:05 crc kubenswrapper[4754]: I0105 20:19:05.610283 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-pd7zx" Jan 05 20:19:05 crc kubenswrapper[4754]: I0105 20:19:05.617556 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http" Jan 05 20:19:05 crc kubenswrapper[4754]: I0105 20:19:05.617656 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-sdc84" Jan 05 20:19:05 crc kubenswrapper[4754]: I0105 20:19:05.617782 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle" Jan 05 20:19:05 crc kubenswrapper[4754]: I0105 20:19:05.618026 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc" Jan 05 20:19:05 crc kubenswrapper[4754]: I0105 20:19:05.618209 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config" Jan 05 20:19:05 crc kubenswrapper[4754]: I0105 20:19:05.682528 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5f678c8dd6-pd7zx"] Jan 05 20:19:05 crc kubenswrapper[4754]: I0105 20:19:05.703719 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/4ebdfefc-77a3-4dca-a664-5468209724ec-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5f678c8dd6-pd7zx\" (UID: 
\"4ebdfefc-77a3-4dca-a664-5468209724ec\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-pd7zx" Jan 05 20:19:05 crc kubenswrapper[4754]: I0105 20:19:05.703811 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/4ebdfefc-77a3-4dca-a664-5468209724ec-logging-loki-distributor-http\") pod \"logging-loki-distributor-5f678c8dd6-pd7zx\" (UID: \"4ebdfefc-77a3-4dca-a664-5468209724ec\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-pd7zx" Jan 05 20:19:05 crc kubenswrapper[4754]: I0105 20:19:05.703841 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ebdfefc-77a3-4dca-a664-5468209724ec-config\") pod \"logging-loki-distributor-5f678c8dd6-pd7zx\" (UID: \"4ebdfefc-77a3-4dca-a664-5468209724ec\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-pd7zx" Jan 05 20:19:05 crc kubenswrapper[4754]: I0105 20:19:05.703879 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlndm\" (UniqueName: \"kubernetes.io/projected/4ebdfefc-77a3-4dca-a664-5468209724ec-kube-api-access-wlndm\") pod \"logging-loki-distributor-5f678c8dd6-pd7zx\" (UID: \"4ebdfefc-77a3-4dca-a664-5468209724ec\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-pd7zx" Jan 05 20:19:05 crc kubenswrapper[4754]: I0105 20:19:05.703961 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4ebdfefc-77a3-4dca-a664-5468209724ec-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5f678c8dd6-pd7zx\" (UID: \"4ebdfefc-77a3-4dca-a664-5468209724ec\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-pd7zx" Jan 05 20:19:05 crc kubenswrapper[4754]: I0105 20:19:05.805216 4754 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/4ebdfefc-77a3-4dca-a664-5468209724ec-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5f678c8dd6-pd7zx\" (UID: \"4ebdfefc-77a3-4dca-a664-5468209724ec\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-pd7zx" Jan 05 20:19:05 crc kubenswrapper[4754]: I0105 20:19:05.805280 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/4ebdfefc-77a3-4dca-a664-5468209724ec-logging-loki-distributor-http\") pod \"logging-loki-distributor-5f678c8dd6-pd7zx\" (UID: \"4ebdfefc-77a3-4dca-a664-5468209724ec\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-pd7zx" Jan 05 20:19:05 crc kubenswrapper[4754]: I0105 20:19:05.805348 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ebdfefc-77a3-4dca-a664-5468209724ec-config\") pod \"logging-loki-distributor-5f678c8dd6-pd7zx\" (UID: \"4ebdfefc-77a3-4dca-a664-5468209724ec\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-pd7zx" Jan 05 20:19:05 crc kubenswrapper[4754]: I0105 20:19:05.805375 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlndm\" (UniqueName: \"kubernetes.io/projected/4ebdfefc-77a3-4dca-a664-5468209724ec-kube-api-access-wlndm\") pod \"logging-loki-distributor-5f678c8dd6-pd7zx\" (UID: \"4ebdfefc-77a3-4dca-a664-5468209724ec\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-pd7zx" Jan 05 20:19:05 crc kubenswrapper[4754]: I0105 20:19:05.805425 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4ebdfefc-77a3-4dca-a664-5468209724ec-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5f678c8dd6-pd7zx\" 
(UID: \"4ebdfefc-77a3-4dca-a664-5468209724ec\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-pd7zx"
Jan 05 20:19:05 crc kubenswrapper[4754]: I0105 20:19:05.806359 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4ebdfefc-77a3-4dca-a664-5468209724ec-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5f678c8dd6-pd7zx\" (UID: \"4ebdfefc-77a3-4dca-a664-5468209724ec\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-pd7zx"
Jan 05 20:19:05 crc kubenswrapper[4754]: I0105 20:19:05.806577 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ebdfefc-77a3-4dca-a664-5468209724ec-config\") pod \"logging-loki-distributor-5f678c8dd6-pd7zx\" (UID: \"4ebdfefc-77a3-4dca-a664-5468209724ec\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-pd7zx"
Jan 05 20:19:05 crc kubenswrapper[4754]: I0105 20:19:05.813769 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/4ebdfefc-77a3-4dca-a664-5468209724ec-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5f678c8dd6-pd7zx\" (UID: \"4ebdfefc-77a3-4dca-a664-5468209724ec\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-pd7zx"
Jan 05 20:19:05 crc kubenswrapper[4754]: I0105 20:19:05.814231 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/4ebdfefc-77a3-4dca-a664-5468209724ec-logging-loki-distributor-http\") pod \"logging-loki-distributor-5f678c8dd6-pd7zx\" (UID: \"4ebdfefc-77a3-4dca-a664-5468209724ec\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-pd7zx"
Jan 05 20:19:05 crc kubenswrapper[4754]: I0105 20:19:05.831665 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-querier-76788598db-qfwrj"]
Jan 05 20:19:05 crc kubenswrapper[4754]: I0105 20:19:05.832594 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76788598db-qfwrj"
Jan 05 20:19:05 crc kubenswrapper[4754]: I0105 20:19:05.833696 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlndm\" (UniqueName: \"kubernetes.io/projected/4ebdfefc-77a3-4dca-a664-5468209724ec-kube-api-access-wlndm\") pod \"logging-loki-distributor-5f678c8dd6-pd7zx\" (UID: \"4ebdfefc-77a3-4dca-a664-5468209724ec\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-pd7zx"
Jan 05 20:19:05 crc kubenswrapper[4754]: I0105 20:19:05.834190 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http"
Jan 05 20:19:05 crc kubenswrapper[4754]: I0105 20:19:05.834377 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3"
Jan 05 20:19:05 crc kubenswrapper[4754]: I0105 20:19:05.834694 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc"
Jan 05 20:19:05 crc kubenswrapper[4754]: I0105 20:19:05.847261 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76788598db-qfwrj"]
Jan 05 20:19:05 crc kubenswrapper[4754]: I0105 20:19:05.933429 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-69d9546745-swdgc"]
Jan 05 20:19:05 crc kubenswrapper[4754]: I0105 20:19:05.934288 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-69d9546745-swdgc"
Jan 05 20:19:05 crc kubenswrapper[4754]: I0105 20:19:05.940106 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http"
Jan 05 20:19:05 crc kubenswrapper[4754]: I0105 20:19:05.941005 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc"
Jan 05 20:19:05 crc kubenswrapper[4754]: I0105 20:19:05.959079 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-69d9546745-swdgc"]
Jan 05 20:19:05 crc kubenswrapper[4754]: I0105 20:19:05.964266 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-pd7zx"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.011090 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/d8b747f5-71f0-48b5-aae8-375ef3d8ef00-logging-loki-querier-http\") pod \"logging-loki-querier-76788598db-qfwrj\" (UID: \"d8b747f5-71f0-48b5-aae8-375ef3d8ef00\") " pod="openshift-logging/logging-loki-querier-76788598db-qfwrj"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.011144 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/d8b747f5-71f0-48b5-aae8-375ef3d8ef00-logging-loki-querier-grpc\") pod \"logging-loki-querier-76788598db-qfwrj\" (UID: \"d8b747f5-71f0-48b5-aae8-375ef3d8ef00\") " pod="openshift-logging/logging-loki-querier-76788598db-qfwrj"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.011179 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8b747f5-71f0-48b5-aae8-375ef3d8ef00-logging-loki-ca-bundle\") pod \"logging-loki-querier-76788598db-qfwrj\" (UID: \"d8b747f5-71f0-48b5-aae8-375ef3d8ef00\") " pod="openshift-logging/logging-loki-querier-76788598db-qfwrj"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.011206 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/d8b747f5-71f0-48b5-aae8-375ef3d8ef00-logging-loki-s3\") pod \"logging-loki-querier-76788598db-qfwrj\" (UID: \"d8b747f5-71f0-48b5-aae8-375ef3d8ef00\") " pod="openshift-logging/logging-loki-querier-76788598db-qfwrj"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.011242 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfxwk\" (UniqueName: \"kubernetes.io/projected/d8b747f5-71f0-48b5-aae8-375ef3d8ef00-kube-api-access-tfxwk\") pod \"logging-loki-querier-76788598db-qfwrj\" (UID: \"d8b747f5-71f0-48b5-aae8-375ef3d8ef00\") " pod="openshift-logging/logging-loki-querier-76788598db-qfwrj"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.011310 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8b747f5-71f0-48b5-aae8-375ef3d8ef00-config\") pod \"logging-loki-querier-76788598db-qfwrj\" (UID: \"d8b747f5-71f0-48b5-aae8-375ef3d8ef00\") " pod="openshift-logging/logging-loki-querier-76788598db-qfwrj"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.052675 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-586cd7f6-k6trg"]
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.053864 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-586cd7f6-k6trg"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.058820 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.059049 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.059221 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.059400 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.059528 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.085698 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-586cd7f6-k6trg"]
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.100877 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-586cd7f6-c4rps"]
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.108814 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-586cd7f6-c4rps"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.113427 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-586cd7f6-c4rps"]
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.113685 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-db2xv"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.114230 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dbe68fed-2285-4e97-9c3d-d9fb903dc682-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-69d9546745-swdgc\" (UID: \"dbe68fed-2285-4e97-9c3d-d9fb903dc682\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-swdgc"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.114285 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/dbe68fed-2285-4e97-9c3d-d9fb903dc682-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-69d9546745-swdgc\" (UID: \"dbe68fed-2285-4e97-9c3d-d9fb903dc682\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-swdgc"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.114327 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfxwk\" (UniqueName: \"kubernetes.io/projected/d8b747f5-71f0-48b5-aae8-375ef3d8ef00-kube-api-access-tfxwk\") pod \"logging-loki-querier-76788598db-qfwrj\" (UID: \"d8b747f5-71f0-48b5-aae8-375ef3d8ef00\") " pod="openshift-logging/logging-loki-querier-76788598db-qfwrj"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.114772 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/dbe68fed-2285-4e97-9c3d-d9fb903dc682-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-69d9546745-swdgc\" (UID: \"dbe68fed-2285-4e97-9c3d-d9fb903dc682\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-swdgc"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.114808 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8b747f5-71f0-48b5-aae8-375ef3d8ef00-config\") pod \"logging-loki-querier-76788598db-qfwrj\" (UID: \"d8b747f5-71f0-48b5-aae8-375ef3d8ef00\") " pod="openshift-logging/logging-loki-querier-76788598db-qfwrj"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.114840 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbe68fed-2285-4e97-9c3d-d9fb903dc682-config\") pod \"logging-loki-query-frontend-69d9546745-swdgc\" (UID: \"dbe68fed-2285-4e97-9c3d-d9fb903dc682\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-swdgc"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.114872 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/d8b747f5-71f0-48b5-aae8-375ef3d8ef00-logging-loki-querier-http\") pod \"logging-loki-querier-76788598db-qfwrj\" (UID: \"d8b747f5-71f0-48b5-aae8-375ef3d8ef00\") " pod="openshift-logging/logging-loki-querier-76788598db-qfwrj"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.114897 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/d8b747f5-71f0-48b5-aae8-375ef3d8ef00-logging-loki-querier-grpc\") pod \"logging-loki-querier-76788598db-qfwrj\" (UID: \"d8b747f5-71f0-48b5-aae8-375ef3d8ef00\") " pod="openshift-logging/logging-loki-querier-76788598db-qfwrj"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.114916 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8b747f5-71f0-48b5-aae8-375ef3d8ef00-logging-loki-ca-bundle\") pod \"logging-loki-querier-76788598db-qfwrj\" (UID: \"d8b747f5-71f0-48b5-aae8-375ef3d8ef00\") " pod="openshift-logging/logging-loki-querier-76788598db-qfwrj"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.114935 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd4sn\" (UniqueName: \"kubernetes.io/projected/dbe68fed-2285-4e97-9c3d-d9fb903dc682-kube-api-access-kd4sn\") pod \"logging-loki-query-frontend-69d9546745-swdgc\" (UID: \"dbe68fed-2285-4e97-9c3d-d9fb903dc682\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-swdgc"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.114954 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/d8b747f5-71f0-48b5-aae8-375ef3d8ef00-logging-loki-s3\") pod \"logging-loki-querier-76788598db-qfwrj\" (UID: \"d8b747f5-71f0-48b5-aae8-375ef3d8ef00\") " pod="openshift-logging/logging-loki-querier-76788598db-qfwrj"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.117018 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8b747f5-71f0-48b5-aae8-375ef3d8ef00-logging-loki-ca-bundle\") pod \"logging-loki-querier-76788598db-qfwrj\" (UID: \"d8b747f5-71f0-48b5-aae8-375ef3d8ef00\") " pod="openshift-logging/logging-loki-querier-76788598db-qfwrj"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.120632 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8b747f5-71f0-48b5-aae8-375ef3d8ef00-config\") pod \"logging-loki-querier-76788598db-qfwrj\" (UID: \"d8b747f5-71f0-48b5-aae8-375ef3d8ef00\") " pod="openshift-logging/logging-loki-querier-76788598db-qfwrj"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.132230 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/d8b747f5-71f0-48b5-aae8-375ef3d8ef00-logging-loki-s3\") pod \"logging-loki-querier-76788598db-qfwrj\" (UID: \"d8b747f5-71f0-48b5-aae8-375ef3d8ef00\") " pod="openshift-logging/logging-loki-querier-76788598db-qfwrj"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.137205 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/d8b747f5-71f0-48b5-aae8-375ef3d8ef00-logging-loki-querier-http\") pod \"logging-loki-querier-76788598db-qfwrj\" (UID: \"d8b747f5-71f0-48b5-aae8-375ef3d8ef00\") " pod="openshift-logging/logging-loki-querier-76788598db-qfwrj"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.143727 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfxwk\" (UniqueName: \"kubernetes.io/projected/d8b747f5-71f0-48b5-aae8-375ef3d8ef00-kube-api-access-tfxwk\") pod \"logging-loki-querier-76788598db-qfwrj\" (UID: \"d8b747f5-71f0-48b5-aae8-375ef3d8ef00\") " pod="openshift-logging/logging-loki-querier-76788598db-qfwrj"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.144653 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/d8b747f5-71f0-48b5-aae8-375ef3d8ef00-logging-loki-querier-grpc\") pod \"logging-loki-querier-76788598db-qfwrj\" (UID: \"d8b747f5-71f0-48b5-aae8-375ef3d8ef00\") " pod="openshift-logging/logging-loki-querier-76788598db-qfwrj"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.169993 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76788598db-qfwrj"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.216328 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/dbe68fed-2285-4e97-9c3d-d9fb903dc682-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-69d9546745-swdgc\" (UID: \"dbe68fed-2285-4e97-9c3d-d9fb903dc682\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-swdgc"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.216401 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/cef8ee76-7c6e-420e-8c38-a7ad816cd513-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-586cd7f6-k6trg\" (UID: \"cef8ee76-7c6e-420e-8c38-a7ad816cd513\") " pod="openshift-logging/logging-loki-gateway-586cd7f6-k6trg"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.216436 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/cef8ee76-7c6e-420e-8c38-a7ad816cd513-lokistack-gateway\") pod \"logging-loki-gateway-586cd7f6-k6trg\" (UID: \"cef8ee76-7c6e-420e-8c38-a7ad816cd513\") " pod="openshift-logging/logging-loki-gateway-586cd7f6-k6trg"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.216463 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbe68fed-2285-4e97-9c3d-d9fb903dc682-config\") pod \"logging-loki-query-frontend-69d9546745-swdgc\" (UID: \"dbe68fed-2285-4e97-9c3d-d9fb903dc682\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-swdgc"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.216485 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/85a07def-c26c-49aa-ae32-c7772e9ebecc-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-586cd7f6-c4rps\" (UID: \"85a07def-c26c-49aa-ae32-c7772e9ebecc\") " pod="openshift-logging/logging-loki-gateway-586cd7f6-c4rps"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.217626 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbe68fed-2285-4e97-9c3d-d9fb903dc682-config\") pod \"logging-loki-query-frontend-69d9546745-swdgc\" (UID: \"dbe68fed-2285-4e97-9c3d-d9fb903dc682\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-swdgc"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.217692 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cef8ee76-7c6e-420e-8c38-a7ad816cd513-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-586cd7f6-k6trg\" (UID: \"cef8ee76-7c6e-420e-8c38-a7ad816cd513\") " pod="openshift-logging/logging-loki-gateway-586cd7f6-k6trg"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.217770 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/85a07def-c26c-49aa-ae32-c7772e9ebecc-tenants\") pod \"logging-loki-gateway-586cd7f6-c4rps\" (UID: \"85a07def-c26c-49aa-ae32-c7772e9ebecc\") " pod="openshift-logging/logging-loki-gateway-586cd7f6-c4rps"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.217804 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85a07def-c26c-49aa-ae32-c7772e9ebecc-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-586cd7f6-c4rps\" (UID: \"85a07def-c26c-49aa-ae32-c7772e9ebecc\") " pod="openshift-logging/logging-loki-gateway-586cd7f6-c4rps"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.217843 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/85a07def-c26c-49aa-ae32-c7772e9ebecc-lokistack-gateway\") pod \"logging-loki-gateway-586cd7f6-c4rps\" (UID: \"85a07def-c26c-49aa-ae32-c7772e9ebecc\") " pod="openshift-logging/logging-loki-gateway-586cd7f6-c4rps"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.217970 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85a07def-c26c-49aa-ae32-c7772e9ebecc-logging-loki-ca-bundle\") pod \"logging-loki-gateway-586cd7f6-c4rps\" (UID: \"85a07def-c26c-49aa-ae32-c7772e9ebecc\") " pod="openshift-logging/logging-loki-gateway-586cd7f6-c4rps"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.218065 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd4sn\" (UniqueName: \"kubernetes.io/projected/dbe68fed-2285-4e97-9c3d-d9fb903dc682-kube-api-access-kd4sn\") pod \"logging-loki-query-frontend-69d9546745-swdgc\" (UID: \"dbe68fed-2285-4e97-9c3d-d9fb903dc682\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-swdgc"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.218117 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/cef8ee76-7c6e-420e-8c38-a7ad816cd513-rbac\") pod \"logging-loki-gateway-586cd7f6-k6trg\" (UID: \"cef8ee76-7c6e-420e-8c38-a7ad816cd513\") " pod="openshift-logging/logging-loki-gateway-586cd7f6-k6trg"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.218143 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/cef8ee76-7c6e-420e-8c38-a7ad816cd513-tenants\") pod \"logging-loki-gateway-586cd7f6-k6trg\" (UID: \"cef8ee76-7c6e-420e-8c38-a7ad816cd513\") " pod="openshift-logging/logging-loki-gateway-586cd7f6-k6trg"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.218170 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/85a07def-c26c-49aa-ae32-c7772e9ebecc-tls-secret\") pod \"logging-loki-gateway-586cd7f6-c4rps\" (UID: \"85a07def-c26c-49aa-ae32-c7772e9ebecc\") " pod="openshift-logging/logging-loki-gateway-586cd7f6-c4rps"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.218195 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cef8ee76-7c6e-420e-8c38-a7ad816cd513-logging-loki-ca-bundle\") pod \"logging-loki-gateway-586cd7f6-k6trg\" (UID: \"cef8ee76-7c6e-420e-8c38-a7ad816cd513\") " pod="openshift-logging/logging-loki-gateway-586cd7f6-k6trg"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.218230 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dbe68fed-2285-4e97-9c3d-d9fb903dc682-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-69d9546745-swdgc\" (UID: \"dbe68fed-2285-4e97-9c3d-d9fb903dc682\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-swdgc"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.218258 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb87x\" (UniqueName: \"kubernetes.io/projected/cef8ee76-7c6e-420e-8c38-a7ad816cd513-kube-api-access-hb87x\") pod \"logging-loki-gateway-586cd7f6-k6trg\" (UID: \"cef8ee76-7c6e-420e-8c38-a7ad816cd513\") " pod="openshift-logging/logging-loki-gateway-586cd7f6-k6trg"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.218355 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/85a07def-c26c-49aa-ae32-c7772e9ebecc-rbac\") pod \"logging-loki-gateway-586cd7f6-c4rps\" (UID: \"85a07def-c26c-49aa-ae32-c7772e9ebecc\") " pod="openshift-logging/logging-loki-gateway-586cd7f6-c4rps"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.218384 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/dbe68fed-2285-4e97-9c3d-d9fb903dc682-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-69d9546745-swdgc\" (UID: \"dbe68fed-2285-4e97-9c3d-d9fb903dc682\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-swdgc"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.218412 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/cef8ee76-7c6e-420e-8c38-a7ad816cd513-tls-secret\") pod \"logging-loki-gateway-586cd7f6-k6trg\" (UID: \"cef8ee76-7c6e-420e-8c38-a7ad816cd513\") " pod="openshift-logging/logging-loki-gateway-586cd7f6-k6trg"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.218450 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpmxn\" (UniqueName: \"kubernetes.io/projected/85a07def-c26c-49aa-ae32-c7772e9ebecc-kube-api-access-rpmxn\") pod \"logging-loki-gateway-586cd7f6-c4rps\" (UID: \"85a07def-c26c-49aa-ae32-c7772e9ebecc\") " pod="openshift-logging/logging-loki-gateway-586cd7f6-c4rps"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.221039 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dbe68fed-2285-4e97-9c3d-d9fb903dc682-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-69d9546745-swdgc\" (UID: \"dbe68fed-2285-4e97-9c3d-d9fb903dc682\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-swdgc"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.226350 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/dbe68fed-2285-4e97-9c3d-d9fb903dc682-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-69d9546745-swdgc\" (UID: \"dbe68fed-2285-4e97-9c3d-d9fb903dc682\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-swdgc"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.226501 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/dbe68fed-2285-4e97-9c3d-d9fb903dc682-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-69d9546745-swdgc\" (UID: \"dbe68fed-2285-4e97-9c3d-d9fb903dc682\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-swdgc"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.245069 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd4sn\" (UniqueName: \"kubernetes.io/projected/dbe68fed-2285-4e97-9c3d-d9fb903dc682-kube-api-access-kd4sn\") pod \"logging-loki-query-frontend-69d9546745-swdgc\" (UID: \"dbe68fed-2285-4e97-9c3d-d9fb903dc682\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-swdgc"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.253839 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-69d9546745-swdgc"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.323139 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/cef8ee76-7c6e-420e-8c38-a7ad816cd513-tls-secret\") pod \"logging-loki-gateway-586cd7f6-k6trg\" (UID: \"cef8ee76-7c6e-420e-8c38-a7ad816cd513\") " pod="openshift-logging/logging-loki-gateway-586cd7f6-k6trg"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.323218 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpmxn\" (UniqueName: \"kubernetes.io/projected/85a07def-c26c-49aa-ae32-c7772e9ebecc-kube-api-access-rpmxn\") pod \"logging-loki-gateway-586cd7f6-c4rps\" (UID: \"85a07def-c26c-49aa-ae32-c7772e9ebecc\") " pod="openshift-logging/logging-loki-gateway-586cd7f6-c4rps"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.323268 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/cef8ee76-7c6e-420e-8c38-a7ad816cd513-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-586cd7f6-k6trg\" (UID: \"cef8ee76-7c6e-420e-8c38-a7ad816cd513\") " pod="openshift-logging/logging-loki-gateway-586cd7f6-k6trg"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.323326 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/cef8ee76-7c6e-420e-8c38-a7ad816cd513-lokistack-gateway\") pod \"logging-loki-gateway-586cd7f6-k6trg\" (UID: \"cef8ee76-7c6e-420e-8c38-a7ad816cd513\") " pod="openshift-logging/logging-loki-gateway-586cd7f6-k6trg"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.323352 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/85a07def-c26c-49aa-ae32-c7772e9ebecc-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-586cd7f6-c4rps\" (UID: \"85a07def-c26c-49aa-ae32-c7772e9ebecc\") " pod="openshift-logging/logging-loki-gateway-586cd7f6-c4rps"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.323371 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cef8ee76-7c6e-420e-8c38-a7ad816cd513-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-586cd7f6-k6trg\" (UID: \"cef8ee76-7c6e-420e-8c38-a7ad816cd513\") " pod="openshift-logging/logging-loki-gateway-586cd7f6-k6trg"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.323391 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/85a07def-c26c-49aa-ae32-c7772e9ebecc-tenants\") pod \"logging-loki-gateway-586cd7f6-c4rps\" (UID: \"85a07def-c26c-49aa-ae32-c7772e9ebecc\") " pod="openshift-logging/logging-loki-gateway-586cd7f6-c4rps"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.323413 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85a07def-c26c-49aa-ae32-c7772e9ebecc-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-586cd7f6-c4rps\" (UID: \"85a07def-c26c-49aa-ae32-c7772e9ebecc\") " pod="openshift-logging/logging-loki-gateway-586cd7f6-c4rps"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.323438 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/85a07def-c26c-49aa-ae32-c7772e9ebecc-lokistack-gateway\") pod \"logging-loki-gateway-586cd7f6-c4rps\" (UID: \"85a07def-c26c-49aa-ae32-c7772e9ebecc\") " pod="openshift-logging/logging-loki-gateway-586cd7f6-c4rps"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.323458 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85a07def-c26c-49aa-ae32-c7772e9ebecc-logging-loki-ca-bundle\") pod \"logging-loki-gateway-586cd7f6-c4rps\" (UID: \"85a07def-c26c-49aa-ae32-c7772e9ebecc\") " pod="openshift-logging/logging-loki-gateway-586cd7f6-c4rps"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.323483 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/cef8ee76-7c6e-420e-8c38-a7ad816cd513-rbac\") pod \"logging-loki-gateway-586cd7f6-k6trg\" (UID: \"cef8ee76-7c6e-420e-8c38-a7ad816cd513\") " pod="openshift-logging/logging-loki-gateway-586cd7f6-k6trg"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.323504 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/cef8ee76-7c6e-420e-8c38-a7ad816cd513-tenants\") pod \"logging-loki-gateway-586cd7f6-k6trg\" (UID: \"cef8ee76-7c6e-420e-8c38-a7ad816cd513\") " pod="openshift-logging/logging-loki-gateway-586cd7f6-k6trg"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.323525 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/85a07def-c26c-49aa-ae32-c7772e9ebecc-tls-secret\") pod \"logging-loki-gateway-586cd7f6-c4rps\" (UID: \"85a07def-c26c-49aa-ae32-c7772e9ebecc\") " pod="openshift-logging/logging-loki-gateway-586cd7f6-c4rps"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.323548 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cef8ee76-7c6e-420e-8c38-a7ad816cd513-logging-loki-ca-bundle\") pod \"logging-loki-gateway-586cd7f6-k6trg\" (UID: \"cef8ee76-7c6e-420e-8c38-a7ad816cd513\") " pod="openshift-logging/logging-loki-gateway-586cd7f6-k6trg"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.323574 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb87x\" (UniqueName: \"kubernetes.io/projected/cef8ee76-7c6e-420e-8c38-a7ad816cd513-kube-api-access-hb87x\") pod \"logging-loki-gateway-586cd7f6-k6trg\" (UID: \"cef8ee76-7c6e-420e-8c38-a7ad816cd513\") " pod="openshift-logging/logging-loki-gateway-586cd7f6-k6trg"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.323598 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/85a07def-c26c-49aa-ae32-c7772e9ebecc-rbac\") pod \"logging-loki-gateway-586cd7f6-c4rps\" (UID: \"85a07def-c26c-49aa-ae32-c7772e9ebecc\") " pod="openshift-logging/logging-loki-gateway-586cd7f6-c4rps"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.324577 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/85a07def-c26c-49aa-ae32-c7772e9ebecc-rbac\") pod \"logging-loki-gateway-586cd7f6-c4rps\" (UID: \"85a07def-c26c-49aa-ae32-c7772e9ebecc\") " pod="openshift-logging/logging-loki-gateway-586cd7f6-c4rps"
Jan 05 20:19:06 crc kubenswrapper[4754]: E0105 20:19:06.324673 4754 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found
Jan 05 20:19:06 crc kubenswrapper[4754]: E0105 20:19:06.324730 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cef8ee76-7c6e-420e-8c38-a7ad816cd513-tls-secret podName:cef8ee76-7c6e-420e-8c38-a7ad816cd513 nodeName:}" failed. No retries permitted until 2026-01-05 20:19:06.824711188 +0000 UTC m=+833.533895062 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/cef8ee76-7c6e-420e-8c38-a7ad816cd513-tls-secret") pod "logging-loki-gateway-586cd7f6-k6trg" (UID: "cef8ee76-7c6e-420e-8c38-a7ad816cd513") : secret "logging-loki-gateway-http" not found
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.330348 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85a07def-c26c-49aa-ae32-c7772e9ebecc-logging-loki-ca-bundle\") pod \"logging-loki-gateway-586cd7f6-c4rps\" (UID: \"85a07def-c26c-49aa-ae32-c7772e9ebecc\") " pod="openshift-logging/logging-loki-gateway-586cd7f6-c4rps"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.331229 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cef8ee76-7c6e-420e-8c38-a7ad816cd513-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-586cd7f6-k6trg\" (UID: \"cef8ee76-7c6e-420e-8c38-a7ad816cd513\") " pod="openshift-logging/logging-loki-gateway-586cd7f6-k6trg"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.331894 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/cef8ee76-7c6e-420e-8c38-a7ad816cd513-lokistack-gateway\") pod \"logging-loki-gateway-586cd7f6-k6trg\" (UID: \"cef8ee76-7c6e-420e-8c38-a7ad816cd513\") " pod="openshift-logging/logging-loki-gateway-586cd7f6-k6trg"
Jan 05 20:19:06 crc kubenswrapper[4754]: E0105 20:19:06.332346 4754 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found
Jan 05 20:19:06 crc kubenswrapper[4754]: E0105 20:19:06.332409 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85a07def-c26c-49aa-ae32-c7772e9ebecc-tls-secret podName:85a07def-c26c-49aa-ae32-c7772e9ebecc nodeName:}" failed. No retries permitted until 2026-01-05 20:19:06.832390438 +0000 UTC m=+833.541574312 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/85a07def-c26c-49aa-ae32-c7772e9ebecc-tls-secret") pod "logging-loki-gateway-586cd7f6-c4rps" (UID: "85a07def-c26c-49aa-ae32-c7772e9ebecc") : secret "logging-loki-gateway-http" not found
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.333009 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/cef8ee76-7c6e-420e-8c38-a7ad816cd513-rbac\") pod \"logging-loki-gateway-586cd7f6-k6trg\" (UID: \"cef8ee76-7c6e-420e-8c38-a7ad816cd513\") " pod="openshift-logging/logging-loki-gateway-586cd7f6-k6trg"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.333479 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cef8ee76-7c6e-420e-8c38-a7ad816cd513-logging-loki-ca-bundle\") pod \"logging-loki-gateway-586cd7f6-k6trg\" (UID: \"cef8ee76-7c6e-420e-8c38-a7ad816cd513\") " pod="openshift-logging/logging-loki-gateway-586cd7f6-k6trg"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.334033 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/85a07def-c26c-49aa-ae32-c7772e9ebecc-lokistack-gateway\") pod \"logging-loki-gateway-586cd7f6-c4rps\" (UID: \"85a07def-c26c-49aa-ae32-c7772e9ebecc\") " pod="openshift-logging/logging-loki-gateway-586cd7f6-c4rps"
Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.346135 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/cef8ee76-7c6e-420e-8c38-a7ad816cd513-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-586cd7f6-k6trg\" (UID: \"cef8ee76-7c6e-420e-8c38-a7ad816cd513\") "
pod="openshift-logging/logging-loki-gateway-586cd7f6-k6trg" Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.346134 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/85a07def-c26c-49aa-ae32-c7772e9ebecc-tenants\") pod \"logging-loki-gateway-586cd7f6-c4rps\" (UID: \"85a07def-c26c-49aa-ae32-c7772e9ebecc\") " pod="openshift-logging/logging-loki-gateway-586cd7f6-c4rps" Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.346785 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/cef8ee76-7c6e-420e-8c38-a7ad816cd513-tenants\") pod \"logging-loki-gateway-586cd7f6-k6trg\" (UID: \"cef8ee76-7c6e-420e-8c38-a7ad816cd513\") " pod="openshift-logging/logging-loki-gateway-586cd7f6-k6trg" Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.347055 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/85a07def-c26c-49aa-ae32-c7772e9ebecc-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-586cd7f6-c4rps\" (UID: \"85a07def-c26c-49aa-ae32-c7772e9ebecc\") " pod="openshift-logging/logging-loki-gateway-586cd7f6-c4rps" Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.347785 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85a07def-c26c-49aa-ae32-c7772e9ebecc-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-586cd7f6-c4rps\" (UID: \"85a07def-c26c-49aa-ae32-c7772e9ebecc\") " pod="openshift-logging/logging-loki-gateway-586cd7f6-c4rps" Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.371103 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb87x\" (UniqueName: \"kubernetes.io/projected/cef8ee76-7c6e-420e-8c38-a7ad816cd513-kube-api-access-hb87x\") pod 
\"logging-loki-gateway-586cd7f6-k6trg\" (UID: \"cef8ee76-7c6e-420e-8c38-a7ad816cd513\") " pod="openshift-logging/logging-loki-gateway-586cd7f6-k6trg" Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.387211 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpmxn\" (UniqueName: \"kubernetes.io/projected/85a07def-c26c-49aa-ae32-c7772e9ebecc-kube-api-access-rpmxn\") pod \"logging-loki-gateway-586cd7f6-c4rps\" (UID: \"85a07def-c26c-49aa-ae32-c7772e9ebecc\") " pod="openshift-logging/logging-loki-gateway-586cd7f6-c4rps" Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.552480 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5f678c8dd6-pd7zx"] Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.668059 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-pd7zx" event={"ID":"4ebdfefc-77a3-4dca-a664-5468209724ec","Type":"ContainerStarted","Data":"377a84b68b4cc8fde26204e0e1edc3a544afbc45bfb7e6097ad730966c22fbd4"} Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.753053 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76788598db-qfwrj"] Jan 05 20:19:06 crc kubenswrapper[4754]: W0105 20:19:06.758505 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8b747f5_71f0_48b5_aae8_375ef3d8ef00.slice/crio-5609ff415bb76c461089eb3543914122fc5a9eb943bfb0e98c8ad0588cb38f9c WatchSource:0}: Error finding container 5609ff415bb76c461089eb3543914122fc5a9eb943bfb0e98c8ad0588cb38f9c: Status 404 returned error can't find the container with id 5609ff415bb76c461089eb3543914122fc5a9eb943bfb0e98c8ad0588cb38f9c Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.808027 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Jan 05 20:19:06 crc 
kubenswrapper[4754]: I0105 20:19:06.809003 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.810754 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.810757 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.819413 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.832963 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/85a07def-c26c-49aa-ae32-c7772e9ebecc-tls-secret\") pod \"logging-loki-gateway-586cd7f6-c4rps\" (UID: \"85a07def-c26c-49aa-ae32-c7772e9ebecc\") " pod="openshift-logging/logging-loki-gateway-586cd7f6-c4rps" Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.833049 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/cef8ee76-7c6e-420e-8c38-a7ad816cd513-tls-secret\") pod \"logging-loki-gateway-586cd7f6-k6trg\" (UID: \"cef8ee76-7c6e-420e-8c38-a7ad816cd513\") " pod="openshift-logging/logging-loki-gateway-586cd7f6-k6trg" Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.838221 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/85a07def-c26c-49aa-ae32-c7772e9ebecc-tls-secret\") pod \"logging-loki-gateway-586cd7f6-c4rps\" (UID: \"85a07def-c26c-49aa-ae32-c7772e9ebecc\") " pod="openshift-logging/logging-loki-gateway-586cd7f6-c4rps" Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.839036 4754 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/cef8ee76-7c6e-420e-8c38-a7ad816cd513-tls-secret\") pod \"logging-loki-gateway-586cd7f6-k6trg\" (UID: \"cef8ee76-7c6e-420e-8c38-a7ad816cd513\") " pod="openshift-logging/logging-loki-gateway-586cd7f6-k6trg" Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.843483 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-69d9546745-swdgc"] Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.893090 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.894126 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.896088 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.896217 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.907834 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.934197 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f8103c6-3d68-4568-ae3b-89f606aa116a-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"7f8103c6-3d68-4568-ae3b-89f606aa116a\") " pod="openshift-logging/logging-loki-ingester-0" Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.934260 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: 
\"kubernetes.io/secret/7f8103c6-3d68-4568-ae3b-89f606aa116a-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"7f8103c6-3d68-4568-ae3b-89f606aa116a\") " pod="openshift-logging/logging-loki-ingester-0" Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.934281 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/7f8103c6-3d68-4568-ae3b-89f606aa116a-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"7f8103c6-3d68-4568-ae3b-89f606aa116a\") " pod="openshift-logging/logging-loki-ingester-0" Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.934642 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbzfm\" (UniqueName: \"kubernetes.io/projected/7f8103c6-3d68-4568-ae3b-89f606aa116a-kube-api-access-rbzfm\") pod \"logging-loki-ingester-0\" (UID: \"7f8103c6-3d68-4568-ae3b-89f606aa116a\") " pod="openshift-logging/logging-loki-ingester-0" Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.934724 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f8103c6-3d68-4568-ae3b-89f606aa116a-config\") pod \"logging-loki-ingester-0\" (UID: \"7f8103c6-3d68-4568-ae3b-89f606aa116a\") " pod="openshift-logging/logging-loki-ingester-0" Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.934805 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/7f8103c6-3d68-4568-ae3b-89f606aa116a-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"7f8103c6-3d68-4568-ae3b-89f606aa116a\") " pod="openshift-logging/logging-loki-ingester-0" Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.934925 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-faf8917e-385e-440b-8ac0-2ddd7c6c7a5a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-faf8917e-385e-440b-8ac0-2ddd7c6c7a5a\") pod \"logging-loki-ingester-0\" (UID: \"7f8103c6-3d68-4568-ae3b-89f606aa116a\") " pod="openshift-logging/logging-loki-ingester-0" Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.935147 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2ba284f4-5096-44f3-9bb9-5d557d658daa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ba284f4-5096-44f3-9bb9-5d557d658daa\") pod \"logging-loki-ingester-0\" (UID: \"7f8103c6-3d68-4568-ae3b-89f606aa116a\") " pod="openshift-logging/logging-loki-ingester-0" Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.977224 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-586cd7f6-k6trg" Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.994903 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.995895 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.997962 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Jan 05 20:19:06 crc kubenswrapper[4754]: I0105 20:19:06.999381 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.011252 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.036573 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2ba284f4-5096-44f3-9bb9-5d557d658daa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ba284f4-5096-44f3-9bb9-5d557d658daa\") pod \"logging-loki-ingester-0\" (UID: \"7f8103c6-3d68-4568-ae3b-89f606aa116a\") " pod="openshift-logging/logging-loki-ingester-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.036755 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/8970d80f-9277-46ca-ba45-c09e3362c3e2-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"8970d80f-9277-46ca-ba45-c09e3362c3e2\") " pod="openshift-logging/logging-loki-compactor-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.036938 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcb54\" (UniqueName: \"kubernetes.io/projected/8970d80f-9277-46ca-ba45-c09e3362c3e2-kube-api-access-lcb54\") pod \"logging-loki-compactor-0\" (UID: \"8970d80f-9277-46ca-ba45-c09e3362c3e2\") " pod="openshift-logging/logging-loki-compactor-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.037008 4754 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f8103c6-3d68-4568-ae3b-89f606aa116a-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"7f8103c6-3d68-4568-ae3b-89f606aa116a\") " pod="openshift-logging/logging-loki-ingester-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.037531 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/7f8103c6-3d68-4568-ae3b-89f606aa116a-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"7f8103c6-3d68-4568-ae3b-89f606aa116a\") " pod="openshift-logging/logging-loki-ingester-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.037656 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/7f8103c6-3d68-4568-ae3b-89f606aa116a-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"7f8103c6-3d68-4568-ae3b-89f606aa116a\") " pod="openshift-logging/logging-loki-ingester-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.037740 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/8970d80f-9277-46ca-ba45-c09e3362c3e2-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"8970d80f-9277-46ca-ba45-c09e3362c3e2\") " pod="openshift-logging/logging-loki-compactor-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.037826 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8970d80f-9277-46ca-ba45-c09e3362c3e2-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"8970d80f-9277-46ca-ba45-c09e3362c3e2\") " pod="openshift-logging/logging-loki-compactor-0" Jan 05 20:19:07 crc 
kubenswrapper[4754]: I0105 20:19:07.037898 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbzfm\" (UniqueName: \"kubernetes.io/projected/7f8103c6-3d68-4568-ae3b-89f606aa116a-kube-api-access-rbzfm\") pod \"logging-loki-ingester-0\" (UID: \"7f8103c6-3d68-4568-ae3b-89f606aa116a\") " pod="openshift-logging/logging-loki-ingester-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.037983 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f8103c6-3d68-4568-ae3b-89f606aa116a-config\") pod \"logging-loki-ingester-0\" (UID: \"7f8103c6-3d68-4568-ae3b-89f606aa116a\") " pod="openshift-logging/logging-loki-ingester-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.038081 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/7f8103c6-3d68-4568-ae3b-89f606aa116a-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"7f8103c6-3d68-4568-ae3b-89f606aa116a\") " pod="openshift-logging/logging-loki-ingester-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.038166 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-faf8917e-385e-440b-8ac0-2ddd7c6c7a5a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-faf8917e-385e-440b-8ac0-2ddd7c6c7a5a\") pod \"logging-loki-ingester-0\" (UID: \"7f8103c6-3d68-4568-ae3b-89f606aa116a\") " pod="openshift-logging/logging-loki-ingester-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.038246 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4c40e018-a3f4-4153-bab5-ba3fd74aefe6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c40e018-a3f4-4153-bab5-ba3fd74aefe6\") pod \"logging-loki-compactor-0\" (UID: \"8970d80f-9277-46ca-ba45-c09e3362c3e2\") " 
pod="openshift-logging/logging-loki-compactor-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.038364 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/8970d80f-9277-46ca-ba45-c09e3362c3e2-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"8970d80f-9277-46ca-ba45-c09e3362c3e2\") " pod="openshift-logging/logging-loki-compactor-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.038527 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8970d80f-9277-46ca-ba45-c09e3362c3e2-config\") pod \"logging-loki-compactor-0\" (UID: \"8970d80f-9277-46ca-ba45-c09e3362c3e2\") " pod="openshift-logging/logging-loki-compactor-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.038730 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f8103c6-3d68-4568-ae3b-89f606aa116a-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"7f8103c6-3d68-4568-ae3b-89f606aa116a\") " pod="openshift-logging/logging-loki-ingester-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.040950 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f8103c6-3d68-4568-ae3b-89f606aa116a-config\") pod \"logging-loki-ingester-0\" (UID: \"7f8103c6-3d68-4568-ae3b-89f606aa116a\") " pod="openshift-logging/logging-loki-ingester-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.045234 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/7f8103c6-3d68-4568-ae3b-89f606aa116a-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"7f8103c6-3d68-4568-ae3b-89f606aa116a\") " 
pod="openshift-logging/logging-loki-ingester-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.047991 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/7f8103c6-3d68-4568-ae3b-89f606aa116a-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"7f8103c6-3d68-4568-ae3b-89f606aa116a\") " pod="openshift-logging/logging-loki-ingester-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.048205 4754 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.048256 4754 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-faf8917e-385e-440b-8ac0-2ddd7c6c7a5a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-faf8917e-385e-440b-8ac0-2ddd7c6c7a5a\") pod \"logging-loki-ingester-0\" (UID: \"7f8103c6-3d68-4568-ae3b-89f606aa116a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b434431202c9862dcb717ed72d09318503f4f18b4775100ed205e903abbd8615/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.048337 4754 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.048389 4754 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2ba284f4-5096-44f3-9bb9-5d557d658daa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ba284f4-5096-44f3-9bb9-5d557d658daa\") pod \"logging-loki-ingester-0\" (UID: \"7f8103c6-3d68-4568-ae3b-89f606aa116a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2039aed9cc1c8bb6656f8898e41ddada4c7ca0f57cd19aab9d85cf93b1c9277c/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.049387 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/7f8103c6-3d68-4568-ae3b-89f606aa116a-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"7f8103c6-3d68-4568-ae3b-89f606aa116a\") " pod="openshift-logging/logging-loki-ingester-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.069224 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbzfm\" (UniqueName: \"kubernetes.io/projected/7f8103c6-3d68-4568-ae3b-89f606aa116a-kube-api-access-rbzfm\") pod \"logging-loki-ingester-0\" (UID: \"7f8103c6-3d68-4568-ae3b-89f606aa116a\") " pod="openshift-logging/logging-loki-ingester-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.069777 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-586cd7f6-c4rps" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.088544 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2ba284f4-5096-44f3-9bb9-5d557d658daa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ba284f4-5096-44f3-9bb9-5d557d658daa\") pod \"logging-loki-ingester-0\" (UID: \"7f8103c6-3d68-4568-ae3b-89f606aa116a\") " pod="openshift-logging/logging-loki-ingester-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.103073 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-faf8917e-385e-440b-8ac0-2ddd7c6c7a5a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-faf8917e-385e-440b-8ac0-2ddd7c6c7a5a\") pod \"logging-loki-ingester-0\" (UID: \"7f8103c6-3d68-4568-ae3b-89f606aa116a\") " pod="openshift-logging/logging-loki-ingester-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.141459 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85b1b29c-72d4-41ff-8185-f1cd738be7db-config\") pod \"logging-loki-index-gateway-0\" (UID: \"85b1b29c-72d4-41ff-8185-f1cd738be7db\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.141512 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/8970d80f-9277-46ca-ba45-c09e3362c3e2-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"8970d80f-9277-46ca-ba45-c09e3362c3e2\") " pod="openshift-logging/logging-loki-compactor-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.141536 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89q99\" (UniqueName: 
\"kubernetes.io/projected/85b1b29c-72d4-41ff-8185-f1cd738be7db-kube-api-access-89q99\") pod \"logging-loki-index-gateway-0\" (UID: \"85b1b29c-72d4-41ff-8185-f1cd738be7db\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.141562 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8970d80f-9277-46ca-ba45-c09e3362c3e2-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"8970d80f-9277-46ca-ba45-c09e3362c3e2\") " pod="openshift-logging/logging-loki-compactor-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.141604 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/85b1b29c-72d4-41ff-8185-f1cd738be7db-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"85b1b29c-72d4-41ff-8185-f1cd738be7db\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.141654 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/8970d80f-9277-46ca-ba45-c09e3362c3e2-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"8970d80f-9277-46ca-ba45-c09e3362c3e2\") " pod="openshift-logging/logging-loki-compactor-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.141679 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8970d80f-9277-46ca-ba45-c09e3362c3e2-config\") pod \"logging-loki-compactor-0\" (UID: \"8970d80f-9277-46ca-ba45-c09e3362c3e2\") " pod="openshift-logging/logging-loki-compactor-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.141712 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/85b1b29c-72d4-41ff-8185-f1cd738be7db-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"85b1b29c-72d4-41ff-8185-f1cd738be7db\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.141732 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/8970d80f-9277-46ca-ba45-c09e3362c3e2-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"8970d80f-9277-46ca-ba45-c09e3362c3e2\") " pod="openshift-logging/logging-loki-compactor-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.141753 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9bc81a5f-be2e-4674-9a82-3ce7473f0c71\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9bc81a5f-be2e-4674-9a82-3ce7473f0c71\") pod \"logging-loki-index-gateway-0\" (UID: \"85b1b29c-72d4-41ff-8185-f1cd738be7db\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.141774 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcb54\" (UniqueName: \"kubernetes.io/projected/8970d80f-9277-46ca-ba45-c09e3362c3e2-kube-api-access-lcb54\") pod \"logging-loki-compactor-0\" (UID: \"8970d80f-9277-46ca-ba45-c09e3362c3e2\") " pod="openshift-logging/logging-loki-compactor-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.141810 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/85b1b29c-72d4-41ff-8185-f1cd738be7db-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"85b1b29c-72d4-41ff-8185-f1cd738be7db\") " 
pod="openshift-logging/logging-loki-index-gateway-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.141839 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4c40e018-a3f4-4153-bab5-ba3fd74aefe6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c40e018-a3f4-4153-bab5-ba3fd74aefe6\") pod \"logging-loki-compactor-0\" (UID: \"8970d80f-9277-46ca-ba45-c09e3362c3e2\") " pod="openshift-logging/logging-loki-compactor-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.141864 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85b1b29c-72d4-41ff-8185-f1cd738be7db-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"85b1b29c-72d4-41ff-8185-f1cd738be7db\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.142537 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8970d80f-9277-46ca-ba45-c09e3362c3e2-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"8970d80f-9277-46ca-ba45-c09e3362c3e2\") " pod="openshift-logging/logging-loki-compactor-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.143392 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8970d80f-9277-46ca-ba45-c09e3362c3e2-config\") pod \"logging-loki-compactor-0\" (UID: \"8970d80f-9277-46ca-ba45-c09e3362c3e2\") " pod="openshift-logging/logging-loki-compactor-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.144762 4754 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.144795 4754 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4c40e018-a3f4-4153-bab5-ba3fd74aefe6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c40e018-a3f4-4153-bab5-ba3fd74aefe6\") pod \"logging-loki-compactor-0\" (UID: \"8970d80f-9277-46ca-ba45-c09e3362c3e2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ac019d64ae73cedac6047c4c42f63593bafba9ca07124e750e1282ab8b8a27fb/globalmount\"" pod="openshift-logging/logging-loki-compactor-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.147537 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/8970d80f-9277-46ca-ba45-c09e3362c3e2-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"8970d80f-9277-46ca-ba45-c09e3362c3e2\") " pod="openshift-logging/logging-loki-compactor-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.149257 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/8970d80f-9277-46ca-ba45-c09e3362c3e2-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"8970d80f-9277-46ca-ba45-c09e3362c3e2\") " pod="openshift-logging/logging-loki-compactor-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.149744 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/8970d80f-9277-46ca-ba45-c09e3362c3e2-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"8970d80f-9277-46ca-ba45-c09e3362c3e2\") " pod="openshift-logging/logging-loki-compactor-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.162591 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcb54\" (UniqueName: 
\"kubernetes.io/projected/8970d80f-9277-46ca-ba45-c09e3362c3e2-kube-api-access-lcb54\") pod \"logging-loki-compactor-0\" (UID: \"8970d80f-9277-46ca-ba45-c09e3362c3e2\") " pod="openshift-logging/logging-loki-compactor-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.175999 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4c40e018-a3f4-4153-bab5-ba3fd74aefe6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c40e018-a3f4-4153-bab5-ba3fd74aefe6\") pod \"logging-loki-compactor-0\" (UID: \"8970d80f-9277-46ca-ba45-c09e3362c3e2\") " pod="openshift-logging/logging-loki-compactor-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.193147 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.207775 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.243247 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/85b1b29c-72d4-41ff-8185-f1cd738be7db-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"85b1b29c-72d4-41ff-8185-f1cd738be7db\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.243377 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85b1b29c-72d4-41ff-8185-f1cd738be7db-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"85b1b29c-72d4-41ff-8185-f1cd738be7db\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.243424 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/85b1b29c-72d4-41ff-8185-f1cd738be7db-config\") pod \"logging-loki-index-gateway-0\" (UID: \"85b1b29c-72d4-41ff-8185-f1cd738be7db\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.243445 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89q99\" (UniqueName: \"kubernetes.io/projected/85b1b29c-72d4-41ff-8185-f1cd738be7db-kube-api-access-89q99\") pod \"logging-loki-index-gateway-0\" (UID: \"85b1b29c-72d4-41ff-8185-f1cd738be7db\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.243476 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/85b1b29c-72d4-41ff-8185-f1cd738be7db-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"85b1b29c-72d4-41ff-8185-f1cd738be7db\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.243514 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/85b1b29c-72d4-41ff-8185-f1cd738be7db-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"85b1b29c-72d4-41ff-8185-f1cd738be7db\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.243540 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9bc81a5f-be2e-4674-9a82-3ce7473f0c71\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9bc81a5f-be2e-4674-9a82-3ce7473f0c71\") pod \"logging-loki-index-gateway-0\" (UID: \"85b1b29c-72d4-41ff-8185-f1cd738be7db\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.245385 4754 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85b1b29c-72d4-41ff-8185-f1cd738be7db-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"85b1b29c-72d4-41ff-8185-f1cd738be7db\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.247970 4754 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.247986 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85b1b29c-72d4-41ff-8185-f1cd738be7db-config\") pod \"logging-loki-index-gateway-0\" (UID: \"85b1b29c-72d4-41ff-8185-f1cd738be7db\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.248004 4754 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9bc81a5f-be2e-4674-9a82-3ce7473f0c71\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9bc81a5f-be2e-4674-9a82-3ce7473f0c71\") pod \"logging-loki-index-gateway-0\" (UID: \"85b1b29c-72d4-41ff-8185-f1cd738be7db\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c64384520fc3d6f6ea52e131b7adc645e0b5de8eebb9f1a08f23cb6d83eb593e/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.248500 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/85b1b29c-72d4-41ff-8185-f1cd738be7db-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"85b1b29c-72d4-41ff-8185-f1cd738be7db\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.250496 
4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/85b1b29c-72d4-41ff-8185-f1cd738be7db-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"85b1b29c-72d4-41ff-8185-f1cd738be7db\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.263169 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/85b1b29c-72d4-41ff-8185-f1cd738be7db-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"85b1b29c-72d4-41ff-8185-f1cd738be7db\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.267457 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89q99\" (UniqueName: \"kubernetes.io/projected/85b1b29c-72d4-41ff-8185-f1cd738be7db-kube-api-access-89q99\") pod \"logging-loki-index-gateway-0\" (UID: \"85b1b29c-72d4-41ff-8185-f1cd738be7db\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.273134 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9bc81a5f-be2e-4674-9a82-3ce7473f0c71\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9bc81a5f-be2e-4674-9a82-3ce7473f0c71\") pod \"logging-loki-index-gateway-0\" (UID: \"85b1b29c-72d4-41ff-8185-f1cd738be7db\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.315359 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-586cd7f6-c4rps"] Jan 05 20:19:07 crc kubenswrapper[4754]: W0105 20:19:07.319892 4754 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85a07def_c26c_49aa_ae32_c7772e9ebecc.slice/crio-a952406ee1efb6a74968e65d9bf61fcb74e6f5209198765abc872be9ea1a3b58 WatchSource:0}: Error finding container a952406ee1efb6a74968e65d9bf61fcb74e6f5209198765abc872be9ea1a3b58: Status 404 returned error can't find the container with id a952406ee1efb6a74968e65d9bf61fcb74e6f5209198765abc872be9ea1a3b58 Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.374895 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.430078 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-586cd7f6-k6trg"] Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.441251 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.681314 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qhnx6"] Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.681743 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qhnx6" podUID="dc7e879c-fcf1-4574-ae78-222ad322e725" containerName="registry-server" containerID="cri-o://29877748cfbbf9f26c60b795be6d470055bbeb566fce798673ae2a5aa97e1b3b" gracePeriod=2 Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.683626 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-586cd7f6-k6trg" event={"ID":"cef8ee76-7c6e-420e-8c38-a7ad816cd513","Type":"ContainerStarted","Data":"b1827e94124070384e51430f40a005ba5ef44d0ceed02fc54e02371b5c425dfe"} Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.684947 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-logging/logging-loki-query-frontend-69d9546745-swdgc" event={"ID":"dbe68fed-2285-4e97-9c3d-d9fb903dc682","Type":"ContainerStarted","Data":"71c0bda4857a7915b9f7c68038900f47ef6fc7dfbc1990ca9a9c6613daf920dc"} Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.690044 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"7f8103c6-3d68-4568-ae3b-89f606aa116a","Type":"ContainerStarted","Data":"748087453d11f75c7bcfa7077e07a21ca689b0ab359197728bcb0e3830c3e327"} Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.692445 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76788598db-qfwrj" event={"ID":"d8b747f5-71f0-48b5-aae8-375ef3d8ef00","Type":"ContainerStarted","Data":"5609ff415bb76c461089eb3543914122fc5a9eb943bfb0e98c8ad0588cb38f9c"} Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.694243 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-586cd7f6-c4rps" event={"ID":"85a07def-c26c-49aa-ae32-c7772e9ebecc","Type":"ContainerStarted","Data":"a952406ee1efb6a74968e65d9bf61fcb74e6f5209198765abc872be9ea1a3b58"} Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.739965 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Jan 05 20:19:07 crc kubenswrapper[4754]: W0105 20:19:07.741513 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8970d80f_9277_46ca_ba45_c09e3362c3e2.slice/crio-12354477650e76b00a092fa4d915b937ef4149ae8cec4f17e18ad335aef1363b WatchSource:0}: Error finding container 12354477650e76b00a092fa4d915b937ef4149ae8cec4f17e18ad335aef1363b: Status 404 returned error can't find the container with id 12354477650e76b00a092fa4d915b937ef4149ae8cec4f17e18ad335aef1363b Jan 05 20:19:07 crc kubenswrapper[4754]: I0105 20:19:07.849945 4754 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Jan 05 20:19:07 crc kubenswrapper[4754]: W0105 20:19:07.864260 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85b1b29c_72d4_41ff_8185_f1cd738be7db.slice/crio-4fe646c3232274c211ae9bf8e221a7d7417c4734f5f1ddb33342410e729f8d09 WatchSource:0}: Error finding container 4fe646c3232274c211ae9bf8e221a7d7417c4734f5f1ddb33342410e729f8d09: Status 404 returned error can't find the container with id 4fe646c3232274c211ae9bf8e221a7d7417c4734f5f1ddb33342410e729f8d09 Jan 05 20:19:08 crc kubenswrapper[4754]: I0105 20:19:08.700584 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"8970d80f-9277-46ca-ba45-c09e3362c3e2","Type":"ContainerStarted","Data":"12354477650e76b00a092fa4d915b937ef4149ae8cec4f17e18ad335aef1363b"} Jan 05 20:19:08 crc kubenswrapper[4754]: I0105 20:19:08.701990 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"85b1b29c-72d4-41ff-8185-f1cd738be7db","Type":"ContainerStarted","Data":"4fe646c3232274c211ae9bf8e221a7d7417c4734f5f1ddb33342410e729f8d09"} Jan 05 20:19:09 crc kubenswrapper[4754]: I0105 20:19:09.713853 4754 generic.go:334] "Generic (PLEG): container finished" podID="dc7e879c-fcf1-4574-ae78-222ad322e725" containerID="29877748cfbbf9f26c60b795be6d470055bbeb566fce798673ae2a5aa97e1b3b" exitCode=0 Jan 05 20:19:09 crc kubenswrapper[4754]: I0105 20:19:09.713910 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qhnx6" event={"ID":"dc7e879c-fcf1-4574-ae78-222ad322e725","Type":"ContainerDied","Data":"29877748cfbbf9f26c60b795be6d470055bbeb566fce798673ae2a5aa97e1b3b"} Jan 05 20:19:09 crc kubenswrapper[4754]: I0105 20:19:09.971712 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qhnx6" Jan 05 20:19:10 crc kubenswrapper[4754]: I0105 20:19:10.108923 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc7e879c-fcf1-4574-ae78-222ad322e725-utilities\") pod \"dc7e879c-fcf1-4574-ae78-222ad322e725\" (UID: \"dc7e879c-fcf1-4574-ae78-222ad322e725\") " Jan 05 20:19:10 crc kubenswrapper[4754]: I0105 20:19:10.109069 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bq9l6\" (UniqueName: \"kubernetes.io/projected/dc7e879c-fcf1-4574-ae78-222ad322e725-kube-api-access-bq9l6\") pod \"dc7e879c-fcf1-4574-ae78-222ad322e725\" (UID: \"dc7e879c-fcf1-4574-ae78-222ad322e725\") " Jan 05 20:19:10 crc kubenswrapper[4754]: I0105 20:19:10.109135 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc7e879c-fcf1-4574-ae78-222ad322e725-catalog-content\") pod \"dc7e879c-fcf1-4574-ae78-222ad322e725\" (UID: \"dc7e879c-fcf1-4574-ae78-222ad322e725\") " Jan 05 20:19:10 crc kubenswrapper[4754]: I0105 20:19:10.110450 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc7e879c-fcf1-4574-ae78-222ad322e725-utilities" (OuterVolumeSpecName: "utilities") pod "dc7e879c-fcf1-4574-ae78-222ad322e725" (UID: "dc7e879c-fcf1-4574-ae78-222ad322e725"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:19:10 crc kubenswrapper[4754]: I0105 20:19:10.115459 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc7e879c-fcf1-4574-ae78-222ad322e725-kube-api-access-bq9l6" (OuterVolumeSpecName: "kube-api-access-bq9l6") pod "dc7e879c-fcf1-4574-ae78-222ad322e725" (UID: "dc7e879c-fcf1-4574-ae78-222ad322e725"). InnerVolumeSpecName "kube-api-access-bq9l6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:19:10 crc kubenswrapper[4754]: I0105 20:19:10.165265 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc7e879c-fcf1-4574-ae78-222ad322e725-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dc7e879c-fcf1-4574-ae78-222ad322e725" (UID: "dc7e879c-fcf1-4574-ae78-222ad322e725"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:19:10 crc kubenswrapper[4754]: I0105 20:19:10.219009 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bq9l6\" (UniqueName: \"kubernetes.io/projected/dc7e879c-fcf1-4574-ae78-222ad322e725-kube-api-access-bq9l6\") on node \"crc\" DevicePath \"\"" Jan 05 20:19:10 crc kubenswrapper[4754]: I0105 20:19:10.219053 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc7e879c-fcf1-4574-ae78-222ad322e725-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 20:19:10 crc kubenswrapper[4754]: I0105 20:19:10.219063 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc7e879c-fcf1-4574-ae78-222ad322e725-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 20:19:10 crc kubenswrapper[4754]: I0105 20:19:10.726993 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qhnx6" event={"ID":"dc7e879c-fcf1-4574-ae78-222ad322e725","Type":"ContainerDied","Data":"f004c69451eea8070a4fb791f176dba2d60956a49d7b0fc551acd29e40ce48fe"} Jan 05 20:19:10 crc kubenswrapper[4754]: I0105 20:19:10.727071 4754 scope.go:117] "RemoveContainer" containerID="29877748cfbbf9f26c60b795be6d470055bbeb566fce798673ae2a5aa97e1b3b" Jan 05 20:19:10 crc kubenswrapper[4754]: I0105 20:19:10.727109 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qhnx6" Jan 05 20:19:10 crc kubenswrapper[4754]: I0105 20:19:10.761604 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qhnx6"] Jan 05 20:19:10 crc kubenswrapper[4754]: I0105 20:19:10.767893 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qhnx6"] Jan 05 20:19:11 crc kubenswrapper[4754]: I0105 20:19:11.601457 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc7e879c-fcf1-4574-ae78-222ad322e725" path="/var/lib/kubelet/pods/dc7e879c-fcf1-4574-ae78-222ad322e725/volumes" Jan 05 20:19:12 crc kubenswrapper[4754]: I0105 20:19:12.273527 4754 scope.go:117] "RemoveContainer" containerID="ba6b642ee4b4cf4dadad02348026fafcff3b204200787ea9bce2b252b271e526" Jan 05 20:19:12 crc kubenswrapper[4754]: I0105 20:19:12.392137 4754 scope.go:117] "RemoveContainer" containerID="02089294e4bd9e828ffd6949fb0b72cf0fa87ede7e178a935e0df3a17733c62e" Jan 05 20:19:12 crc kubenswrapper[4754]: I0105 20:19:12.758060 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-pd7zx" event={"ID":"4ebdfefc-77a3-4dca-a664-5468209724ec","Type":"ContainerStarted","Data":"08c0c7f450e828a4dc967e7f37fd22a168938ea4db36aad3f64e786fb5821e41"} Jan 05 20:19:12 crc kubenswrapper[4754]: I0105 20:19:12.758824 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-pd7zx" Jan 05 20:19:12 crc kubenswrapper[4754]: I0105 20:19:12.794724 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-pd7zx" podStartSLOduration=1.911710841 podStartE2EDuration="7.794706293s" podCreationTimestamp="2026-01-05 20:19:05 +0000 UTC" firstStartedPulling="2026-01-05 20:19:06.557870836 +0000 UTC m=+833.267054710" lastFinishedPulling="2026-01-05 
20:19:12.440866278 +0000 UTC m=+839.150050162" observedRunningTime="2026-01-05 20:19:12.789170398 +0000 UTC m=+839.498354272" watchObservedRunningTime="2026-01-05 20:19:12.794706293 +0000 UTC m=+839.503890167" Jan 05 20:19:13 crc kubenswrapper[4754]: I0105 20:19:13.806571 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"7f8103c6-3d68-4568-ae3b-89f606aa116a","Type":"ContainerStarted","Data":"c35e2772faa433d451ccf9b0dca69d45933bb6a2910af1982b91756d4016f572"} Jan 05 20:19:13 crc kubenswrapper[4754]: I0105 20:19:13.807643 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Jan 05 20:19:13 crc kubenswrapper[4754]: I0105 20:19:13.813886 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"8970d80f-9277-46ca-ba45-c09e3362c3e2","Type":"ContainerStarted","Data":"2fddf667d2f5c176e301460a54fcea8d79e9c9346c54e5ce13bfafbedb3c42f6"} Jan 05 20:19:13 crc kubenswrapper[4754]: I0105 20:19:13.815048 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0" Jan 05 20:19:13 crc kubenswrapper[4754]: I0105 20:19:13.818572 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-586cd7f6-c4rps" event={"ID":"85a07def-c26c-49aa-ae32-c7772e9ebecc","Type":"ContainerStarted","Data":"3ad875ede76e3f1c995bfee6222fe9b64d3251814ad0c2bc156d76ddb2bfb526"} Jan 05 20:19:13 crc kubenswrapper[4754]: I0105 20:19:13.820640 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-586cd7f6-k6trg" event={"ID":"cef8ee76-7c6e-420e-8c38-a7ad816cd513","Type":"ContainerStarted","Data":"59c412f83ce7554887c8ba1b6f2acd821b496967fcbb4ece446f3994bbfe0d52"} Jan 05 20:19:13 crc kubenswrapper[4754]: I0105 20:19:13.823939 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-logging/logging-loki-query-frontend-69d9546745-swdgc" event={"ID":"dbe68fed-2285-4e97-9c3d-d9fb903dc682","Type":"ContainerStarted","Data":"d7d2119b4a8f880650739a047555866e291b654bebedf6df19bf34a5e9cf1a70"} Jan 05 20:19:13 crc kubenswrapper[4754]: I0105 20:19:13.824117 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-69d9546745-swdgc" Jan 05 20:19:13 crc kubenswrapper[4754]: I0105 20:19:13.826570 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"85b1b29c-72d4-41ff-8185-f1cd738be7db","Type":"ContainerStarted","Data":"cd5bc77a51aeb8130d7060bdd65ff102808ba7353d50beabb49960017be4fd5e"} Jan 05 20:19:13 crc kubenswrapper[4754]: I0105 20:19:13.827570 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Jan 05 20:19:13 crc kubenswrapper[4754]: I0105 20:19:13.836376 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76788598db-qfwrj" event={"ID":"d8b747f5-71f0-48b5-aae8-375ef3d8ef00","Type":"ContainerStarted","Data":"5209a776985ac2edb0bee135603c5df09579d548a67887b86568931e49311854"} Jan 05 20:19:13 crc kubenswrapper[4754]: I0105 20:19:13.836461 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76788598db-qfwrj" Jan 05 20:19:13 crc kubenswrapper[4754]: I0105 20:19:13.838227 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=3.773814458 podStartE2EDuration="8.838201337s" podCreationTimestamp="2026-01-05 20:19:05 +0000 UTC" firstStartedPulling="2026-01-05 20:19:07.454247826 +0000 UTC m=+834.163431700" lastFinishedPulling="2026-01-05 20:19:12.518634665 +0000 UTC m=+839.227818579" observedRunningTime="2026-01-05 20:19:13.830499947 +0000 UTC m=+840.539683831" 
watchObservedRunningTime="2026-01-05 20:19:13.838201337 +0000 UTC m=+840.547385211" Jan 05 20:19:13 crc kubenswrapper[4754]: I0105 20:19:13.860996 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=4.235711279 podStartE2EDuration="8.860976641s" podCreationTimestamp="2026-01-05 20:19:05 +0000 UTC" firstStartedPulling="2026-01-05 20:19:07.866458552 +0000 UTC m=+834.575642426" lastFinishedPulling="2026-01-05 20:19:12.491723914 +0000 UTC m=+839.200907788" observedRunningTime="2026-01-05 20:19:13.860590951 +0000 UTC m=+840.569774815" watchObservedRunningTime="2026-01-05 20:19:13.860976641 +0000 UTC m=+840.570160515" Jan 05 20:19:13 crc kubenswrapper[4754]: I0105 20:19:13.886526 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-69d9546745-swdgc" podStartSLOduration=3.248722228 podStartE2EDuration="8.886507137s" podCreationTimestamp="2026-01-05 20:19:05 +0000 UTC" firstStartedPulling="2026-01-05 20:19:06.851177653 +0000 UTC m=+833.560361527" lastFinishedPulling="2026-01-05 20:19:12.488962562 +0000 UTC m=+839.198146436" observedRunningTime="2026-01-05 20:19:13.885642384 +0000 UTC m=+840.594826258" watchObservedRunningTime="2026-01-05 20:19:13.886507137 +0000 UTC m=+840.595691001" Jan 05 20:19:13 crc kubenswrapper[4754]: I0105 20:19:13.923838 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=4.187254256 podStartE2EDuration="8.923812219s" podCreationTimestamp="2026-01-05 20:19:05 +0000 UTC" firstStartedPulling="2026-01-05 20:19:07.746262219 +0000 UTC m=+834.455446093" lastFinishedPulling="2026-01-05 20:19:12.482820182 +0000 UTC m=+839.192004056" observedRunningTime="2026-01-05 20:19:13.920477662 +0000 UTC m=+840.629661536" watchObservedRunningTime="2026-01-05 20:19:13.923812219 +0000 UTC m=+840.632996093" Jan 05 
20:19:13 crc kubenswrapper[4754]: I0105 20:19:13.944922 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-76788598db-qfwrj" podStartSLOduration=3.187116251 podStartE2EDuration="8.944905989s" podCreationTimestamp="2026-01-05 20:19:05 +0000 UTC" firstStartedPulling="2026-01-05 20:19:06.760897889 +0000 UTC m=+833.470081763" lastFinishedPulling="2026-01-05 20:19:12.518687627 +0000 UTC m=+839.227871501" observedRunningTime="2026-01-05 20:19:13.944504649 +0000 UTC m=+840.653688523" watchObservedRunningTime="2026-01-05 20:19:13.944905989 +0000 UTC m=+840.654089863" Jan 05 20:19:15 crc kubenswrapper[4754]: I0105 20:19:15.853081 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-586cd7f6-c4rps" event={"ID":"85a07def-c26c-49aa-ae32-c7772e9ebecc","Type":"ContainerStarted","Data":"450fbd18fd75e0dd7141f4b05e577432a9551647aede631a393abd1703616d75"} Jan 05 20:19:15 crc kubenswrapper[4754]: I0105 20:19:15.856009 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-586cd7f6-c4rps" Jan 05 20:19:15 crc kubenswrapper[4754]: I0105 20:19:15.856121 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-586cd7f6-c4rps" Jan 05 20:19:15 crc kubenswrapper[4754]: I0105 20:19:15.859627 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-586cd7f6-k6trg" event={"ID":"cef8ee76-7c6e-420e-8c38-a7ad816cd513","Type":"ContainerStarted","Data":"3bddec60699a51caf2da732fbcdb703a4ab02f7be4be9e188dfaa0cf22e514cb"} Jan 05 20:19:15 crc kubenswrapper[4754]: I0105 20:19:15.859658 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-586cd7f6-k6trg" Jan 05 20:19:15 crc kubenswrapper[4754]: I0105 20:19:15.860586 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-logging/logging-loki-gateway-586cd7f6-k6trg" Jan 05 20:19:15 crc kubenswrapper[4754]: I0105 20:19:15.871841 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-586cd7f6-k6trg" Jan 05 20:19:15 crc kubenswrapper[4754]: I0105 20:19:15.875053 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-586cd7f6-c4rps" Jan 05 20:19:15 crc kubenswrapper[4754]: I0105 20:19:15.877753 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-586cd7f6-c4rps" Jan 05 20:19:15 crc kubenswrapper[4754]: I0105 20:19:15.879254 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-586cd7f6-k6trg" Jan 05 20:19:15 crc kubenswrapper[4754]: I0105 20:19:15.884467 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-586cd7f6-c4rps" podStartSLOduration=2.254439047 podStartE2EDuration="9.884455614s" podCreationTimestamp="2026-01-05 20:19:06 +0000 UTC" firstStartedPulling="2026-01-05 20:19:07.324037851 +0000 UTC m=+834.033221715" lastFinishedPulling="2026-01-05 20:19:14.954054408 +0000 UTC m=+841.663238282" observedRunningTime="2026-01-05 20:19:15.880233204 +0000 UTC m=+842.589417118" watchObservedRunningTime="2026-01-05 20:19:15.884455614 +0000 UTC m=+842.593639488" Jan 05 20:19:15 crc kubenswrapper[4754]: I0105 20:19:15.917253 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-586cd7f6-k6trg" podStartSLOduration=2.401234333 podStartE2EDuration="9.917235638s" podCreationTimestamp="2026-01-05 20:19:06 +0000 UTC" firstStartedPulling="2026-01-05 20:19:07.444985354 +0000 UTC m=+834.154169228" lastFinishedPulling="2026-01-05 20:19:14.960986659 +0000 UTC m=+841.670170533" observedRunningTime="2026-01-05 
20:19:15.908410468 +0000 UTC m=+842.617594342" watchObservedRunningTime="2026-01-05 20:19:15.917235638 +0000 UTC m=+842.626419502" Jan 05 20:19:25 crc kubenswrapper[4754]: I0105 20:19:25.494200 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6crbn"] Jan 05 20:19:25 crc kubenswrapper[4754]: E0105 20:19:25.495520 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc7e879c-fcf1-4574-ae78-222ad322e725" containerName="registry-server" Jan 05 20:19:25 crc kubenswrapper[4754]: I0105 20:19:25.495559 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc7e879c-fcf1-4574-ae78-222ad322e725" containerName="registry-server" Jan 05 20:19:25 crc kubenswrapper[4754]: E0105 20:19:25.495600 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc7e879c-fcf1-4574-ae78-222ad322e725" containerName="extract-utilities" Jan 05 20:19:25 crc kubenswrapper[4754]: I0105 20:19:25.495620 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc7e879c-fcf1-4574-ae78-222ad322e725" containerName="extract-utilities" Jan 05 20:19:25 crc kubenswrapper[4754]: E0105 20:19:25.495647 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc7e879c-fcf1-4574-ae78-222ad322e725" containerName="extract-content" Jan 05 20:19:25 crc kubenswrapper[4754]: I0105 20:19:25.495663 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc7e879c-fcf1-4574-ae78-222ad322e725" containerName="extract-content" Jan 05 20:19:25 crc kubenswrapper[4754]: I0105 20:19:25.495929 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc7e879c-fcf1-4574-ae78-222ad322e725" containerName="registry-server" Jan 05 20:19:25 crc kubenswrapper[4754]: I0105 20:19:25.498134 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6crbn" Jan 05 20:19:25 crc kubenswrapper[4754]: I0105 20:19:25.529624 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6crbn"] Jan 05 20:19:25 crc kubenswrapper[4754]: I0105 20:19:25.674601 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw4xr\" (UniqueName: \"kubernetes.io/projected/ba45be27-3062-4012-b30a-2722f7e182bd-kube-api-access-vw4xr\") pod \"community-operators-6crbn\" (UID: \"ba45be27-3062-4012-b30a-2722f7e182bd\") " pod="openshift-marketplace/community-operators-6crbn" Jan 05 20:19:25 crc kubenswrapper[4754]: I0105 20:19:25.674726 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba45be27-3062-4012-b30a-2722f7e182bd-utilities\") pod \"community-operators-6crbn\" (UID: \"ba45be27-3062-4012-b30a-2722f7e182bd\") " pod="openshift-marketplace/community-operators-6crbn" Jan 05 20:19:25 crc kubenswrapper[4754]: I0105 20:19:25.674776 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba45be27-3062-4012-b30a-2722f7e182bd-catalog-content\") pod \"community-operators-6crbn\" (UID: \"ba45be27-3062-4012-b30a-2722f7e182bd\") " pod="openshift-marketplace/community-operators-6crbn" Jan 05 20:19:25 crc kubenswrapper[4754]: I0105 20:19:25.775949 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba45be27-3062-4012-b30a-2722f7e182bd-utilities\") pod \"community-operators-6crbn\" (UID: \"ba45be27-3062-4012-b30a-2722f7e182bd\") " pod="openshift-marketplace/community-operators-6crbn" Jan 05 20:19:25 crc kubenswrapper[4754]: I0105 20:19:25.776026 4754 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba45be27-3062-4012-b30a-2722f7e182bd-catalog-content\") pod \"community-operators-6crbn\" (UID: \"ba45be27-3062-4012-b30a-2722f7e182bd\") " pod="openshift-marketplace/community-operators-6crbn" Jan 05 20:19:25 crc kubenswrapper[4754]: I0105 20:19:25.776070 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vw4xr\" (UniqueName: \"kubernetes.io/projected/ba45be27-3062-4012-b30a-2722f7e182bd-kube-api-access-vw4xr\") pod \"community-operators-6crbn\" (UID: \"ba45be27-3062-4012-b30a-2722f7e182bd\") " pod="openshift-marketplace/community-operators-6crbn" Jan 05 20:19:25 crc kubenswrapper[4754]: I0105 20:19:25.776656 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba45be27-3062-4012-b30a-2722f7e182bd-utilities\") pod \"community-operators-6crbn\" (UID: \"ba45be27-3062-4012-b30a-2722f7e182bd\") " pod="openshift-marketplace/community-operators-6crbn" Jan 05 20:19:25 crc kubenswrapper[4754]: I0105 20:19:25.776832 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba45be27-3062-4012-b30a-2722f7e182bd-catalog-content\") pod \"community-operators-6crbn\" (UID: \"ba45be27-3062-4012-b30a-2722f7e182bd\") " pod="openshift-marketplace/community-operators-6crbn" Jan 05 20:19:25 crc kubenswrapper[4754]: I0105 20:19:25.806478 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vw4xr\" (UniqueName: \"kubernetes.io/projected/ba45be27-3062-4012-b30a-2722f7e182bd-kube-api-access-vw4xr\") pod \"community-operators-6crbn\" (UID: \"ba45be27-3062-4012-b30a-2722f7e182bd\") " pod="openshift-marketplace/community-operators-6crbn" Jan 05 20:19:25 crc kubenswrapper[4754]: I0105 20:19:25.874811 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6crbn" Jan 05 20:19:26 crc kubenswrapper[4754]: I0105 20:19:26.401132 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6crbn"] Jan 05 20:19:26 crc kubenswrapper[4754]: W0105 20:19:26.403440 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba45be27_3062_4012_b30a_2722f7e182bd.slice/crio-e1676d4e98673fcf286f564ac1f61204ef6d9e697e830ca6c59c563bf67877ca WatchSource:0}: Error finding container e1676d4e98673fcf286f564ac1f61204ef6d9e697e830ca6c59c563bf67877ca: Status 404 returned error can't find the container with id e1676d4e98673fcf286f564ac1f61204ef6d9e697e830ca6c59c563bf67877ca Jan 05 20:19:26 crc kubenswrapper[4754]: I0105 20:19:26.960159 4754 generic.go:334] "Generic (PLEG): container finished" podID="ba45be27-3062-4012-b30a-2722f7e182bd" containerID="44f4eabb988512ca52c7506160fe1930cd12db8e04d7250290324b82410168ee" exitCode=0 Jan 05 20:19:26 crc kubenswrapper[4754]: I0105 20:19:26.960272 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6crbn" event={"ID":"ba45be27-3062-4012-b30a-2722f7e182bd","Type":"ContainerDied","Data":"44f4eabb988512ca52c7506160fe1930cd12db8e04d7250290324b82410168ee"} Jan 05 20:19:26 crc kubenswrapper[4754]: I0105 20:19:26.960773 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6crbn" event={"ID":"ba45be27-3062-4012-b30a-2722f7e182bd","Type":"ContainerStarted","Data":"e1676d4e98673fcf286f564ac1f61204ef6d9e697e830ca6c59c563bf67877ca"} Jan 05 20:19:27 crc kubenswrapper[4754]: I0105 20:19:27.207148 4754 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance 
owns no tokens Jan 05 20:19:27 crc kubenswrapper[4754]: I0105 20:19:27.207224 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="7f8103c6-3d68-4568-ae3b-89f606aa116a" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 05 20:19:27 crc kubenswrapper[4754]: I0105 20:19:27.216217 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" Jan 05 20:19:27 crc kubenswrapper[4754]: I0105 20:19:27.384926 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Jan 05 20:19:28 crc kubenswrapper[4754]: I0105 20:19:28.980696 4754 generic.go:334] "Generic (PLEG): container finished" podID="ba45be27-3062-4012-b30a-2722f7e182bd" containerID="afc55893db9bb665ba83746a75c253bdea854561e0c3dcf66cb0a389123ec9ab" exitCode=0 Jan 05 20:19:28 crc kubenswrapper[4754]: I0105 20:19:28.980829 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6crbn" event={"ID":"ba45be27-3062-4012-b30a-2722f7e182bd","Type":"ContainerDied","Data":"afc55893db9bb665ba83746a75c253bdea854561e0c3dcf66cb0a389123ec9ab"} Jan 05 20:19:29 crc kubenswrapper[4754]: I0105 20:19:29.991839 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6crbn" event={"ID":"ba45be27-3062-4012-b30a-2722f7e182bd","Type":"ContainerStarted","Data":"0ea469e469eee9f52781eb7e415eec22e2bc5545cf6f89f42cb41d2c7544fdc3"} Jan 05 20:19:30 crc kubenswrapper[4754]: I0105 20:19:30.024073 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6crbn" podStartSLOduration=2.507016196 podStartE2EDuration="5.024037748s" podCreationTimestamp="2026-01-05 20:19:25 +0000 UTC" firstStartedPulling="2026-01-05 20:19:26.9628103 +0000 UTC m=+853.671994214" 
lastFinishedPulling="2026-01-05 20:19:29.479831842 +0000 UTC m=+856.189015766" observedRunningTime="2026-01-05 20:19:30.018186835 +0000 UTC m=+856.727370709" watchObservedRunningTime="2026-01-05 20:19:30.024037748 +0000 UTC m=+856.733221662" Jan 05 20:19:35 crc kubenswrapper[4754]: I0105 20:19:35.875736 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6crbn" Jan 05 20:19:35 crc kubenswrapper[4754]: I0105 20:19:35.876928 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6crbn" Jan 05 20:19:35 crc kubenswrapper[4754]: I0105 20:19:35.973560 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6crbn" Jan 05 20:19:35 crc kubenswrapper[4754]: I0105 20:19:35.978600 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-pd7zx" Jan 05 20:19:36 crc kubenswrapper[4754]: I0105 20:19:36.095681 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6crbn" Jan 05 20:19:36 crc kubenswrapper[4754]: I0105 20:19:36.182514 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76788598db-qfwrj" Jan 05 20:19:36 crc kubenswrapper[4754]: I0105 20:19:36.270201 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-69d9546745-swdgc" Jan 05 20:19:37 crc kubenswrapper[4754]: I0105 20:19:37.201850 4754 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Jan 05 20:19:37 crc kubenswrapper[4754]: I0105 20:19:37.202613 
4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="7f8103c6-3d68-4568-ae3b-89f606aa116a" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 05 20:19:38 crc kubenswrapper[4754]: I0105 20:19:38.275666 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6crbn"] Jan 05 20:19:38 crc kubenswrapper[4754]: I0105 20:19:38.276077 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6crbn" podUID="ba45be27-3062-4012-b30a-2722f7e182bd" containerName="registry-server" containerID="cri-o://0ea469e469eee9f52781eb7e415eec22e2bc5545cf6f89f42cb41d2c7544fdc3" gracePeriod=2 Jan 05 20:19:39 crc kubenswrapper[4754]: I0105 20:19:39.864455 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6crbn" Jan 05 20:19:39 crc kubenswrapper[4754]: I0105 20:19:39.964000 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vw4xr\" (UniqueName: \"kubernetes.io/projected/ba45be27-3062-4012-b30a-2722f7e182bd-kube-api-access-vw4xr\") pod \"ba45be27-3062-4012-b30a-2722f7e182bd\" (UID: \"ba45be27-3062-4012-b30a-2722f7e182bd\") " Jan 05 20:19:39 crc kubenswrapper[4754]: I0105 20:19:39.964071 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba45be27-3062-4012-b30a-2722f7e182bd-utilities\") pod \"ba45be27-3062-4012-b30a-2722f7e182bd\" (UID: \"ba45be27-3062-4012-b30a-2722f7e182bd\") " Jan 05 20:19:39 crc kubenswrapper[4754]: I0105 20:19:39.964246 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba45be27-3062-4012-b30a-2722f7e182bd-catalog-content\") pod 
\"ba45be27-3062-4012-b30a-2722f7e182bd\" (UID: \"ba45be27-3062-4012-b30a-2722f7e182bd\") " Jan 05 20:19:39 crc kubenswrapper[4754]: I0105 20:19:39.966113 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba45be27-3062-4012-b30a-2722f7e182bd-utilities" (OuterVolumeSpecName: "utilities") pod "ba45be27-3062-4012-b30a-2722f7e182bd" (UID: "ba45be27-3062-4012-b30a-2722f7e182bd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:19:39 crc kubenswrapper[4754]: I0105 20:19:39.972634 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba45be27-3062-4012-b30a-2722f7e182bd-kube-api-access-vw4xr" (OuterVolumeSpecName: "kube-api-access-vw4xr") pod "ba45be27-3062-4012-b30a-2722f7e182bd" (UID: "ba45be27-3062-4012-b30a-2722f7e182bd"). InnerVolumeSpecName "kube-api-access-vw4xr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:19:40 crc kubenswrapper[4754]: I0105 20:19:40.035079 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba45be27-3062-4012-b30a-2722f7e182bd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ba45be27-3062-4012-b30a-2722f7e182bd" (UID: "ba45be27-3062-4012-b30a-2722f7e182bd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:19:40 crc kubenswrapper[4754]: I0105 20:19:40.066747 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vw4xr\" (UniqueName: \"kubernetes.io/projected/ba45be27-3062-4012-b30a-2722f7e182bd-kube-api-access-vw4xr\") on node \"crc\" DevicePath \"\"" Jan 05 20:19:40 crc kubenswrapper[4754]: I0105 20:19:40.066790 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba45be27-3062-4012-b30a-2722f7e182bd-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 20:19:40 crc kubenswrapper[4754]: I0105 20:19:40.066802 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba45be27-3062-4012-b30a-2722f7e182bd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 20:19:40 crc kubenswrapper[4754]: I0105 20:19:40.076018 4754 generic.go:334] "Generic (PLEG): container finished" podID="ba45be27-3062-4012-b30a-2722f7e182bd" containerID="0ea469e469eee9f52781eb7e415eec22e2bc5545cf6f89f42cb41d2c7544fdc3" exitCode=0 Jan 05 20:19:40 crc kubenswrapper[4754]: I0105 20:19:40.076097 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6crbn" event={"ID":"ba45be27-3062-4012-b30a-2722f7e182bd","Type":"ContainerDied","Data":"0ea469e469eee9f52781eb7e415eec22e2bc5545cf6f89f42cb41d2c7544fdc3"} Jan 05 20:19:40 crc kubenswrapper[4754]: I0105 20:19:40.076158 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6crbn" event={"ID":"ba45be27-3062-4012-b30a-2722f7e182bd","Type":"ContainerDied","Data":"e1676d4e98673fcf286f564ac1f61204ef6d9e697e830ca6c59c563bf67877ca"} Jan 05 20:19:40 crc kubenswrapper[4754]: I0105 20:19:40.076155 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6crbn" Jan 05 20:19:40 crc kubenswrapper[4754]: I0105 20:19:40.076228 4754 scope.go:117] "RemoveContainer" containerID="0ea469e469eee9f52781eb7e415eec22e2bc5545cf6f89f42cb41d2c7544fdc3" Jan 05 20:19:40 crc kubenswrapper[4754]: I0105 20:19:40.094811 4754 scope.go:117] "RemoveContainer" containerID="afc55893db9bb665ba83746a75c253bdea854561e0c3dcf66cb0a389123ec9ab" Jan 05 20:19:40 crc kubenswrapper[4754]: I0105 20:19:40.130371 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6crbn"] Jan 05 20:19:40 crc kubenswrapper[4754]: I0105 20:19:40.131328 4754 scope.go:117] "RemoveContainer" containerID="44f4eabb988512ca52c7506160fe1930cd12db8e04d7250290324b82410168ee" Jan 05 20:19:40 crc kubenswrapper[4754]: I0105 20:19:40.134452 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6crbn"] Jan 05 20:19:40 crc kubenswrapper[4754]: I0105 20:19:40.158326 4754 scope.go:117] "RemoveContainer" containerID="0ea469e469eee9f52781eb7e415eec22e2bc5545cf6f89f42cb41d2c7544fdc3" Jan 05 20:19:40 crc kubenswrapper[4754]: E0105 20:19:40.159106 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ea469e469eee9f52781eb7e415eec22e2bc5545cf6f89f42cb41d2c7544fdc3\": container with ID starting with 0ea469e469eee9f52781eb7e415eec22e2bc5545cf6f89f42cb41d2c7544fdc3 not found: ID does not exist" containerID="0ea469e469eee9f52781eb7e415eec22e2bc5545cf6f89f42cb41d2c7544fdc3" Jan 05 20:19:40 crc kubenswrapper[4754]: I0105 20:19:40.159182 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ea469e469eee9f52781eb7e415eec22e2bc5545cf6f89f42cb41d2c7544fdc3"} err="failed to get container status \"0ea469e469eee9f52781eb7e415eec22e2bc5545cf6f89f42cb41d2c7544fdc3\": rpc error: code = NotFound desc = could not find 
container \"0ea469e469eee9f52781eb7e415eec22e2bc5545cf6f89f42cb41d2c7544fdc3\": container with ID starting with 0ea469e469eee9f52781eb7e415eec22e2bc5545cf6f89f42cb41d2c7544fdc3 not found: ID does not exist" Jan 05 20:19:40 crc kubenswrapper[4754]: I0105 20:19:40.159229 4754 scope.go:117] "RemoveContainer" containerID="afc55893db9bb665ba83746a75c253bdea854561e0c3dcf66cb0a389123ec9ab" Jan 05 20:19:40 crc kubenswrapper[4754]: E0105 20:19:40.160152 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afc55893db9bb665ba83746a75c253bdea854561e0c3dcf66cb0a389123ec9ab\": container with ID starting with afc55893db9bb665ba83746a75c253bdea854561e0c3dcf66cb0a389123ec9ab not found: ID does not exist" containerID="afc55893db9bb665ba83746a75c253bdea854561e0c3dcf66cb0a389123ec9ab" Jan 05 20:19:40 crc kubenswrapper[4754]: I0105 20:19:40.160181 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afc55893db9bb665ba83746a75c253bdea854561e0c3dcf66cb0a389123ec9ab"} err="failed to get container status \"afc55893db9bb665ba83746a75c253bdea854561e0c3dcf66cb0a389123ec9ab\": rpc error: code = NotFound desc = could not find container \"afc55893db9bb665ba83746a75c253bdea854561e0c3dcf66cb0a389123ec9ab\": container with ID starting with afc55893db9bb665ba83746a75c253bdea854561e0c3dcf66cb0a389123ec9ab not found: ID does not exist" Jan 05 20:19:40 crc kubenswrapper[4754]: I0105 20:19:40.160203 4754 scope.go:117] "RemoveContainer" containerID="44f4eabb988512ca52c7506160fe1930cd12db8e04d7250290324b82410168ee" Jan 05 20:19:40 crc kubenswrapper[4754]: E0105 20:19:40.160938 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44f4eabb988512ca52c7506160fe1930cd12db8e04d7250290324b82410168ee\": container with ID starting with 44f4eabb988512ca52c7506160fe1930cd12db8e04d7250290324b82410168ee not found: ID does 
not exist" containerID="44f4eabb988512ca52c7506160fe1930cd12db8e04d7250290324b82410168ee" Jan 05 20:19:40 crc kubenswrapper[4754]: I0105 20:19:40.161032 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44f4eabb988512ca52c7506160fe1930cd12db8e04d7250290324b82410168ee"} err="failed to get container status \"44f4eabb988512ca52c7506160fe1930cd12db8e04d7250290324b82410168ee\": rpc error: code = NotFound desc = could not find container \"44f4eabb988512ca52c7506160fe1930cd12db8e04d7250290324b82410168ee\": container with ID starting with 44f4eabb988512ca52c7506160fe1930cd12db8e04d7250290324b82410168ee not found: ID does not exist" Jan 05 20:19:41 crc kubenswrapper[4754]: I0105 20:19:41.601344 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba45be27-3062-4012-b30a-2722f7e182bd" path="/var/lib/kubelet/pods/ba45be27-3062-4012-b30a-2722f7e182bd/volumes" Jan 05 20:19:47 crc kubenswrapper[4754]: I0105 20:19:47.198536 4754 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Jan 05 20:19:47 crc kubenswrapper[4754]: I0105 20:19:47.199260 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="7f8103c6-3d68-4568-ae3b-89f606aa116a" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 05 20:19:57 crc kubenswrapper[4754]: I0105 20:19:57.204921 4754 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Jan 05 20:19:57 crc kubenswrapper[4754]: I0105 20:19:57.207601 4754 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="7f8103c6-3d68-4568-ae3b-89f606aa116a" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 05 20:20:07 crc kubenswrapper[4754]: I0105 20:20:07.204427 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Jan 05 20:20:18 crc kubenswrapper[4754]: I0105 20:20:18.109762 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 20:20:18 crc kubenswrapper[4754]: I0105 20:20:18.110359 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 20:20:25 crc kubenswrapper[4754]: I0105 20:20:25.836560 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-pltvd"] Jan 05 20:20:25 crc kubenswrapper[4754]: E0105 20:20:25.837988 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba45be27-3062-4012-b30a-2722f7e182bd" containerName="extract-utilities" Jan 05 20:20:25 crc kubenswrapper[4754]: I0105 20:20:25.838009 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba45be27-3062-4012-b30a-2722f7e182bd" containerName="extract-utilities" Jan 05 20:20:25 crc kubenswrapper[4754]: E0105 20:20:25.838041 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba45be27-3062-4012-b30a-2722f7e182bd" containerName="extract-content" Jan 05 20:20:25 crc kubenswrapper[4754]: I0105 20:20:25.838050 4754 
state_mem.go:107] "Deleted CPUSet assignment" podUID="ba45be27-3062-4012-b30a-2722f7e182bd" containerName="extract-content" Jan 05 20:20:25 crc kubenswrapper[4754]: E0105 20:20:25.838061 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba45be27-3062-4012-b30a-2722f7e182bd" containerName="registry-server" Jan 05 20:20:25 crc kubenswrapper[4754]: I0105 20:20:25.838069 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba45be27-3062-4012-b30a-2722f7e182bd" containerName="registry-server" Jan 05 20:20:25 crc kubenswrapper[4754]: I0105 20:20:25.838261 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba45be27-3062-4012-b30a-2722f7e182bd" containerName="registry-server" Jan 05 20:20:25 crc kubenswrapper[4754]: I0105 20:20:25.839112 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-pltvd" Jan 05 20:20:25 crc kubenswrapper[4754]: I0105 20:20:25.843596 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Jan 05 20:20:25 crc kubenswrapper[4754]: I0105 20:20:25.843746 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-qt8hp" Jan 05 20:20:25 crc kubenswrapper[4754]: I0105 20:20:25.846070 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Jan 05 20:20:25 crc kubenswrapper[4754]: I0105 20:20:25.846442 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Jan 05 20:20:25 crc kubenswrapper[4754]: I0105 20:20:25.846625 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Jan 05 20:20:25 crc kubenswrapper[4754]: I0105 20:20:25.857124 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Jan 05 20:20:25 crc kubenswrapper[4754]: I0105 
20:20:25.859749 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-pltvd"] Jan 05 20:20:25 crc kubenswrapper[4754]: I0105 20:20:25.919735 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/37e04a9d-c744-44f4-9a1c-3f9d93263389-trusted-ca\") pod \"collector-pltvd\" (UID: \"37e04a9d-c744-44f4-9a1c-3f9d93263389\") " pod="openshift-logging/collector-pltvd" Jan 05 20:20:25 crc kubenswrapper[4754]: I0105 20:20:25.923504 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/37e04a9d-c744-44f4-9a1c-3f9d93263389-metrics\") pod \"collector-pltvd\" (UID: \"37e04a9d-c744-44f4-9a1c-3f9d93263389\") " pod="openshift-logging/collector-pltvd" Jan 05 20:20:25 crc kubenswrapper[4754]: I0105 20:20:25.933985 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37e04a9d-c744-44f4-9a1c-3f9d93263389-config\") pod \"collector-pltvd\" (UID: \"37e04a9d-c744-44f4-9a1c-3f9d93263389\") " pod="openshift-logging/collector-pltvd" Jan 05 20:20:25 crc kubenswrapper[4754]: I0105 20:20:25.934079 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/37e04a9d-c744-44f4-9a1c-3f9d93263389-datadir\") pod \"collector-pltvd\" (UID: \"37e04a9d-c744-44f4-9a1c-3f9d93263389\") " pod="openshift-logging/collector-pltvd" Jan 05 20:20:25 crc kubenswrapper[4754]: I0105 20:20:25.934114 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/37e04a9d-c744-44f4-9a1c-3f9d93263389-config-openshift-service-cacrt\") pod \"collector-pltvd\" (UID: \"37e04a9d-c744-44f4-9a1c-3f9d93263389\") " 
pod="openshift-logging/collector-pltvd" Jan 05 20:20:25 crc kubenswrapper[4754]: I0105 20:20:25.934142 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plqxs\" (UniqueName: \"kubernetes.io/projected/37e04a9d-c744-44f4-9a1c-3f9d93263389-kube-api-access-plqxs\") pod \"collector-pltvd\" (UID: \"37e04a9d-c744-44f4-9a1c-3f9d93263389\") " pod="openshift-logging/collector-pltvd" Jan 05 20:20:25 crc kubenswrapper[4754]: I0105 20:20:25.934174 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/37e04a9d-c744-44f4-9a1c-3f9d93263389-entrypoint\") pod \"collector-pltvd\" (UID: \"37e04a9d-c744-44f4-9a1c-3f9d93263389\") " pod="openshift-logging/collector-pltvd" Jan 05 20:20:25 crc kubenswrapper[4754]: I0105 20:20:25.934227 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/37e04a9d-c744-44f4-9a1c-3f9d93263389-sa-token\") pod \"collector-pltvd\" (UID: \"37e04a9d-c744-44f4-9a1c-3f9d93263389\") " pod="openshift-logging/collector-pltvd" Jan 05 20:20:25 crc kubenswrapper[4754]: I0105 20:20:25.934329 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/37e04a9d-c744-44f4-9a1c-3f9d93263389-collector-syslog-receiver\") pod \"collector-pltvd\" (UID: \"37e04a9d-c744-44f4-9a1c-3f9d93263389\") " pod="openshift-logging/collector-pltvd" Jan 05 20:20:25 crc kubenswrapper[4754]: I0105 20:20:25.934372 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/37e04a9d-c744-44f4-9a1c-3f9d93263389-tmp\") pod \"collector-pltvd\" (UID: \"37e04a9d-c744-44f4-9a1c-3f9d93263389\") " pod="openshift-logging/collector-pltvd" Jan 05 20:20:25 
crc kubenswrapper[4754]: I0105 20:20:25.934410 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/37e04a9d-c744-44f4-9a1c-3f9d93263389-collector-token\") pod \"collector-pltvd\" (UID: \"37e04a9d-c744-44f4-9a1c-3f9d93263389\") " pod="openshift-logging/collector-pltvd" Jan 05 20:20:25 crc kubenswrapper[4754]: I0105 20:20:25.946662 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-pltvd"] Jan 05 20:20:25 crc kubenswrapper[4754]: E0105 20:20:25.959071 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-plqxs metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-logging/collector-pltvd" podUID="37e04a9d-c744-44f4-9a1c-3f9d93263389" Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.036762 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/37e04a9d-c744-44f4-9a1c-3f9d93263389-collector-token\") pod \"collector-pltvd\" (UID: \"37e04a9d-c744-44f4-9a1c-3f9d93263389\") " pod="openshift-logging/collector-pltvd" Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.036839 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/37e04a9d-c744-44f4-9a1c-3f9d93263389-trusted-ca\") pod \"collector-pltvd\" (UID: \"37e04a9d-c744-44f4-9a1c-3f9d93263389\") " pod="openshift-logging/collector-pltvd" Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.036862 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/37e04a9d-c744-44f4-9a1c-3f9d93263389-metrics\") pod \"collector-pltvd\" (UID: 
\"37e04a9d-c744-44f4-9a1c-3f9d93263389\") " pod="openshift-logging/collector-pltvd" Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.036926 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37e04a9d-c744-44f4-9a1c-3f9d93263389-config\") pod \"collector-pltvd\" (UID: \"37e04a9d-c744-44f4-9a1c-3f9d93263389\") " pod="openshift-logging/collector-pltvd" Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.036955 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/37e04a9d-c744-44f4-9a1c-3f9d93263389-config-openshift-service-cacrt\") pod \"collector-pltvd\" (UID: \"37e04a9d-c744-44f4-9a1c-3f9d93263389\") " pod="openshift-logging/collector-pltvd" Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.036974 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/37e04a9d-c744-44f4-9a1c-3f9d93263389-datadir\") pod \"collector-pltvd\" (UID: \"37e04a9d-c744-44f4-9a1c-3f9d93263389\") " pod="openshift-logging/collector-pltvd" Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.036996 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plqxs\" (UniqueName: \"kubernetes.io/projected/37e04a9d-c744-44f4-9a1c-3f9d93263389-kube-api-access-plqxs\") pod \"collector-pltvd\" (UID: \"37e04a9d-c744-44f4-9a1c-3f9d93263389\") " pod="openshift-logging/collector-pltvd" Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.037016 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/37e04a9d-c744-44f4-9a1c-3f9d93263389-entrypoint\") pod \"collector-pltvd\" (UID: \"37e04a9d-c744-44f4-9a1c-3f9d93263389\") " pod="openshift-logging/collector-pltvd" Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 
20:20:26.037040 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/37e04a9d-c744-44f4-9a1c-3f9d93263389-sa-token\") pod \"collector-pltvd\" (UID: \"37e04a9d-c744-44f4-9a1c-3f9d93263389\") " pod="openshift-logging/collector-pltvd" Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.037071 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/37e04a9d-c744-44f4-9a1c-3f9d93263389-collector-syslog-receiver\") pod \"collector-pltvd\" (UID: \"37e04a9d-c744-44f4-9a1c-3f9d93263389\") " pod="openshift-logging/collector-pltvd" Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.037091 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/37e04a9d-c744-44f4-9a1c-3f9d93263389-tmp\") pod \"collector-pltvd\" (UID: \"37e04a9d-c744-44f4-9a1c-3f9d93263389\") " pod="openshift-logging/collector-pltvd" Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.038368 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/37e04a9d-c744-44f4-9a1c-3f9d93263389-datadir\") pod \"collector-pltvd\" (UID: \"37e04a9d-c744-44f4-9a1c-3f9d93263389\") " pod="openshift-logging/collector-pltvd" Jan 05 20:20:26 crc kubenswrapper[4754]: E0105 20:20:26.038582 4754 secret.go:188] Couldn't get secret openshift-logging/collector-metrics: secret "collector-metrics" not found Jan 05 20:20:26 crc kubenswrapper[4754]: E0105 20:20:26.038672 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37e04a9d-c744-44f4-9a1c-3f9d93263389-metrics podName:37e04a9d-c744-44f4-9a1c-3f9d93263389 nodeName:}" failed. No retries permitted until 2026-01-05 20:20:26.5386422 +0000 UTC m=+913.247826074 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics" (UniqueName: "kubernetes.io/secret/37e04a9d-c744-44f4-9a1c-3f9d93263389-metrics") pod "collector-pltvd" (UID: "37e04a9d-c744-44f4-9a1c-3f9d93263389") : secret "collector-metrics" not found Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.040073 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/37e04a9d-c744-44f4-9a1c-3f9d93263389-trusted-ca\") pod \"collector-pltvd\" (UID: \"37e04a9d-c744-44f4-9a1c-3f9d93263389\") " pod="openshift-logging/collector-pltvd" Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.040958 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/37e04a9d-c744-44f4-9a1c-3f9d93263389-entrypoint\") pod \"collector-pltvd\" (UID: \"37e04a9d-c744-44f4-9a1c-3f9d93263389\") " pod="openshift-logging/collector-pltvd" Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.041358 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37e04a9d-c744-44f4-9a1c-3f9d93263389-config\") pod \"collector-pltvd\" (UID: \"37e04a9d-c744-44f4-9a1c-3f9d93263389\") " pod="openshift-logging/collector-pltvd" Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.042942 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/37e04a9d-c744-44f4-9a1c-3f9d93263389-tmp\") pod \"collector-pltvd\" (UID: \"37e04a9d-c744-44f4-9a1c-3f9d93263389\") " pod="openshift-logging/collector-pltvd" Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.043229 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/37e04a9d-c744-44f4-9a1c-3f9d93263389-config-openshift-service-cacrt\") pod \"collector-pltvd\" (UID: \"37e04a9d-c744-44f4-9a1c-3f9d93263389\") " 
pod="openshift-logging/collector-pltvd" Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.053510 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/37e04a9d-c744-44f4-9a1c-3f9d93263389-collector-syslog-receiver\") pod \"collector-pltvd\" (UID: \"37e04a9d-c744-44f4-9a1c-3f9d93263389\") " pod="openshift-logging/collector-pltvd" Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.057145 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/37e04a9d-c744-44f4-9a1c-3f9d93263389-sa-token\") pod \"collector-pltvd\" (UID: \"37e04a9d-c744-44f4-9a1c-3f9d93263389\") " pod="openshift-logging/collector-pltvd" Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.059203 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/37e04a9d-c744-44f4-9a1c-3f9d93263389-collector-token\") pod \"collector-pltvd\" (UID: \"37e04a9d-c744-44f4-9a1c-3f9d93263389\") " pod="openshift-logging/collector-pltvd" Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.063334 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plqxs\" (UniqueName: \"kubernetes.io/projected/37e04a9d-c744-44f4-9a1c-3f9d93263389-kube-api-access-plqxs\") pod \"collector-pltvd\" (UID: \"37e04a9d-c744-44f4-9a1c-3f9d93263389\") " pod="openshift-logging/collector-pltvd" Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.545714 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/37e04a9d-c744-44f4-9a1c-3f9d93263389-metrics\") pod \"collector-pltvd\" (UID: \"37e04a9d-c744-44f4-9a1c-3f9d93263389\") " pod="openshift-logging/collector-pltvd" Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.551930 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"metrics\" (UniqueName: \"kubernetes.io/secret/37e04a9d-c744-44f4-9a1c-3f9d93263389-metrics\") pod \"collector-pltvd\" (UID: \"37e04a9d-c744-44f4-9a1c-3f9d93263389\") " pod="openshift-logging/collector-pltvd" Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.804283 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-pltvd" Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.820687 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-pltvd" Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.854724 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37e04a9d-c744-44f4-9a1c-3f9d93263389-config\") pod \"37e04a9d-c744-44f4-9a1c-3f9d93263389\" (UID: \"37e04a9d-c744-44f4-9a1c-3f9d93263389\") " Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.854891 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plqxs\" (UniqueName: \"kubernetes.io/projected/37e04a9d-c744-44f4-9a1c-3f9d93263389-kube-api-access-plqxs\") pod \"37e04a9d-c744-44f4-9a1c-3f9d93263389\" (UID: \"37e04a9d-c744-44f4-9a1c-3f9d93263389\") " Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.854972 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/37e04a9d-c744-44f4-9a1c-3f9d93263389-collector-syslog-receiver\") pod \"37e04a9d-c744-44f4-9a1c-3f9d93263389\" (UID: \"37e04a9d-c744-44f4-9a1c-3f9d93263389\") " Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.855071 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/37e04a9d-c744-44f4-9a1c-3f9d93263389-config-openshift-service-cacrt\") pod \"37e04a9d-c744-44f4-9a1c-3f9d93263389\" (UID: 
\"37e04a9d-c744-44f4-9a1c-3f9d93263389\") " Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.855133 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/37e04a9d-c744-44f4-9a1c-3f9d93263389-collector-token\") pod \"37e04a9d-c744-44f4-9a1c-3f9d93263389\" (UID: \"37e04a9d-c744-44f4-9a1c-3f9d93263389\") " Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.855335 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/37e04a9d-c744-44f4-9a1c-3f9d93263389-trusted-ca\") pod \"37e04a9d-c744-44f4-9a1c-3f9d93263389\" (UID: \"37e04a9d-c744-44f4-9a1c-3f9d93263389\") " Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.855406 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/37e04a9d-c744-44f4-9a1c-3f9d93263389-datadir\") pod \"37e04a9d-c744-44f4-9a1c-3f9d93263389\" (UID: \"37e04a9d-c744-44f4-9a1c-3f9d93263389\") " Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.855545 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37e04a9d-c744-44f4-9a1c-3f9d93263389-config" (OuterVolumeSpecName: "config") pod "37e04a9d-c744-44f4-9a1c-3f9d93263389" (UID: "37e04a9d-c744-44f4-9a1c-3f9d93263389"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.855717 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/37e04a9d-c744-44f4-9a1c-3f9d93263389-tmp\") pod \"37e04a9d-c744-44f4-9a1c-3f9d93263389\" (UID: \"37e04a9d-c744-44f4-9a1c-3f9d93263389\") " Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.855792 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/37e04a9d-c744-44f4-9a1c-3f9d93263389-entrypoint\") pod \"37e04a9d-c744-44f4-9a1c-3f9d93263389\" (UID: \"37e04a9d-c744-44f4-9a1c-3f9d93263389\") " Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.855868 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/37e04a9d-c744-44f4-9a1c-3f9d93263389-sa-token\") pod \"37e04a9d-c744-44f4-9a1c-3f9d93263389\" (UID: \"37e04a9d-c744-44f4-9a1c-3f9d93263389\") " Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.856003 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/37e04a9d-c744-44f4-9a1c-3f9d93263389-metrics\") pod \"37e04a9d-c744-44f4-9a1c-3f9d93263389\" (UID: \"37e04a9d-c744-44f4-9a1c-3f9d93263389\") " Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.856152 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37e04a9d-c744-44f4-9a1c-3f9d93263389-datadir" (OuterVolumeSpecName: "datadir") pod "37e04a9d-c744-44f4-9a1c-3f9d93263389" (UID: "37e04a9d-c744-44f4-9a1c-3f9d93263389"). InnerVolumeSpecName "datadir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.856474 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37e04a9d-c744-44f4-9a1c-3f9d93263389-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "37e04a9d-c744-44f4-9a1c-3f9d93263389" (UID: "37e04a9d-c744-44f4-9a1c-3f9d93263389"). InnerVolumeSpecName "config-openshift-service-cacrt". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.857354 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37e04a9d-c744-44f4-9a1c-3f9d93263389-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "37e04a9d-c744-44f4-9a1c-3f9d93263389" (UID: "37e04a9d-c744-44f4-9a1c-3f9d93263389"). InnerVolumeSpecName "entrypoint". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.857615 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37e04a9d-c744-44f4-9a1c-3f9d93263389-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "37e04a9d-c744-44f4-9a1c-3f9d93263389" (UID: "37e04a9d-c744-44f4-9a1c-3f9d93263389"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.858200 4754 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/37e04a9d-c744-44f4-9a1c-3f9d93263389-entrypoint\") on node \"crc\" DevicePath \"\"" Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.858257 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37e04a9d-c744-44f4-9a1c-3f9d93263389-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.858282 4754 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/37e04a9d-c744-44f4-9a1c-3f9d93263389-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\"" Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.858470 4754 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/37e04a9d-c744-44f4-9a1c-3f9d93263389-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.858501 4754 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/37e04a9d-c744-44f4-9a1c-3f9d93263389-datadir\") on node \"crc\" DevicePath \"\"" Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.862074 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37e04a9d-c744-44f4-9a1c-3f9d93263389-sa-token" (OuterVolumeSpecName: "sa-token") pod "37e04a9d-c744-44f4-9a1c-3f9d93263389" (UID: "37e04a9d-c744-44f4-9a1c-3f9d93263389"). InnerVolumeSpecName "sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.862174 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37e04a9d-c744-44f4-9a1c-3f9d93263389-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "37e04a9d-c744-44f4-9a1c-3f9d93263389" (UID: "37e04a9d-c744-44f4-9a1c-3f9d93263389"). InnerVolumeSpecName "collector-syslog-receiver". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.862992 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37e04a9d-c744-44f4-9a1c-3f9d93263389-metrics" (OuterVolumeSpecName: "metrics") pod "37e04a9d-c744-44f4-9a1c-3f9d93263389" (UID: "37e04a9d-c744-44f4-9a1c-3f9d93263389"). InnerVolumeSpecName "metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.864382 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37e04a9d-c744-44f4-9a1c-3f9d93263389-tmp" (OuterVolumeSpecName: "tmp") pod "37e04a9d-c744-44f4-9a1c-3f9d93263389" (UID: "37e04a9d-c744-44f4-9a1c-3f9d93263389"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.864448 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37e04a9d-c744-44f4-9a1c-3f9d93263389-collector-token" (OuterVolumeSpecName: "collector-token") pod "37e04a9d-c744-44f4-9a1c-3f9d93263389" (UID: "37e04a9d-c744-44f4-9a1c-3f9d93263389"). InnerVolumeSpecName "collector-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.866458 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37e04a9d-c744-44f4-9a1c-3f9d93263389-kube-api-access-plqxs" (OuterVolumeSpecName: "kube-api-access-plqxs") pod "37e04a9d-c744-44f4-9a1c-3f9d93263389" (UID: "37e04a9d-c744-44f4-9a1c-3f9d93263389"). InnerVolumeSpecName "kube-api-access-plqxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.961333 4754 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/37e04a9d-c744-44f4-9a1c-3f9d93263389-tmp\") on node \"crc\" DevicePath \"\"" Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.961482 4754 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/37e04a9d-c744-44f4-9a1c-3f9d93263389-sa-token\") on node \"crc\" DevicePath \"\"" Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.961509 4754 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/37e04a9d-c744-44f4-9a1c-3f9d93263389-metrics\") on node \"crc\" DevicePath \"\"" Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.961528 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plqxs\" (UniqueName: \"kubernetes.io/projected/37e04a9d-c744-44f4-9a1c-3f9d93263389-kube-api-access-plqxs\") on node \"crc\" DevicePath \"\"" Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.961554 4754 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/37e04a9d-c744-44f4-9a1c-3f9d93263389-collector-syslog-receiver\") on node \"crc\" DevicePath \"\"" Jan 05 20:20:26 crc kubenswrapper[4754]: I0105 20:20:26.961574 4754 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: 
\"kubernetes.io/secret/37e04a9d-c744-44f4-9a1c-3f9d93263389-collector-token\") on node \"crc\" DevicePath \"\"" Jan 05 20:20:27 crc kubenswrapper[4754]: I0105 20:20:27.814636 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-pltvd" Jan 05 20:20:27 crc kubenswrapper[4754]: I0105 20:20:27.884631 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-pltvd"] Jan 05 20:20:27 crc kubenswrapper[4754]: I0105 20:20:27.898383 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-pltvd"] Jan 05 20:20:27 crc kubenswrapper[4754]: I0105 20:20:27.907994 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-64f4t"] Jan 05 20:20:27 crc kubenswrapper[4754]: I0105 20:20:27.909283 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-64f4t" Jan 05 20:20:27 crc kubenswrapper[4754]: I0105 20:20:27.914926 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Jan 05 20:20:27 crc kubenswrapper[4754]: I0105 20:20:27.915319 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Jan 05 20:20:27 crc kubenswrapper[4754]: I0105 20:20:27.916013 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Jan 05 20:20:27 crc kubenswrapper[4754]: I0105 20:20:27.916012 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-qt8hp" Jan 05 20:20:27 crc kubenswrapper[4754]: I0105 20:20:27.916286 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Jan 05 20:20:27 crc kubenswrapper[4754]: I0105 20:20:27.923225 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Jan 05 
20:20:27 crc kubenswrapper[4754]: I0105 20:20:27.928481 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-64f4t"] Jan 05 20:20:27 crc kubenswrapper[4754]: I0105 20:20:27.982651 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c49d5d9c-150d-41cf-8fa3-a4484867b841-trusted-ca\") pod \"collector-64f4t\" (UID: \"c49d5d9c-150d-41cf-8fa3-a4484867b841\") " pod="openshift-logging/collector-64f4t" Jan 05 20:20:27 crc kubenswrapper[4754]: I0105 20:20:27.982723 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/c49d5d9c-150d-41cf-8fa3-a4484867b841-metrics\") pod \"collector-64f4t\" (UID: \"c49d5d9c-150d-41cf-8fa3-a4484867b841\") " pod="openshift-logging/collector-64f4t" Jan 05 20:20:27 crc kubenswrapper[4754]: I0105 20:20:27.982760 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c49d5d9c-150d-41cf-8fa3-a4484867b841-config\") pod \"collector-64f4t\" (UID: \"c49d5d9c-150d-41cf-8fa3-a4484867b841\") " pod="openshift-logging/collector-64f4t" Jan 05 20:20:27 crc kubenswrapper[4754]: I0105 20:20:27.982810 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c49d5d9c-150d-41cf-8fa3-a4484867b841-tmp\") pod \"collector-64f4t\" (UID: \"c49d5d9c-150d-41cf-8fa3-a4484867b841\") " pod="openshift-logging/collector-64f4t" Jan 05 20:20:27 crc kubenswrapper[4754]: I0105 20:20:27.982861 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/c49d5d9c-150d-41cf-8fa3-a4484867b841-entrypoint\") pod \"collector-64f4t\" (UID: \"c49d5d9c-150d-41cf-8fa3-a4484867b841\") " 
pod="openshift-logging/collector-64f4t" Jan 05 20:20:27 crc kubenswrapper[4754]: I0105 20:20:27.982906 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btm4c\" (UniqueName: \"kubernetes.io/projected/c49d5d9c-150d-41cf-8fa3-a4484867b841-kube-api-access-btm4c\") pod \"collector-64f4t\" (UID: \"c49d5d9c-150d-41cf-8fa3-a4484867b841\") " pod="openshift-logging/collector-64f4t" Jan 05 20:20:27 crc kubenswrapper[4754]: I0105 20:20:27.982927 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/c49d5d9c-150d-41cf-8fa3-a4484867b841-collector-syslog-receiver\") pod \"collector-64f4t\" (UID: \"c49d5d9c-150d-41cf-8fa3-a4484867b841\") " pod="openshift-logging/collector-64f4t" Jan 05 20:20:27 crc kubenswrapper[4754]: I0105 20:20:27.982970 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/c49d5d9c-150d-41cf-8fa3-a4484867b841-collector-token\") pod \"collector-64f4t\" (UID: \"c49d5d9c-150d-41cf-8fa3-a4484867b841\") " pod="openshift-logging/collector-64f4t" Jan 05 20:20:27 crc kubenswrapper[4754]: I0105 20:20:27.982996 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/c49d5d9c-150d-41cf-8fa3-a4484867b841-datadir\") pod \"collector-64f4t\" (UID: \"c49d5d9c-150d-41cf-8fa3-a4484867b841\") " pod="openshift-logging/collector-64f4t" Jan 05 20:20:27 crc kubenswrapper[4754]: I0105 20:20:27.983017 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/c49d5d9c-150d-41cf-8fa3-a4484867b841-config-openshift-service-cacrt\") pod \"collector-64f4t\" (UID: \"c49d5d9c-150d-41cf-8fa3-a4484867b841\") " 
pod="openshift-logging/collector-64f4t" Jan 05 20:20:27 crc kubenswrapper[4754]: I0105 20:20:27.983038 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/c49d5d9c-150d-41cf-8fa3-a4484867b841-sa-token\") pod \"collector-64f4t\" (UID: \"c49d5d9c-150d-41cf-8fa3-a4484867b841\") " pod="openshift-logging/collector-64f4t" Jan 05 20:20:28 crc kubenswrapper[4754]: I0105 20:20:28.084810 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/c49d5d9c-150d-41cf-8fa3-a4484867b841-collector-token\") pod \"collector-64f4t\" (UID: \"c49d5d9c-150d-41cf-8fa3-a4484867b841\") " pod="openshift-logging/collector-64f4t" Jan 05 20:20:28 crc kubenswrapper[4754]: I0105 20:20:28.084907 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/c49d5d9c-150d-41cf-8fa3-a4484867b841-datadir\") pod \"collector-64f4t\" (UID: \"c49d5d9c-150d-41cf-8fa3-a4484867b841\") " pod="openshift-logging/collector-64f4t" Jan 05 20:20:28 crc kubenswrapper[4754]: I0105 20:20:28.084944 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/c49d5d9c-150d-41cf-8fa3-a4484867b841-config-openshift-service-cacrt\") pod \"collector-64f4t\" (UID: \"c49d5d9c-150d-41cf-8fa3-a4484867b841\") " pod="openshift-logging/collector-64f4t" Jan 05 20:20:28 crc kubenswrapper[4754]: I0105 20:20:28.085009 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/c49d5d9c-150d-41cf-8fa3-a4484867b841-sa-token\") pod \"collector-64f4t\" (UID: \"c49d5d9c-150d-41cf-8fa3-a4484867b841\") " pod="openshift-logging/collector-64f4t" Jan 05 20:20:28 crc kubenswrapper[4754]: I0105 20:20:28.085086 4754 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/c49d5d9c-150d-41cf-8fa3-a4484867b841-datadir\") pod \"collector-64f4t\" (UID: \"c49d5d9c-150d-41cf-8fa3-a4484867b841\") " pod="openshift-logging/collector-64f4t" Jan 05 20:20:28 crc kubenswrapper[4754]: I0105 20:20:28.086367 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/c49d5d9c-150d-41cf-8fa3-a4484867b841-config-openshift-service-cacrt\") pod \"collector-64f4t\" (UID: \"c49d5d9c-150d-41cf-8fa3-a4484867b841\") " pod="openshift-logging/collector-64f4t" Jan 05 20:20:28 crc kubenswrapper[4754]: I0105 20:20:28.086614 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c49d5d9c-150d-41cf-8fa3-a4484867b841-trusted-ca\") pod \"collector-64f4t\" (UID: \"c49d5d9c-150d-41cf-8fa3-a4484867b841\") " pod="openshift-logging/collector-64f4t" Jan 05 20:20:28 crc kubenswrapper[4754]: I0105 20:20:28.087842 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c49d5d9c-150d-41cf-8fa3-a4484867b841-trusted-ca\") pod \"collector-64f4t\" (UID: \"c49d5d9c-150d-41cf-8fa3-a4484867b841\") " pod="openshift-logging/collector-64f4t" Jan 05 20:20:28 crc kubenswrapper[4754]: I0105 20:20:28.089584 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/c49d5d9c-150d-41cf-8fa3-a4484867b841-metrics\") pod \"collector-64f4t\" (UID: \"c49d5d9c-150d-41cf-8fa3-a4484867b841\") " pod="openshift-logging/collector-64f4t" Jan 05 20:20:28 crc kubenswrapper[4754]: I0105 20:20:28.089660 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c49d5d9c-150d-41cf-8fa3-a4484867b841-config\") pod 
\"collector-64f4t\" (UID: \"c49d5d9c-150d-41cf-8fa3-a4484867b841\") " pod="openshift-logging/collector-64f4t" Jan 05 20:20:28 crc kubenswrapper[4754]: I0105 20:20:28.089730 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c49d5d9c-150d-41cf-8fa3-a4484867b841-tmp\") pod \"collector-64f4t\" (UID: \"c49d5d9c-150d-41cf-8fa3-a4484867b841\") " pod="openshift-logging/collector-64f4t" Jan 05 20:20:28 crc kubenswrapper[4754]: I0105 20:20:28.089663 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/c49d5d9c-150d-41cf-8fa3-a4484867b841-collector-token\") pod \"collector-64f4t\" (UID: \"c49d5d9c-150d-41cf-8fa3-a4484867b841\") " pod="openshift-logging/collector-64f4t" Jan 05 20:20:28 crc kubenswrapper[4754]: I0105 20:20:28.089865 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/c49d5d9c-150d-41cf-8fa3-a4484867b841-entrypoint\") pod \"collector-64f4t\" (UID: \"c49d5d9c-150d-41cf-8fa3-a4484867b841\") " pod="openshift-logging/collector-64f4t" Jan 05 20:20:28 crc kubenswrapper[4754]: I0105 20:20:28.089974 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btm4c\" (UniqueName: \"kubernetes.io/projected/c49d5d9c-150d-41cf-8fa3-a4484867b841-kube-api-access-btm4c\") pod \"collector-64f4t\" (UID: \"c49d5d9c-150d-41cf-8fa3-a4484867b841\") " pod="openshift-logging/collector-64f4t" Jan 05 20:20:28 crc kubenswrapper[4754]: I0105 20:20:28.090012 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/c49d5d9c-150d-41cf-8fa3-a4484867b841-collector-syslog-receiver\") pod \"collector-64f4t\" (UID: \"c49d5d9c-150d-41cf-8fa3-a4484867b841\") " pod="openshift-logging/collector-64f4t" Jan 05 20:20:28 crc kubenswrapper[4754]: 
I0105 20:20:28.090388 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c49d5d9c-150d-41cf-8fa3-a4484867b841-config\") pod \"collector-64f4t\" (UID: \"c49d5d9c-150d-41cf-8fa3-a4484867b841\") " pod="openshift-logging/collector-64f4t" Jan 05 20:20:28 crc kubenswrapper[4754]: I0105 20:20:28.090986 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/c49d5d9c-150d-41cf-8fa3-a4484867b841-entrypoint\") pod \"collector-64f4t\" (UID: \"c49d5d9c-150d-41cf-8fa3-a4484867b841\") " pod="openshift-logging/collector-64f4t" Jan 05 20:20:28 crc kubenswrapper[4754]: I0105 20:20:28.093455 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/c49d5d9c-150d-41cf-8fa3-a4484867b841-metrics\") pod \"collector-64f4t\" (UID: \"c49d5d9c-150d-41cf-8fa3-a4484867b841\") " pod="openshift-logging/collector-64f4t" Jan 05 20:20:28 crc kubenswrapper[4754]: I0105 20:20:28.093560 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/c49d5d9c-150d-41cf-8fa3-a4484867b841-collector-syslog-receiver\") pod \"collector-64f4t\" (UID: \"c49d5d9c-150d-41cf-8fa3-a4484867b841\") " pod="openshift-logging/collector-64f4t" Jan 05 20:20:28 crc kubenswrapper[4754]: I0105 20:20:28.097199 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c49d5d9c-150d-41cf-8fa3-a4484867b841-tmp\") pod \"collector-64f4t\" (UID: \"c49d5d9c-150d-41cf-8fa3-a4484867b841\") " pod="openshift-logging/collector-64f4t" Jan 05 20:20:28 crc kubenswrapper[4754]: I0105 20:20:28.118342 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/c49d5d9c-150d-41cf-8fa3-a4484867b841-sa-token\") pod \"collector-64f4t\" (UID: 
\"c49d5d9c-150d-41cf-8fa3-a4484867b841\") " pod="openshift-logging/collector-64f4t" Jan 05 20:20:28 crc kubenswrapper[4754]: I0105 20:20:28.123619 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btm4c\" (UniqueName: \"kubernetes.io/projected/c49d5d9c-150d-41cf-8fa3-a4484867b841-kube-api-access-btm4c\") pod \"collector-64f4t\" (UID: \"c49d5d9c-150d-41cf-8fa3-a4484867b841\") " pod="openshift-logging/collector-64f4t" Jan 05 20:20:28 crc kubenswrapper[4754]: I0105 20:20:28.251875 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-64f4t" Jan 05 20:20:28 crc kubenswrapper[4754]: I0105 20:20:28.779375 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-64f4t"] Jan 05 20:20:28 crc kubenswrapper[4754]: I0105 20:20:28.826107 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-64f4t" event={"ID":"c49d5d9c-150d-41cf-8fa3-a4484867b841","Type":"ContainerStarted","Data":"d1d99505ea0c5c343b3e6b100797cf3865d2ec3f0a174fd1e07b20a368628372"} Jan 05 20:20:29 crc kubenswrapper[4754]: I0105 20:20:29.605796 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37e04a9d-c744-44f4-9a1c-3f9d93263389" path="/var/lib/kubelet/pods/37e04a9d-c744-44f4-9a1c-3f9d93263389/volumes" Jan 05 20:20:36 crc kubenswrapper[4754]: I0105 20:20:36.911552 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-64f4t" event={"ID":"c49d5d9c-150d-41cf-8fa3-a4484867b841","Type":"ContainerStarted","Data":"4b9005acd80f50b4cf1e2ad0998d1daac891bff960f12204bb468ac1ea5978ca"} Jan 05 20:20:36 crc kubenswrapper[4754]: I0105 20:20:36.961969 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-64f4t" podStartSLOduration=2.8706522850000002 podStartE2EDuration="9.961935152s" podCreationTimestamp="2026-01-05 20:20:27 +0000 UTC" 
firstStartedPulling="2026-01-05 20:20:28.787329485 +0000 UTC m=+915.496513379" lastFinishedPulling="2026-01-05 20:20:35.878612372 +0000 UTC m=+922.587796246" observedRunningTime="2026-01-05 20:20:36.947530315 +0000 UTC m=+923.656714199" watchObservedRunningTime="2026-01-05 20:20:36.961935152 +0000 UTC m=+923.671119036" Jan 05 20:20:48 crc kubenswrapper[4754]: I0105 20:20:48.109359 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 20:20:48 crc kubenswrapper[4754]: I0105 20:20:48.110206 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 20:21:09 crc kubenswrapper[4754]: I0105 20:21:09.313947 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa84ztr5"] Jan 05 20:21:09 crc kubenswrapper[4754]: I0105 20:21:09.319017 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa84ztr5" Jan 05 20:21:09 crc kubenswrapper[4754]: I0105 20:21:09.326368 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 05 20:21:09 crc kubenswrapper[4754]: I0105 20:21:09.333577 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa84ztr5"] Jan 05 20:21:09 crc kubenswrapper[4754]: I0105 20:21:09.429909 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmxxz\" (UniqueName: \"kubernetes.io/projected/cf605837-8d65-4b71-be63-362acdce07b5-kube-api-access-fmxxz\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa84ztr5\" (UID: \"cf605837-8d65-4b71-be63-362acdce07b5\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa84ztr5" Jan 05 20:21:09 crc kubenswrapper[4754]: I0105 20:21:09.430653 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cf605837-8d65-4b71-be63-362acdce07b5-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa84ztr5\" (UID: \"cf605837-8d65-4b71-be63-362acdce07b5\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa84ztr5" Jan 05 20:21:09 crc kubenswrapper[4754]: I0105 20:21:09.431139 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cf605837-8d65-4b71-be63-362acdce07b5-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa84ztr5\" (UID: \"cf605837-8d65-4b71-be63-362acdce07b5\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa84ztr5" Jan 05 20:21:09 crc kubenswrapper[4754]: 
I0105 20:21:09.533069 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmxxz\" (UniqueName: \"kubernetes.io/projected/cf605837-8d65-4b71-be63-362acdce07b5-kube-api-access-fmxxz\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa84ztr5\" (UID: \"cf605837-8d65-4b71-be63-362acdce07b5\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa84ztr5" Jan 05 20:21:09 crc kubenswrapper[4754]: I0105 20:21:09.533188 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cf605837-8d65-4b71-be63-362acdce07b5-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa84ztr5\" (UID: \"cf605837-8d65-4b71-be63-362acdce07b5\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa84ztr5" Jan 05 20:21:09 crc kubenswrapper[4754]: I0105 20:21:09.533257 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cf605837-8d65-4b71-be63-362acdce07b5-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa84ztr5\" (UID: \"cf605837-8d65-4b71-be63-362acdce07b5\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa84ztr5" Jan 05 20:21:09 crc kubenswrapper[4754]: I0105 20:21:09.533986 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cf605837-8d65-4b71-be63-362acdce07b5-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa84ztr5\" (UID: \"cf605837-8d65-4b71-be63-362acdce07b5\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa84ztr5" Jan 05 20:21:09 crc kubenswrapper[4754]: I0105 20:21:09.534076 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/cf605837-8d65-4b71-be63-362acdce07b5-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa84ztr5\" (UID: \"cf605837-8d65-4b71-be63-362acdce07b5\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa84ztr5" Jan 05 20:21:09 crc kubenswrapper[4754]: I0105 20:21:09.557383 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmxxz\" (UniqueName: \"kubernetes.io/projected/cf605837-8d65-4b71-be63-362acdce07b5-kube-api-access-fmxxz\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa84ztr5\" (UID: \"cf605837-8d65-4b71-be63-362acdce07b5\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa84ztr5" Jan 05 20:21:09 crc kubenswrapper[4754]: I0105 20:21:09.668954 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa84ztr5" Jan 05 20:21:10 crc kubenswrapper[4754]: I0105 20:21:10.227831 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa84ztr5"] Jan 05 20:21:10 crc kubenswrapper[4754]: I0105 20:21:10.242146 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa84ztr5" event={"ID":"cf605837-8d65-4b71-be63-362acdce07b5","Type":"ContainerStarted","Data":"4cb7fdcb80e84b488bb6d3fa42fae75e3be5f6f64e434532c3f73e6f09e609e1"} Jan 05 20:21:11 crc kubenswrapper[4754]: I0105 20:21:11.251725 4754 generic.go:334] "Generic (PLEG): container finished" podID="cf605837-8d65-4b71-be63-362acdce07b5" containerID="2b46a4d176542a84984cef0d70b3f18d18818d6d16460a52c32dab42c5eb7c40" exitCode=0 Jan 05 20:21:11 crc kubenswrapper[4754]: I0105 20:21:11.252060 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa84ztr5" event={"ID":"cf605837-8d65-4b71-be63-362acdce07b5","Type":"ContainerDied","Data":"2b46a4d176542a84984cef0d70b3f18d18818d6d16460a52c32dab42c5eb7c40"} Jan 05 20:21:13 crc kubenswrapper[4754]: I0105 20:21:13.274652 4754 generic.go:334] "Generic (PLEG): container finished" podID="cf605837-8d65-4b71-be63-362acdce07b5" containerID="78874ff2ca5afb1589d0cedb55473ddba125b7b3ef7893728bab74ac9f3a1d1f" exitCode=0 Jan 05 20:21:13 crc kubenswrapper[4754]: I0105 20:21:13.274725 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa84ztr5" event={"ID":"cf605837-8d65-4b71-be63-362acdce07b5","Type":"ContainerDied","Data":"78874ff2ca5afb1589d0cedb55473ddba125b7b3ef7893728bab74ac9f3a1d1f"} Jan 05 20:21:14 crc kubenswrapper[4754]: I0105 20:21:14.292361 4754 generic.go:334] "Generic (PLEG): container finished" podID="cf605837-8d65-4b71-be63-362acdce07b5" containerID="37ba47171650defb0498dd76534de6c26f2e0139a6bfbefd0d3a3457bbdc941c" exitCode=0 Jan 05 20:21:14 crc kubenswrapper[4754]: I0105 20:21:14.292802 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa84ztr5" event={"ID":"cf605837-8d65-4b71-be63-362acdce07b5","Type":"ContainerDied","Data":"37ba47171650defb0498dd76534de6c26f2e0139a6bfbefd0d3a3457bbdc941c"} Jan 05 20:21:15 crc kubenswrapper[4754]: I0105 20:21:15.768268 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa84ztr5" Jan 05 20:21:15 crc kubenswrapper[4754]: I0105 20:21:15.884790 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cf605837-8d65-4b71-be63-362acdce07b5-bundle\") pod \"cf605837-8d65-4b71-be63-362acdce07b5\" (UID: \"cf605837-8d65-4b71-be63-362acdce07b5\") " Jan 05 20:21:15 crc kubenswrapper[4754]: I0105 20:21:15.885065 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmxxz\" (UniqueName: \"kubernetes.io/projected/cf605837-8d65-4b71-be63-362acdce07b5-kube-api-access-fmxxz\") pod \"cf605837-8d65-4b71-be63-362acdce07b5\" (UID: \"cf605837-8d65-4b71-be63-362acdce07b5\") " Jan 05 20:21:15 crc kubenswrapper[4754]: I0105 20:21:15.885132 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cf605837-8d65-4b71-be63-362acdce07b5-util\") pod \"cf605837-8d65-4b71-be63-362acdce07b5\" (UID: \"cf605837-8d65-4b71-be63-362acdce07b5\") " Jan 05 20:21:15 crc kubenswrapper[4754]: I0105 20:21:15.885539 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf605837-8d65-4b71-be63-362acdce07b5-bundle" (OuterVolumeSpecName: "bundle") pod "cf605837-8d65-4b71-be63-362acdce07b5" (UID: "cf605837-8d65-4b71-be63-362acdce07b5"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:21:15 crc kubenswrapper[4754]: I0105 20:21:15.896231 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf605837-8d65-4b71-be63-362acdce07b5-util" (OuterVolumeSpecName: "util") pod "cf605837-8d65-4b71-be63-362acdce07b5" (UID: "cf605837-8d65-4b71-be63-362acdce07b5"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:21:15 crc kubenswrapper[4754]: I0105 20:21:15.902120 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf605837-8d65-4b71-be63-362acdce07b5-kube-api-access-fmxxz" (OuterVolumeSpecName: "kube-api-access-fmxxz") pod "cf605837-8d65-4b71-be63-362acdce07b5" (UID: "cf605837-8d65-4b71-be63-362acdce07b5"). InnerVolumeSpecName "kube-api-access-fmxxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:21:15 crc kubenswrapper[4754]: I0105 20:21:15.987583 4754 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cf605837-8d65-4b71-be63-362acdce07b5-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:21:15 crc kubenswrapper[4754]: I0105 20:21:15.987644 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmxxz\" (UniqueName: \"kubernetes.io/projected/cf605837-8d65-4b71-be63-362acdce07b5-kube-api-access-fmxxz\") on node \"crc\" DevicePath \"\"" Jan 05 20:21:15 crc kubenswrapper[4754]: I0105 20:21:15.987666 4754 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cf605837-8d65-4b71-be63-362acdce07b5-util\") on node \"crc\" DevicePath \"\"" Jan 05 20:21:16 crc kubenswrapper[4754]: I0105 20:21:16.314500 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa84ztr5" event={"ID":"cf605837-8d65-4b71-be63-362acdce07b5","Type":"ContainerDied","Data":"4cb7fdcb80e84b488bb6d3fa42fae75e3be5f6f64e434532c3f73e6f09e609e1"} Jan 05 20:21:16 crc kubenswrapper[4754]: I0105 20:21:16.314556 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4cb7fdcb80e84b488bb6d3fa42fae75e3be5f6f64e434532c3f73e6f09e609e1" Jan 05 20:21:16 crc kubenswrapper[4754]: I0105 20:21:16.314610 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa84ztr5" Jan 05 20:21:16 crc kubenswrapper[4754]: E0105 20:21:16.857240 4754 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf605837_8d65_4b71_be63_362acdce07b5.slice/crio-37ba47171650defb0498dd76534de6c26f2e0139a6bfbefd0d3a3457bbdc941c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf605837_8d65_4b71_be63_362acdce07b5.slice/crio-conmon-37ba47171650defb0498dd76534de6c26f2e0139a6bfbefd0d3a3457bbdc941c.scope\": RecentStats: unable to find data in memory cache]" Jan 05 20:21:18 crc kubenswrapper[4754]: I0105 20:21:18.109660 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 20:21:18 crc kubenswrapper[4754]: I0105 20:21:18.109745 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 20:21:18 crc kubenswrapper[4754]: I0105 20:21:18.109800 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" Jan 05 20:21:18 crc kubenswrapper[4754]: I0105 20:21:18.110630 4754 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"d2a44ceb70c9b418a71e277628ae418adb7db249088112b846b1cbe05a8c0760"} pod="openshift-machine-config-operator/machine-config-daemon-pkzls" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 05 20:21:18 crc kubenswrapper[4754]: I0105 20:21:18.110700 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" containerID="cri-o://d2a44ceb70c9b418a71e277628ae418adb7db249088112b846b1cbe05a8c0760" gracePeriod=600 Jan 05 20:21:18 crc kubenswrapper[4754]: I0105 20:21:18.334224 4754 generic.go:334] "Generic (PLEG): container finished" podID="1ce145f2-f010-4086-963c-23e68ff9e280" containerID="d2a44ceb70c9b418a71e277628ae418adb7db249088112b846b1cbe05a8c0760" exitCode=0 Jan 05 20:21:18 crc kubenswrapper[4754]: I0105 20:21:18.334373 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" event={"ID":"1ce145f2-f010-4086-963c-23e68ff9e280","Type":"ContainerDied","Data":"d2a44ceb70c9b418a71e277628ae418adb7db249088112b846b1cbe05a8c0760"} Jan 05 20:21:18 crc kubenswrapper[4754]: I0105 20:21:18.334683 4754 scope.go:117] "RemoveContainer" containerID="1e98f521ad5f4e6f85b963d7fa920ca62316ae5085f0399160b8942987ac9d7a" Jan 05 20:21:18 crc kubenswrapper[4754]: E0105 20:21:18.629677 4754 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf605837_8d65_4b71_be63_362acdce07b5.slice/crio-37ba47171650defb0498dd76534de6c26f2e0139a6bfbefd0d3a3457bbdc941c.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf605837_8d65_4b71_be63_362acdce07b5.slice/crio-conmon-37ba47171650defb0498dd76534de6c26f2e0139a6bfbefd0d3a3457bbdc941c.scope\": RecentStats: unable to find data in memory cache]" Jan 05 20:21:19 crc kubenswrapper[4754]: I0105 20:21:19.229761 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-xp7lv"] Jan 05 20:21:19 crc kubenswrapper[4754]: E0105 20:21:19.230343 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf605837-8d65-4b71-be63-362acdce07b5" containerName="pull" Jan 05 20:21:19 crc kubenswrapper[4754]: I0105 20:21:19.230357 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf605837-8d65-4b71-be63-362acdce07b5" containerName="pull" Jan 05 20:21:19 crc kubenswrapper[4754]: E0105 20:21:19.230382 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf605837-8d65-4b71-be63-362acdce07b5" containerName="util" Jan 05 20:21:19 crc kubenswrapper[4754]: I0105 20:21:19.230389 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf605837-8d65-4b71-be63-362acdce07b5" containerName="util" Jan 05 20:21:19 crc kubenswrapper[4754]: E0105 20:21:19.230399 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf605837-8d65-4b71-be63-362acdce07b5" containerName="extract" Jan 05 20:21:19 crc kubenswrapper[4754]: I0105 20:21:19.230405 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf605837-8d65-4b71-be63-362acdce07b5" containerName="extract" Jan 05 20:21:19 crc kubenswrapper[4754]: I0105 20:21:19.230531 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf605837-8d65-4b71-be63-362acdce07b5" containerName="extract" Jan 05 20:21:19 crc kubenswrapper[4754]: I0105 20:21:19.231069 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-6769fb99d-xp7lv" Jan 05 20:21:19 crc kubenswrapper[4754]: I0105 20:21:19.233136 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 05 20:21:19 crc kubenswrapper[4754]: I0105 20:21:19.233253 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 05 20:21:19 crc kubenswrapper[4754]: I0105 20:21:19.234969 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-vjb7q" Jan 05 20:21:19 crc kubenswrapper[4754]: I0105 20:21:19.247781 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-xp7lv"] Jan 05 20:21:19 crc kubenswrapper[4754]: I0105 20:21:19.345547 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" event={"ID":"1ce145f2-f010-4086-963c-23e68ff9e280","Type":"ContainerStarted","Data":"856c52ba9bf6dc0e2c3da3888c3ef87d25ee026b1354f7636e400dbe3c2d5919"} Jan 05 20:21:19 crc kubenswrapper[4754]: I0105 20:21:19.357144 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcxjq\" (UniqueName: \"kubernetes.io/projected/656675a9-7aaf-4104-afb7-0221062cf486-kube-api-access-bcxjq\") pod \"nmstate-operator-6769fb99d-xp7lv\" (UID: \"656675a9-7aaf-4104-afb7-0221062cf486\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-xp7lv" Jan 05 20:21:19 crc kubenswrapper[4754]: I0105 20:21:19.458721 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcxjq\" (UniqueName: \"kubernetes.io/projected/656675a9-7aaf-4104-afb7-0221062cf486-kube-api-access-bcxjq\") pod \"nmstate-operator-6769fb99d-xp7lv\" (UID: \"656675a9-7aaf-4104-afb7-0221062cf486\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-xp7lv" Jan 05 
20:21:19 crc kubenswrapper[4754]: I0105 20:21:19.481233 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcxjq\" (UniqueName: \"kubernetes.io/projected/656675a9-7aaf-4104-afb7-0221062cf486-kube-api-access-bcxjq\") pod \"nmstate-operator-6769fb99d-xp7lv\" (UID: \"656675a9-7aaf-4104-afb7-0221062cf486\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-xp7lv" Jan 05 20:21:19 crc kubenswrapper[4754]: I0105 20:21:19.548371 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-6769fb99d-xp7lv" Jan 05 20:21:20 crc kubenswrapper[4754]: I0105 20:21:20.018698 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-xp7lv"] Jan 05 20:21:20 crc kubenswrapper[4754]: W0105 20:21:20.028572 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod656675a9_7aaf_4104_afb7_0221062cf486.slice/crio-a954ed40d00d65c23f9e7fe63ffe9a6704a8a2e0bfa5f5739d00aaab72eac080 WatchSource:0}: Error finding container a954ed40d00d65c23f9e7fe63ffe9a6704a8a2e0bfa5f5739d00aaab72eac080: Status 404 returned error can't find the container with id a954ed40d00d65c23f9e7fe63ffe9a6704a8a2e0bfa5f5739d00aaab72eac080 Jan 05 20:21:20 crc kubenswrapper[4754]: I0105 20:21:20.354652 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-6769fb99d-xp7lv" event={"ID":"656675a9-7aaf-4104-afb7-0221062cf486","Type":"ContainerStarted","Data":"a954ed40d00d65c23f9e7fe63ffe9a6704a8a2e0bfa5f5739d00aaab72eac080"} Jan 05 20:21:22 crc kubenswrapper[4754]: I0105 20:21:22.375282 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-6769fb99d-xp7lv" event={"ID":"656675a9-7aaf-4104-afb7-0221062cf486","Type":"ContainerStarted","Data":"5aed3d9cfd5342eeed253f2f9545294d9ec5187843ff7ab5eba3cb7350fe49db"} Jan 05 20:21:22 crc 
kubenswrapper[4754]: I0105 20:21:22.404818 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-6769fb99d-xp7lv" podStartSLOduration=1.460387345 podStartE2EDuration="3.404786956s" podCreationTimestamp="2026-01-05 20:21:19 +0000 UTC" firstStartedPulling="2026-01-05 20:21:20.030953323 +0000 UTC m=+966.740137197" lastFinishedPulling="2026-01-05 20:21:21.975352924 +0000 UTC m=+968.684536808" observedRunningTime="2026-01-05 20:21:22.395157994 +0000 UTC m=+969.104341908" watchObservedRunningTime="2026-01-05 20:21:22.404786956 +0000 UTC m=+969.113970860" Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.376659 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-hh5sg"] Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.378447 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-hh5sg" Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.385927 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-cx7fr" Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.392599 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-ht8zq"] Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.393733 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-f8fb84555-ht8zq" Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.395611 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.406704 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-qcjf8"] Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.407719 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-qcjf8" Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.427517 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-hh5sg"] Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.434369 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-ht8zq"] Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.530749 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcnkh\" (UniqueName: \"kubernetes.io/projected/eb9a96b4-392b-43f5-ad59-4e7cd4171f33-kube-api-access-mcnkh\") pod \"nmstate-webhook-f8fb84555-ht8zq\" (UID: \"eb9a96b4-392b-43f5-ad59-4e7cd4171f33\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-ht8zq" Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.530826 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k52b7\" (UniqueName: \"kubernetes.io/projected/7532804d-ccc6-4ba2-8803-ed9654864ad0-kube-api-access-k52b7\") pod \"nmstate-metrics-7f7f7578db-hh5sg\" (UID: \"7532804d-ccc6-4ba2-8803-ed9654864ad0\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-hh5sg" Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.530855 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/957087f5-55fd-4a40-a01c-f96bf31dacf8-ovs-socket\") pod \"nmstate-handler-qcjf8\" (UID: \"957087f5-55fd-4a40-a01c-f96bf31dacf8\") " pod="openshift-nmstate/nmstate-handler-qcjf8" Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.530879 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/957087f5-55fd-4a40-a01c-f96bf31dacf8-dbus-socket\") pod \"nmstate-handler-qcjf8\" (UID: 
\"957087f5-55fd-4a40-a01c-f96bf31dacf8\") " pod="openshift-nmstate/nmstate-handler-qcjf8" Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.530902 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/eb9a96b4-392b-43f5-ad59-4e7cd4171f33-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-ht8zq\" (UID: \"eb9a96b4-392b-43f5-ad59-4e7cd4171f33\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-ht8zq" Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.531171 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r56c4\" (UniqueName: \"kubernetes.io/projected/957087f5-55fd-4a40-a01c-f96bf31dacf8-kube-api-access-r56c4\") pod \"nmstate-handler-qcjf8\" (UID: \"957087f5-55fd-4a40-a01c-f96bf31dacf8\") " pod="openshift-nmstate/nmstate-handler-qcjf8" Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.531347 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/957087f5-55fd-4a40-a01c-f96bf31dacf8-nmstate-lock\") pod \"nmstate-handler-qcjf8\" (UID: \"957087f5-55fd-4a40-a01c-f96bf31dacf8\") " pod="openshift-nmstate/nmstate-handler-qcjf8" Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.554627 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-7kdcl"] Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.555616 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-7kdcl" Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.557981 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-ss4jb" Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.558246 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.560224 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.574808 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-7kdcl"] Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.632819 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h92l6\" (UniqueName: \"kubernetes.io/projected/d032a820-661f-4cd1-840e-fd0603d8e1b7-kube-api-access-h92l6\") pod \"nmstate-console-plugin-6ff7998486-7kdcl\" (UID: \"d032a820-661f-4cd1-840e-fd0603d8e1b7\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-7kdcl" Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.632888 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcnkh\" (UniqueName: \"kubernetes.io/projected/eb9a96b4-392b-43f5-ad59-4e7cd4171f33-kube-api-access-mcnkh\") pod \"nmstate-webhook-f8fb84555-ht8zq\" (UID: \"eb9a96b4-392b-43f5-ad59-4e7cd4171f33\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-ht8zq" Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.632922 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d032a820-661f-4cd1-840e-fd0603d8e1b7-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-7kdcl\" (UID: 
\"d032a820-661f-4cd1-840e-fd0603d8e1b7\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-7kdcl" Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.632950 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k52b7\" (UniqueName: \"kubernetes.io/projected/7532804d-ccc6-4ba2-8803-ed9654864ad0-kube-api-access-k52b7\") pod \"nmstate-metrics-7f7f7578db-hh5sg\" (UID: \"7532804d-ccc6-4ba2-8803-ed9654864ad0\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-hh5sg" Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.632975 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d032a820-661f-4cd1-840e-fd0603d8e1b7-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-7kdcl\" (UID: \"d032a820-661f-4cd1-840e-fd0603d8e1b7\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-7kdcl" Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.632994 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/957087f5-55fd-4a40-a01c-f96bf31dacf8-ovs-socket\") pod \"nmstate-handler-qcjf8\" (UID: \"957087f5-55fd-4a40-a01c-f96bf31dacf8\") " pod="openshift-nmstate/nmstate-handler-qcjf8" Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.633014 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/957087f5-55fd-4a40-a01c-f96bf31dacf8-dbus-socket\") pod \"nmstate-handler-qcjf8\" (UID: \"957087f5-55fd-4a40-a01c-f96bf31dacf8\") " pod="openshift-nmstate/nmstate-handler-qcjf8" Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.633032 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/eb9a96b4-392b-43f5-ad59-4e7cd4171f33-tls-key-pair\") pod 
\"nmstate-webhook-f8fb84555-ht8zq\" (UID: \"eb9a96b4-392b-43f5-ad59-4e7cd4171f33\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-ht8zq" Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.633071 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r56c4\" (UniqueName: \"kubernetes.io/projected/957087f5-55fd-4a40-a01c-f96bf31dacf8-kube-api-access-r56c4\") pod \"nmstate-handler-qcjf8\" (UID: \"957087f5-55fd-4a40-a01c-f96bf31dacf8\") " pod="openshift-nmstate/nmstate-handler-qcjf8" Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.633094 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/957087f5-55fd-4a40-a01c-f96bf31dacf8-nmstate-lock\") pod \"nmstate-handler-qcjf8\" (UID: \"957087f5-55fd-4a40-a01c-f96bf31dacf8\") " pod="openshift-nmstate/nmstate-handler-qcjf8" Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.633170 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/957087f5-55fd-4a40-a01c-f96bf31dacf8-nmstate-lock\") pod \"nmstate-handler-qcjf8\" (UID: \"957087f5-55fd-4a40-a01c-f96bf31dacf8\") " pod="openshift-nmstate/nmstate-handler-qcjf8" Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.633596 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/957087f5-55fd-4a40-a01c-f96bf31dacf8-ovs-socket\") pod \"nmstate-handler-qcjf8\" (UID: \"957087f5-55fd-4a40-a01c-f96bf31dacf8\") " pod="openshift-nmstate/nmstate-handler-qcjf8" Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.633821 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/957087f5-55fd-4a40-a01c-f96bf31dacf8-dbus-socket\") pod \"nmstate-handler-qcjf8\" (UID: \"957087f5-55fd-4a40-a01c-f96bf31dacf8\") " 
pod="openshift-nmstate/nmstate-handler-qcjf8" Jan 05 20:21:23 crc kubenswrapper[4754]: E0105 20:21:23.633875 4754 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 05 20:21:23 crc kubenswrapper[4754]: E0105 20:21:23.633916 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb9a96b4-392b-43f5-ad59-4e7cd4171f33-tls-key-pair podName:eb9a96b4-392b-43f5-ad59-4e7cd4171f33 nodeName:}" failed. No retries permitted until 2026-01-05 20:21:24.133901293 +0000 UTC m=+970.843085167 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/eb9a96b4-392b-43f5-ad59-4e7cd4171f33-tls-key-pair") pod "nmstate-webhook-f8fb84555-ht8zq" (UID: "eb9a96b4-392b-43f5-ad59-4e7cd4171f33") : secret "openshift-nmstate-webhook" not found Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.657331 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k52b7\" (UniqueName: \"kubernetes.io/projected/7532804d-ccc6-4ba2-8803-ed9654864ad0-kube-api-access-k52b7\") pod \"nmstate-metrics-7f7f7578db-hh5sg\" (UID: \"7532804d-ccc6-4ba2-8803-ed9654864ad0\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-hh5sg" Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.658085 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcnkh\" (UniqueName: \"kubernetes.io/projected/eb9a96b4-392b-43f5-ad59-4e7cd4171f33-kube-api-access-mcnkh\") pod \"nmstate-webhook-f8fb84555-ht8zq\" (UID: \"eb9a96b4-392b-43f5-ad59-4e7cd4171f33\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-ht8zq" Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.658483 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r56c4\" (UniqueName: \"kubernetes.io/projected/957087f5-55fd-4a40-a01c-f96bf31dacf8-kube-api-access-r56c4\") pod 
\"nmstate-handler-qcjf8\" (UID: \"957087f5-55fd-4a40-a01c-f96bf31dacf8\") " pod="openshift-nmstate/nmstate-handler-qcjf8" Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.696210 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-hh5sg" Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.733387 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-qcjf8" Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.735479 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h92l6\" (UniqueName: \"kubernetes.io/projected/d032a820-661f-4cd1-840e-fd0603d8e1b7-kube-api-access-h92l6\") pod \"nmstate-console-plugin-6ff7998486-7kdcl\" (UID: \"d032a820-661f-4cd1-840e-fd0603d8e1b7\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-7kdcl" Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.735584 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d032a820-661f-4cd1-840e-fd0603d8e1b7-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-7kdcl\" (UID: \"d032a820-661f-4cd1-840e-fd0603d8e1b7\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-7kdcl" Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.735639 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d032a820-661f-4cd1-840e-fd0603d8e1b7-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-7kdcl\" (UID: \"d032a820-661f-4cd1-840e-fd0603d8e1b7\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-7kdcl" Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.736741 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/d032a820-661f-4cd1-840e-fd0603d8e1b7-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-7kdcl\" (UID: \"d032a820-661f-4cd1-840e-fd0603d8e1b7\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-7kdcl" Jan 05 20:21:23 crc kubenswrapper[4754]: E0105 20:21:23.736868 4754 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 05 20:21:23 crc kubenswrapper[4754]: E0105 20:21:23.736922 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d032a820-661f-4cd1-840e-fd0603d8e1b7-plugin-serving-cert podName:d032a820-661f-4cd1-840e-fd0603d8e1b7 nodeName:}" failed. No retries permitted until 2026-01-05 20:21:24.236902709 +0000 UTC m=+970.946086583 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/d032a820-661f-4cd1-840e-fd0603d8e1b7-plugin-serving-cert") pod "nmstate-console-plugin-6ff7998486-7kdcl" (UID: "d032a820-661f-4cd1-840e-fd0603d8e1b7") : secret "plugin-serving-cert" not found Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.762381 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h92l6\" (UniqueName: \"kubernetes.io/projected/d032a820-661f-4cd1-840e-fd0603d8e1b7-kube-api-access-h92l6\") pod \"nmstate-console-plugin-6ff7998486-7kdcl\" (UID: \"d032a820-661f-4cd1-840e-fd0603d8e1b7\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-7kdcl" Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.773425 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6c886d9bd8-wmpb8"] Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.776059 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6c886d9bd8-wmpb8" Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.788752 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6c886d9bd8-wmpb8"] Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.837557 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74b103ef-1d58-4bb5-80b5-8314ec1df2bc-trusted-ca-bundle\") pod \"console-6c886d9bd8-wmpb8\" (UID: \"74b103ef-1d58-4bb5-80b5-8314ec1df2bc\") " pod="openshift-console/console-6c886d9bd8-wmpb8" Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.837722 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/74b103ef-1d58-4bb5-80b5-8314ec1df2bc-oauth-serving-cert\") pod \"console-6c886d9bd8-wmpb8\" (UID: \"74b103ef-1d58-4bb5-80b5-8314ec1df2bc\") " pod="openshift-console/console-6c886d9bd8-wmpb8" Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.837834 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/74b103ef-1d58-4bb5-80b5-8314ec1df2bc-service-ca\") pod \"console-6c886d9bd8-wmpb8\" (UID: \"74b103ef-1d58-4bb5-80b5-8314ec1df2bc\") " pod="openshift-console/console-6c886d9bd8-wmpb8" Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.837945 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/74b103ef-1d58-4bb5-80b5-8314ec1df2bc-console-oauth-config\") pod \"console-6c886d9bd8-wmpb8\" (UID: \"74b103ef-1d58-4bb5-80b5-8314ec1df2bc\") " pod="openshift-console/console-6c886d9bd8-wmpb8" Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.838048 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksq2h\" (UniqueName: \"kubernetes.io/projected/74b103ef-1d58-4bb5-80b5-8314ec1df2bc-kube-api-access-ksq2h\") pod \"console-6c886d9bd8-wmpb8\" (UID: \"74b103ef-1d58-4bb5-80b5-8314ec1df2bc\") " pod="openshift-console/console-6c886d9bd8-wmpb8" Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.838144 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/74b103ef-1d58-4bb5-80b5-8314ec1df2bc-console-config\") pod \"console-6c886d9bd8-wmpb8\" (UID: \"74b103ef-1d58-4bb5-80b5-8314ec1df2bc\") " pod="openshift-console/console-6c886d9bd8-wmpb8" Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.838258 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/74b103ef-1d58-4bb5-80b5-8314ec1df2bc-console-serving-cert\") pod \"console-6c886d9bd8-wmpb8\" (UID: \"74b103ef-1d58-4bb5-80b5-8314ec1df2bc\") " pod="openshift-console/console-6c886d9bd8-wmpb8" Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.939711 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/74b103ef-1d58-4bb5-80b5-8314ec1df2bc-console-oauth-config\") pod \"console-6c886d9bd8-wmpb8\" (UID: \"74b103ef-1d58-4bb5-80b5-8314ec1df2bc\") " pod="openshift-console/console-6c886d9bd8-wmpb8" Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.939776 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksq2h\" (UniqueName: \"kubernetes.io/projected/74b103ef-1d58-4bb5-80b5-8314ec1df2bc-kube-api-access-ksq2h\") pod \"console-6c886d9bd8-wmpb8\" (UID: \"74b103ef-1d58-4bb5-80b5-8314ec1df2bc\") " pod="openshift-console/console-6c886d9bd8-wmpb8" Jan 05 20:21:23 crc kubenswrapper[4754]: 
I0105 20:21:23.939808 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/74b103ef-1d58-4bb5-80b5-8314ec1df2bc-console-config\") pod \"console-6c886d9bd8-wmpb8\" (UID: \"74b103ef-1d58-4bb5-80b5-8314ec1df2bc\") " pod="openshift-console/console-6c886d9bd8-wmpb8" Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.939867 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/74b103ef-1d58-4bb5-80b5-8314ec1df2bc-console-serving-cert\") pod \"console-6c886d9bd8-wmpb8\" (UID: \"74b103ef-1d58-4bb5-80b5-8314ec1df2bc\") " pod="openshift-console/console-6c886d9bd8-wmpb8" Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.939908 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74b103ef-1d58-4bb5-80b5-8314ec1df2bc-trusted-ca-bundle\") pod \"console-6c886d9bd8-wmpb8\" (UID: \"74b103ef-1d58-4bb5-80b5-8314ec1df2bc\") " pod="openshift-console/console-6c886d9bd8-wmpb8" Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.939928 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/74b103ef-1d58-4bb5-80b5-8314ec1df2bc-oauth-serving-cert\") pod \"console-6c886d9bd8-wmpb8\" (UID: \"74b103ef-1d58-4bb5-80b5-8314ec1df2bc\") " pod="openshift-console/console-6c886d9bd8-wmpb8" Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.939965 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/74b103ef-1d58-4bb5-80b5-8314ec1df2bc-service-ca\") pod \"console-6c886d9bd8-wmpb8\" (UID: \"74b103ef-1d58-4bb5-80b5-8314ec1df2bc\") " pod="openshift-console/console-6c886d9bd8-wmpb8" Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.940954 4754 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/74b103ef-1d58-4bb5-80b5-8314ec1df2bc-service-ca\") pod \"console-6c886d9bd8-wmpb8\" (UID: \"74b103ef-1d58-4bb5-80b5-8314ec1df2bc\") " pod="openshift-console/console-6c886d9bd8-wmpb8" Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.959339 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74b103ef-1d58-4bb5-80b5-8314ec1df2bc-trusted-ca-bundle\") pod \"console-6c886d9bd8-wmpb8\" (UID: \"74b103ef-1d58-4bb5-80b5-8314ec1df2bc\") " pod="openshift-console/console-6c886d9bd8-wmpb8" Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.959623 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/74b103ef-1d58-4bb5-80b5-8314ec1df2bc-oauth-serving-cert\") pod \"console-6c886d9bd8-wmpb8\" (UID: \"74b103ef-1d58-4bb5-80b5-8314ec1df2bc\") " pod="openshift-console/console-6c886d9bd8-wmpb8" Jan 05 20:21:23 crc kubenswrapper[4754]: I0105 20:21:23.986063 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/74b103ef-1d58-4bb5-80b5-8314ec1df2bc-console-oauth-config\") pod \"console-6c886d9bd8-wmpb8\" (UID: \"74b103ef-1d58-4bb5-80b5-8314ec1df2bc\") " pod="openshift-console/console-6c886d9bd8-wmpb8" Jan 05 20:21:24 crc kubenswrapper[4754]: I0105 20:21:24.027081 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/74b103ef-1d58-4bb5-80b5-8314ec1df2bc-console-config\") pod \"console-6c886d9bd8-wmpb8\" (UID: \"74b103ef-1d58-4bb5-80b5-8314ec1df2bc\") " pod="openshift-console/console-6c886d9bd8-wmpb8" Jan 05 20:21:24 crc kubenswrapper[4754]: I0105 20:21:24.041648 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-ksq2h\" (UniqueName: \"kubernetes.io/projected/74b103ef-1d58-4bb5-80b5-8314ec1df2bc-kube-api-access-ksq2h\") pod \"console-6c886d9bd8-wmpb8\" (UID: \"74b103ef-1d58-4bb5-80b5-8314ec1df2bc\") " pod="openshift-console/console-6c886d9bd8-wmpb8" Jan 05 20:21:24 crc kubenswrapper[4754]: I0105 20:21:24.045843 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/74b103ef-1d58-4bb5-80b5-8314ec1df2bc-console-serving-cert\") pod \"console-6c886d9bd8-wmpb8\" (UID: \"74b103ef-1d58-4bb5-80b5-8314ec1df2bc\") " pod="openshift-console/console-6c886d9bd8-wmpb8" Jan 05 20:21:24 crc kubenswrapper[4754]: I0105 20:21:24.140655 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6c886d9bd8-wmpb8" Jan 05 20:21:24 crc kubenswrapper[4754]: I0105 20:21:24.146660 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/eb9a96b4-392b-43f5-ad59-4e7cd4171f33-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-ht8zq\" (UID: \"eb9a96b4-392b-43f5-ad59-4e7cd4171f33\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-ht8zq" Jan 05 20:21:24 crc kubenswrapper[4754]: I0105 20:21:24.149905 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/eb9a96b4-392b-43f5-ad59-4e7cd4171f33-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-ht8zq\" (UID: \"eb9a96b4-392b-43f5-ad59-4e7cd4171f33\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-ht8zq" Jan 05 20:21:24 crc kubenswrapper[4754]: I0105 20:21:24.248195 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d032a820-661f-4cd1-840e-fd0603d8e1b7-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-7kdcl\" (UID: \"d032a820-661f-4cd1-840e-fd0603d8e1b7\") " 
pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-7kdcl" Jan 05 20:21:24 crc kubenswrapper[4754]: I0105 20:21:24.252961 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d032a820-661f-4cd1-840e-fd0603d8e1b7-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-7kdcl\" (UID: \"d032a820-661f-4cd1-840e-fd0603d8e1b7\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-7kdcl" Jan 05 20:21:24 crc kubenswrapper[4754]: I0105 20:21:24.315792 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-f8fb84555-ht8zq" Jan 05 20:21:24 crc kubenswrapper[4754]: I0105 20:21:24.362360 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-hh5sg"] Jan 05 20:21:24 crc kubenswrapper[4754]: W0105 20:21:24.386261 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7532804d_ccc6_4ba2_8803_ed9654864ad0.slice/crio-5b272d087a00e4f7e39fc4b1bcc9ef459728d3b226ecd2da596524d7da6b0033 WatchSource:0}: Error finding container 5b272d087a00e4f7e39fc4b1bcc9ef459728d3b226ecd2da596524d7da6b0033: Status 404 returned error can't find the container with id 5b272d087a00e4f7e39fc4b1bcc9ef459728d3b226ecd2da596524d7da6b0033 Jan 05 20:21:24 crc kubenswrapper[4754]: I0105 20:21:24.390872 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-qcjf8" event={"ID":"957087f5-55fd-4a40-a01c-f96bf31dacf8","Type":"ContainerStarted","Data":"aacaf0c6da830d21e1081c162245168f8b7170532d0ff244058a5005fb2d3dad"} Jan 05 20:21:24 crc kubenswrapper[4754]: I0105 20:21:24.470926 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-7kdcl" Jan 05 20:21:24 crc kubenswrapper[4754]: I0105 20:21:24.555804 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6c886d9bd8-wmpb8"] Jan 05 20:21:24 crc kubenswrapper[4754]: W0105 20:21:24.572932 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74b103ef_1d58_4bb5_80b5_8314ec1df2bc.slice/crio-2911afcb39e72e918dc36cd7479170865cfc9f720f2715eb92c16e1073a42b94 WatchSource:0}: Error finding container 2911afcb39e72e918dc36cd7479170865cfc9f720f2715eb92c16e1073a42b94: Status 404 returned error can't find the container with id 2911afcb39e72e918dc36cd7479170865cfc9f720f2715eb92c16e1073a42b94 Jan 05 20:21:24 crc kubenswrapper[4754]: I0105 20:21:24.741358 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-ht8zq"] Jan 05 20:21:24 crc kubenswrapper[4754]: W0105 20:21:24.750913 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb9a96b4_392b_43f5_ad59_4e7cd4171f33.slice/crio-270eabcc75323667c6073b81c175899755b9123d87465393e6cc213482668034 WatchSource:0}: Error finding container 270eabcc75323667c6073b81c175899755b9123d87465393e6cc213482668034: Status 404 returned error can't find the container with id 270eabcc75323667c6073b81c175899755b9123d87465393e6cc213482668034 Jan 05 20:21:24 crc kubenswrapper[4754]: I0105 20:21:24.955454 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-7kdcl"] Jan 05 20:21:24 crc kubenswrapper[4754]: W0105 20:21:24.962021 4754 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd032a820_661f_4cd1_840e_fd0603d8e1b7.slice/crio-f902368dba348b1d68ea247a13b3e5a63b2acd94f0f05ee09de5c75b97951495 WatchSource:0}: Error finding container f902368dba348b1d68ea247a13b3e5a63b2acd94f0f05ee09de5c75b97951495: Status 404 returned error can't find the container with id f902368dba348b1d68ea247a13b3e5a63b2acd94f0f05ee09de5c75b97951495 Jan 05 20:21:25 crc kubenswrapper[4754]: I0105 20:21:25.401573 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-hh5sg" event={"ID":"7532804d-ccc6-4ba2-8803-ed9654864ad0","Type":"ContainerStarted","Data":"5b272d087a00e4f7e39fc4b1bcc9ef459728d3b226ecd2da596524d7da6b0033"} Jan 05 20:21:25 crc kubenswrapper[4754]: I0105 20:21:25.403608 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6c886d9bd8-wmpb8" event={"ID":"74b103ef-1d58-4bb5-80b5-8314ec1df2bc","Type":"ContainerStarted","Data":"e7853d01eee293844b9d35c90c8865217e74da99fb8eb6c18391d080dd63af93"} Jan 05 20:21:25 crc kubenswrapper[4754]: I0105 20:21:25.403647 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6c886d9bd8-wmpb8" event={"ID":"74b103ef-1d58-4bb5-80b5-8314ec1df2bc","Type":"ContainerStarted","Data":"2911afcb39e72e918dc36cd7479170865cfc9f720f2715eb92c16e1073a42b94"} Jan 05 20:21:25 crc kubenswrapper[4754]: I0105 20:21:25.406734 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-f8fb84555-ht8zq" event={"ID":"eb9a96b4-392b-43f5-ad59-4e7cd4171f33","Type":"ContainerStarted","Data":"270eabcc75323667c6073b81c175899755b9123d87465393e6cc213482668034"} Jan 05 20:21:25 crc kubenswrapper[4754]: I0105 20:21:25.407803 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-7kdcl" 
event={"ID":"d032a820-661f-4cd1-840e-fd0603d8e1b7","Type":"ContainerStarted","Data":"f902368dba348b1d68ea247a13b3e5a63b2acd94f0f05ee09de5c75b97951495"} Jan 05 20:21:25 crc kubenswrapper[4754]: I0105 20:21:25.429419 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6c886d9bd8-wmpb8" podStartSLOduration=2.429399135 podStartE2EDuration="2.429399135s" podCreationTimestamp="2026-01-05 20:21:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:21:25.422838963 +0000 UTC m=+972.132022837" watchObservedRunningTime="2026-01-05 20:21:25.429399135 +0000 UTC m=+972.138583019" Jan 05 20:21:27 crc kubenswrapper[4754]: E0105 20:21:27.175886 4754 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf605837_8d65_4b71_be63_362acdce07b5.slice/crio-37ba47171650defb0498dd76534de6c26f2e0139a6bfbefd0d3a3457bbdc941c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf605837_8d65_4b71_be63_362acdce07b5.slice/crio-conmon-37ba47171650defb0498dd76534de6c26f2e0139a6bfbefd0d3a3457bbdc941c.scope\": RecentStats: unable to find data in memory cache]" Jan 05 20:21:27 crc kubenswrapper[4754]: I0105 20:21:27.425664 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-qcjf8" event={"ID":"957087f5-55fd-4a40-a01c-f96bf31dacf8","Type":"ContainerStarted","Data":"7f0110fbfe477c87819a1a35afda00cb7e96292d3977aa5c60031ffd6ae03486"} Jan 05 20:21:27 crc kubenswrapper[4754]: I0105 20:21:27.426242 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-qcjf8" Jan 05 20:21:27 crc kubenswrapper[4754]: I0105 20:21:27.428210 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-webhook-f8fb84555-ht8zq" event={"ID":"eb9a96b4-392b-43f5-ad59-4e7cd4171f33","Type":"ContainerStarted","Data":"da12b590b3927435cf0936908a1097dadfb66b0cb7d80ad03d76e8db6bb51dc3"} Jan 05 20:21:27 crc kubenswrapper[4754]: I0105 20:21:27.428351 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-f8fb84555-ht8zq" Jan 05 20:21:27 crc kubenswrapper[4754]: I0105 20:21:27.431611 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-hh5sg" event={"ID":"7532804d-ccc6-4ba2-8803-ed9654864ad0","Type":"ContainerStarted","Data":"f23e097879e1fa99d3f5fddf9bb58431014668af2a84ac932a8b37e81c1ba779"} Jan 05 20:21:27 crc kubenswrapper[4754]: I0105 20:21:27.449489 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-qcjf8" podStartSLOduration=1.250986236 podStartE2EDuration="4.449463827s" podCreationTimestamp="2026-01-05 20:21:23 +0000 UTC" firstStartedPulling="2026-01-05 20:21:23.825733774 +0000 UTC m=+970.534917648" lastFinishedPulling="2026-01-05 20:21:27.024211365 +0000 UTC m=+973.733395239" observedRunningTime="2026-01-05 20:21:27.445871573 +0000 UTC m=+974.155055457" watchObservedRunningTime="2026-01-05 20:21:27.449463827 +0000 UTC m=+974.158647701" Jan 05 20:21:27 crc kubenswrapper[4754]: I0105 20:21:27.472914 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-f8fb84555-ht8zq" podStartSLOduration=2.20726113 podStartE2EDuration="4.472893431s" podCreationTimestamp="2026-01-05 20:21:23 +0000 UTC" firstStartedPulling="2026-01-05 20:21:24.757178397 +0000 UTC m=+971.466362271" lastFinishedPulling="2026-01-05 20:21:27.022810698 +0000 UTC m=+973.731994572" observedRunningTime="2026-01-05 20:21:27.470487558 +0000 UTC m=+974.179671432" watchObservedRunningTime="2026-01-05 20:21:27.472893431 +0000 UTC m=+974.182077305" Jan 05 20:21:29 crc 
kubenswrapper[4754]: I0105 20:21:29.456436 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-7kdcl" event={"ID":"d032a820-661f-4cd1-840e-fd0603d8e1b7","Type":"ContainerStarted","Data":"54d916876f0ce2be24619b32eadff419e308e4cc88671227f2ed2d6163e379cd"} Jan 05 20:21:29 crc kubenswrapper[4754]: I0105 20:21:29.478957 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-7kdcl" podStartSLOduration=2.776463701 podStartE2EDuration="6.478929795s" podCreationTimestamp="2026-01-05 20:21:23 +0000 UTC" firstStartedPulling="2026-01-05 20:21:24.966902958 +0000 UTC m=+971.676086842" lastFinishedPulling="2026-01-05 20:21:28.669369072 +0000 UTC m=+975.378552936" observedRunningTime="2026-01-05 20:21:29.477420305 +0000 UTC m=+976.186604229" watchObservedRunningTime="2026-01-05 20:21:29.478929795 +0000 UTC m=+976.188113679" Jan 05 20:21:30 crc kubenswrapper[4754]: I0105 20:21:30.470559 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-hh5sg" event={"ID":"7532804d-ccc6-4ba2-8803-ed9654864ad0","Type":"ContainerStarted","Data":"d1c29329472be10d03a4991dbf8730bbfcf4846875ab2b256dbd0258fe4c8574"} Jan 05 20:21:30 crc kubenswrapper[4754]: I0105 20:21:30.504365 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-hh5sg" podStartSLOduration=1.887066519 podStartE2EDuration="7.504334369s" podCreationTimestamp="2026-01-05 20:21:23 +0000 UTC" firstStartedPulling="2026-01-05 20:21:24.390315864 +0000 UTC m=+971.099499738" lastFinishedPulling="2026-01-05 20:21:30.007583714 +0000 UTC m=+976.716767588" observedRunningTime="2026-01-05 20:21:30.493127675 +0000 UTC m=+977.202311589" watchObservedRunningTime="2026-01-05 20:21:30.504334369 +0000 UTC m=+977.213518283" Jan 05 20:21:33 crc kubenswrapper[4754]: I0105 20:21:33.777714 4754 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-qcjf8" Jan 05 20:21:33 crc kubenswrapper[4754]: E0105 20:21:33.875573 4754 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf605837_8d65_4b71_be63_362acdce07b5.slice/crio-conmon-37ba47171650defb0498dd76534de6c26f2e0139a6bfbefd0d3a3457bbdc941c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf605837_8d65_4b71_be63_362acdce07b5.slice/crio-37ba47171650defb0498dd76534de6c26f2e0139a6bfbefd0d3a3457bbdc941c.scope\": RecentStats: unable to find data in memory cache]" Jan 05 20:21:34 crc kubenswrapper[4754]: I0105 20:21:34.140938 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6c886d9bd8-wmpb8" Jan 05 20:21:34 crc kubenswrapper[4754]: I0105 20:21:34.141001 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6c886d9bd8-wmpb8" Jan 05 20:21:34 crc kubenswrapper[4754]: I0105 20:21:34.151349 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6c886d9bd8-wmpb8" Jan 05 20:21:34 crc kubenswrapper[4754]: I0105 20:21:34.517318 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6c886d9bd8-wmpb8" Jan 05 20:21:34 crc kubenswrapper[4754]: I0105 20:21:34.607429 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7c45669d6f-99bq7"] Jan 05 20:21:37 crc kubenswrapper[4754]: E0105 20:21:37.267046 4754 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf605837_8d65_4b71_be63_362acdce07b5.slice/crio-conmon-37ba47171650defb0498dd76534de6c26f2e0139a6bfbefd0d3a3457bbdc941c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf605837_8d65_4b71_be63_362acdce07b5.slice/crio-37ba47171650defb0498dd76534de6c26f2e0139a6bfbefd0d3a3457bbdc941c.scope\": RecentStats: unable to find data in memory cache]" Jan 05 20:21:44 crc kubenswrapper[4754]: I0105 20:21:44.325748 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-f8fb84555-ht8zq" Jan 05 20:21:47 crc kubenswrapper[4754]: E0105 20:21:47.483321 4754 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf605837_8d65_4b71_be63_362acdce07b5.slice/crio-conmon-37ba47171650defb0498dd76534de6c26f2e0139a6bfbefd0d3a3457bbdc941c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf605837_8d65_4b71_be63_362acdce07b5.slice/crio-37ba47171650defb0498dd76534de6c26f2e0139a6bfbefd0d3a3457bbdc941c.scope\": RecentStats: unable to find data in memory cache]" Jan 05 20:21:48 crc kubenswrapper[4754]: E0105 20:21:48.105911 4754 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf605837_8d65_4b71_be63_362acdce07b5.slice/crio-conmon-37ba47171650defb0498dd76534de6c26f2e0139a6bfbefd0d3a3457bbdc941c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf605837_8d65_4b71_be63_362acdce07b5.slice/crio-37ba47171650defb0498dd76534de6c26f2e0139a6bfbefd0d3a3457bbdc941c.scope\": RecentStats: unable to find data in memory cache]" Jan 
05 20:21:48 crc kubenswrapper[4754]: E0105 20:21:48.107945 4754 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf605837_8d65_4b71_be63_362acdce07b5.slice/crio-37ba47171650defb0498dd76534de6c26f2e0139a6bfbefd0d3a3457bbdc941c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf605837_8d65_4b71_be63_362acdce07b5.slice/crio-conmon-37ba47171650defb0498dd76534de6c26f2e0139a6bfbefd0d3a3457bbdc941c.scope\": RecentStats: unable to find data in memory cache]" Jan 05 20:21:48 crc kubenswrapper[4754]: E0105 20:21:48.625095 4754 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf605837_8d65_4b71_be63_362acdce07b5.slice/crio-conmon-37ba47171650defb0498dd76534de6c26f2e0139a6bfbefd0d3a3457bbdc941c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf605837_8d65_4b71_be63_362acdce07b5.slice/crio-37ba47171650defb0498dd76534de6c26f2e0139a6bfbefd0d3a3457bbdc941c.scope\": RecentStats: unable to find data in memory cache]" Jan 05 20:21:57 crc kubenswrapper[4754]: E0105 20:21:57.703348 4754 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf605837_8d65_4b71_be63_362acdce07b5.slice/crio-37ba47171650defb0498dd76534de6c26f2e0139a6bfbefd0d3a3457bbdc941c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf605837_8d65_4b71_be63_362acdce07b5.slice/crio-conmon-37ba47171650defb0498dd76534de6c26f2e0139a6bfbefd0d3a3457bbdc941c.scope\": RecentStats: unable to find data in memory cache]" Jan 05 20:21:59 crc 
kubenswrapper[4754]: I0105 20:21:59.647770 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-7c45669d6f-99bq7" podUID="f7928ad3-b615-4da0-a301-7d74c2802904" containerName="console" containerID="cri-o://a07051b386d5b69a0ceb3f17576b8dece1fd256c9adf95b7d00e200c49a3bfd0" gracePeriod=15 Jan 05 20:22:00 crc kubenswrapper[4754]: I0105 20:22:00.142233 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7c45669d6f-99bq7_f7928ad3-b615-4da0-a301-7d74c2802904/console/0.log" Jan 05 20:22:00 crc kubenswrapper[4754]: I0105 20:22:00.142737 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7c45669d6f-99bq7" Jan 05 20:22:00 crc kubenswrapper[4754]: I0105 20:22:00.290614 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f7928ad3-b615-4da0-a301-7d74c2802904-console-config\") pod \"f7928ad3-b615-4da0-a301-7d74c2802904\" (UID: \"f7928ad3-b615-4da0-a301-7d74c2802904\") " Jan 05 20:22:00 crc kubenswrapper[4754]: I0105 20:22:00.290705 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5r52h\" (UniqueName: \"kubernetes.io/projected/f7928ad3-b615-4da0-a301-7d74c2802904-kube-api-access-5r52h\") pod \"f7928ad3-b615-4da0-a301-7d74c2802904\" (UID: \"f7928ad3-b615-4da0-a301-7d74c2802904\") " Jan 05 20:22:00 crc kubenswrapper[4754]: I0105 20:22:00.290738 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f7928ad3-b615-4da0-a301-7d74c2802904-console-oauth-config\") pod \"f7928ad3-b615-4da0-a301-7d74c2802904\" (UID: \"f7928ad3-b615-4da0-a301-7d74c2802904\") " Jan 05 20:22:00 crc kubenswrapper[4754]: I0105 20:22:00.290775 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7928ad3-b615-4da0-a301-7d74c2802904-trusted-ca-bundle\") pod \"f7928ad3-b615-4da0-a301-7d74c2802904\" (UID: \"f7928ad3-b615-4da0-a301-7d74c2802904\") " Jan 05 20:22:00 crc kubenswrapper[4754]: I0105 20:22:00.290903 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f7928ad3-b615-4da0-a301-7d74c2802904-oauth-serving-cert\") pod \"f7928ad3-b615-4da0-a301-7d74c2802904\" (UID: \"f7928ad3-b615-4da0-a301-7d74c2802904\") " Jan 05 20:22:00 crc kubenswrapper[4754]: I0105 20:22:00.290926 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f7928ad3-b615-4da0-a301-7d74c2802904-service-ca\") pod \"f7928ad3-b615-4da0-a301-7d74c2802904\" (UID: \"f7928ad3-b615-4da0-a301-7d74c2802904\") " Jan 05 20:22:00 crc kubenswrapper[4754]: I0105 20:22:00.290958 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f7928ad3-b615-4da0-a301-7d74c2802904-console-serving-cert\") pod \"f7928ad3-b615-4da0-a301-7d74c2802904\" (UID: \"f7928ad3-b615-4da0-a301-7d74c2802904\") " Jan 05 20:22:00 crc kubenswrapper[4754]: I0105 20:22:00.292606 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7928ad3-b615-4da0-a301-7d74c2802904-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "f7928ad3-b615-4da0-a301-7d74c2802904" (UID: "f7928ad3-b615-4da0-a301-7d74c2802904"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:22:00 crc kubenswrapper[4754]: I0105 20:22:00.292642 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7928ad3-b615-4da0-a301-7d74c2802904-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f7928ad3-b615-4da0-a301-7d74c2802904" (UID: "f7928ad3-b615-4da0-a301-7d74c2802904"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:22:00 crc kubenswrapper[4754]: I0105 20:22:00.292671 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7928ad3-b615-4da0-a301-7d74c2802904-service-ca" (OuterVolumeSpecName: "service-ca") pod "f7928ad3-b615-4da0-a301-7d74c2802904" (UID: "f7928ad3-b615-4da0-a301-7d74c2802904"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:22:00 crc kubenswrapper[4754]: I0105 20:22:00.292694 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7928ad3-b615-4da0-a301-7d74c2802904-console-config" (OuterVolumeSpecName: "console-config") pod "f7928ad3-b615-4da0-a301-7d74c2802904" (UID: "f7928ad3-b615-4da0-a301-7d74c2802904"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:22:00 crc kubenswrapper[4754]: I0105 20:22:00.298378 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7928ad3-b615-4da0-a301-7d74c2802904-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "f7928ad3-b615-4da0-a301-7d74c2802904" (UID: "f7928ad3-b615-4da0-a301-7d74c2802904"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:22:00 crc kubenswrapper[4754]: I0105 20:22:00.299967 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7928ad3-b615-4da0-a301-7d74c2802904-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "f7928ad3-b615-4da0-a301-7d74c2802904" (UID: "f7928ad3-b615-4da0-a301-7d74c2802904"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:22:00 crc kubenswrapper[4754]: I0105 20:22:00.299985 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7928ad3-b615-4da0-a301-7d74c2802904-kube-api-access-5r52h" (OuterVolumeSpecName: "kube-api-access-5r52h") pod "f7928ad3-b615-4da0-a301-7d74c2802904" (UID: "f7928ad3-b615-4da0-a301-7d74c2802904"). InnerVolumeSpecName "kube-api-access-5r52h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:22:00 crc kubenswrapper[4754]: I0105 20:22:00.393384 4754 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7928ad3-b615-4da0-a301-7d74c2802904-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:22:00 crc kubenswrapper[4754]: I0105 20:22:00.393435 4754 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f7928ad3-b615-4da0-a301-7d74c2802904-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 20:22:00 crc kubenswrapper[4754]: I0105 20:22:00.393452 4754 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f7928ad3-b615-4da0-a301-7d74c2802904-service-ca\") on node \"crc\" DevicePath \"\"" Jan 05 20:22:00 crc kubenswrapper[4754]: I0105 20:22:00.393464 4754 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f7928ad3-b615-4da0-a301-7d74c2802904-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 20:22:00 crc kubenswrapper[4754]: I0105 20:22:00.393477 4754 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f7928ad3-b615-4da0-a301-7d74c2802904-console-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:22:00 crc kubenswrapper[4754]: I0105 20:22:00.393488 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5r52h\" (UniqueName: \"kubernetes.io/projected/f7928ad3-b615-4da0-a301-7d74c2802904-kube-api-access-5r52h\") on node \"crc\" DevicePath \"\"" Jan 05 20:22:00 crc kubenswrapper[4754]: I0105 20:22:00.393501 4754 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f7928ad3-b615-4da0-a301-7d74c2802904-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:22:00 crc kubenswrapper[4754]: I0105 20:22:00.764660 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7c45669d6f-99bq7_f7928ad3-b615-4da0-a301-7d74c2802904/console/0.log" Jan 05 20:22:00 crc kubenswrapper[4754]: I0105 20:22:00.765234 4754 generic.go:334] "Generic (PLEG): container finished" podID="f7928ad3-b615-4da0-a301-7d74c2802904" containerID="a07051b386d5b69a0ceb3f17576b8dece1fd256c9adf95b7d00e200c49a3bfd0" exitCode=2 Jan 05 20:22:00 crc kubenswrapper[4754]: I0105 20:22:00.765368 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7c45669d6f-99bq7" event={"ID":"f7928ad3-b615-4da0-a301-7d74c2802904","Type":"ContainerDied","Data":"a07051b386d5b69a0ceb3f17576b8dece1fd256c9adf95b7d00e200c49a3bfd0"} Jan 05 20:22:00 crc kubenswrapper[4754]: I0105 20:22:00.765393 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7c45669d6f-99bq7" Jan 05 20:22:00 crc kubenswrapper[4754]: I0105 20:22:00.765435 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7c45669d6f-99bq7" event={"ID":"f7928ad3-b615-4da0-a301-7d74c2802904","Type":"ContainerDied","Data":"506bca2cc45e23fe62d22ce7d30a50d5c9e071ec9009b0b6fe66d613dceabf3e"} Jan 05 20:22:00 crc kubenswrapper[4754]: I0105 20:22:00.765471 4754 scope.go:117] "RemoveContainer" containerID="a07051b386d5b69a0ceb3f17576b8dece1fd256c9adf95b7d00e200c49a3bfd0" Jan 05 20:22:00 crc kubenswrapper[4754]: I0105 20:22:00.806723 4754 scope.go:117] "RemoveContainer" containerID="a07051b386d5b69a0ceb3f17576b8dece1fd256c9adf95b7d00e200c49a3bfd0" Jan 05 20:22:00 crc kubenswrapper[4754]: E0105 20:22:00.809999 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a07051b386d5b69a0ceb3f17576b8dece1fd256c9adf95b7d00e200c49a3bfd0\": container with ID starting with a07051b386d5b69a0ceb3f17576b8dece1fd256c9adf95b7d00e200c49a3bfd0 not found: ID does not exist" containerID="a07051b386d5b69a0ceb3f17576b8dece1fd256c9adf95b7d00e200c49a3bfd0" Jan 05 20:22:00 crc kubenswrapper[4754]: I0105 20:22:00.810067 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a07051b386d5b69a0ceb3f17576b8dece1fd256c9adf95b7d00e200c49a3bfd0"} err="failed to get container status \"a07051b386d5b69a0ceb3f17576b8dece1fd256c9adf95b7d00e200c49a3bfd0\": rpc error: code = NotFound desc = could not find container \"a07051b386d5b69a0ceb3f17576b8dece1fd256c9adf95b7d00e200c49a3bfd0\": container with ID starting with a07051b386d5b69a0ceb3f17576b8dece1fd256c9adf95b7d00e200c49a3bfd0 not found: ID does not exist" Jan 05 20:22:00 crc kubenswrapper[4754]: I0105 20:22:00.816395 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7c45669d6f-99bq7"] Jan 05 20:22:00 crc 
kubenswrapper[4754]: I0105 20:22:00.826818 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-7c45669d6f-99bq7"] Jan 05 20:22:01 crc kubenswrapper[4754]: I0105 20:22:01.602094 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7928ad3-b615-4da0-a301-7d74c2802904" path="/var/lib/kubelet/pods/f7928ad3-b615-4da0-a301-7d74c2802904/volumes" Jan 05 20:22:03 crc kubenswrapper[4754]: E0105 20:22:03.625063 4754 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf605837_8d65_4b71_be63_362acdce07b5.slice/crio-37ba47171650defb0498dd76534de6c26f2e0139a6bfbefd0d3a3457bbdc941c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf605837_8d65_4b71_be63_362acdce07b5.slice/crio-conmon-37ba47171650defb0498dd76534de6c26f2e0139a6bfbefd0d3a3457bbdc941c.scope\": RecentStats: unable to find data in memory cache]" Jan 05 20:22:07 crc kubenswrapper[4754]: E0105 20:22:07.880436 4754 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf605837_8d65_4b71_be63_362acdce07b5.slice/crio-conmon-37ba47171650defb0498dd76534de6c26f2e0139a6bfbefd0d3a3457bbdc941c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf605837_8d65_4b71_be63_362acdce07b5.slice/crio-37ba47171650defb0498dd76534de6c26f2e0139a6bfbefd0d3a3457bbdc941c.scope\": RecentStats: unable to find data in memory cache]" Jan 05 20:22:08 crc kubenswrapper[4754]: I0105 20:22:08.324239 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4shxt7"] Jan 05 20:22:08 crc kubenswrapper[4754]: E0105 20:22:08.325084 4754 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7928ad3-b615-4da0-a301-7d74c2802904" containerName="console" Jan 05 20:22:08 crc kubenswrapper[4754]: I0105 20:22:08.325106 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7928ad3-b615-4da0-a301-7d74c2802904" containerName="console" Jan 05 20:22:08 crc kubenswrapper[4754]: I0105 20:22:08.325322 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7928ad3-b615-4da0-a301-7d74c2802904" containerName="console" Jan 05 20:22:08 crc kubenswrapper[4754]: I0105 20:22:08.327009 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4shxt7" Jan 05 20:22:08 crc kubenswrapper[4754]: I0105 20:22:08.332145 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 05 20:22:08 crc kubenswrapper[4754]: I0105 20:22:08.368481 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4shxt7"] Jan 05 20:22:08 crc kubenswrapper[4754]: I0105 20:22:08.398880 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d37d6c73-795d-4807-bbe3-ac09382c3f1c-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4shxt7\" (UID: \"d37d6c73-795d-4807-bbe3-ac09382c3f1c\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4shxt7" Jan 05 20:22:08 crc kubenswrapper[4754]: I0105 20:22:08.398958 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d37d6c73-795d-4807-bbe3-ac09382c3f1c-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4shxt7\" (UID: \"d37d6c73-795d-4807-bbe3-ac09382c3f1c\") " 
pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4shxt7" Jan 05 20:22:08 crc kubenswrapper[4754]: I0105 20:22:08.399020 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kx66\" (UniqueName: \"kubernetes.io/projected/d37d6c73-795d-4807-bbe3-ac09382c3f1c-kube-api-access-5kx66\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4shxt7\" (UID: \"d37d6c73-795d-4807-bbe3-ac09382c3f1c\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4shxt7" Jan 05 20:22:08 crc kubenswrapper[4754]: I0105 20:22:08.500875 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d37d6c73-795d-4807-bbe3-ac09382c3f1c-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4shxt7\" (UID: \"d37d6c73-795d-4807-bbe3-ac09382c3f1c\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4shxt7" Jan 05 20:22:08 crc kubenswrapper[4754]: I0105 20:22:08.500952 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d37d6c73-795d-4807-bbe3-ac09382c3f1c-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4shxt7\" (UID: \"d37d6c73-795d-4807-bbe3-ac09382c3f1c\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4shxt7" Jan 05 20:22:08 crc kubenswrapper[4754]: I0105 20:22:08.500995 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kx66\" (UniqueName: \"kubernetes.io/projected/d37d6c73-795d-4807-bbe3-ac09382c3f1c-kube-api-access-5kx66\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4shxt7\" (UID: \"d37d6c73-795d-4807-bbe3-ac09382c3f1c\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4shxt7" Jan 05 
20:22:08 crc kubenswrapper[4754]: I0105 20:22:08.501431 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d37d6c73-795d-4807-bbe3-ac09382c3f1c-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4shxt7\" (UID: \"d37d6c73-795d-4807-bbe3-ac09382c3f1c\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4shxt7" Jan 05 20:22:08 crc kubenswrapper[4754]: I0105 20:22:08.501480 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d37d6c73-795d-4807-bbe3-ac09382c3f1c-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4shxt7\" (UID: \"d37d6c73-795d-4807-bbe3-ac09382c3f1c\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4shxt7" Jan 05 20:22:08 crc kubenswrapper[4754]: I0105 20:22:08.546887 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kx66\" (UniqueName: \"kubernetes.io/projected/d37d6c73-795d-4807-bbe3-ac09382c3f1c-kube-api-access-5kx66\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4shxt7\" (UID: \"d37d6c73-795d-4807-bbe3-ac09382c3f1c\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4shxt7" Jan 05 20:22:08 crc kubenswrapper[4754]: I0105 20:22:08.659005 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4shxt7" Jan 05 20:22:09 crc kubenswrapper[4754]: I0105 20:22:09.133940 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4shxt7"] Jan 05 20:22:09 crc kubenswrapper[4754]: I0105 20:22:09.852042 4754 generic.go:334] "Generic (PLEG): container finished" podID="d37d6c73-795d-4807-bbe3-ac09382c3f1c" containerID="cd7d11059e5a11c5900457628104e2c5b6768c4ff922d4f88dd827a19a8d07be" exitCode=0 Jan 05 20:22:09 crc kubenswrapper[4754]: I0105 20:22:09.852144 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4shxt7" event={"ID":"d37d6c73-795d-4807-bbe3-ac09382c3f1c","Type":"ContainerDied","Data":"cd7d11059e5a11c5900457628104e2c5b6768c4ff922d4f88dd827a19a8d07be"} Jan 05 20:22:09 crc kubenswrapper[4754]: I0105 20:22:09.852637 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4shxt7" event={"ID":"d37d6c73-795d-4807-bbe3-ac09382c3f1c","Type":"ContainerStarted","Data":"01c44e591ceadc356121ec9c643c7a0e4ba79e667ab85713f53462c1809356fe"} Jan 05 20:22:09 crc kubenswrapper[4754]: I0105 20:22:09.854600 4754 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 05 20:22:11 crc kubenswrapper[4754]: I0105 20:22:11.874232 4754 generic.go:334] "Generic (PLEG): container finished" podID="d37d6c73-795d-4807-bbe3-ac09382c3f1c" containerID="e1011d36075a5fb8eae22820c9a8d5fa9c5ff1583c4c9eac9861c75176c9fca7" exitCode=0 Jan 05 20:22:11 crc kubenswrapper[4754]: I0105 20:22:11.874308 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4shxt7" 
event={"ID":"d37d6c73-795d-4807-bbe3-ac09382c3f1c","Type":"ContainerDied","Data":"e1011d36075a5fb8eae22820c9a8d5fa9c5ff1583c4c9eac9861c75176c9fca7"} Jan 05 20:22:12 crc kubenswrapper[4754]: I0105 20:22:12.886309 4754 generic.go:334] "Generic (PLEG): container finished" podID="d37d6c73-795d-4807-bbe3-ac09382c3f1c" containerID="7c594a912d4790109da10058ec9ccda9017367af2f41fa48689b8151e2625733" exitCode=0 Jan 05 20:22:12 crc kubenswrapper[4754]: I0105 20:22:12.886405 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4shxt7" event={"ID":"d37d6c73-795d-4807-bbe3-ac09382c3f1c","Type":"ContainerDied","Data":"7c594a912d4790109da10058ec9ccda9017367af2f41fa48689b8151e2625733"} Jan 05 20:22:14 crc kubenswrapper[4754]: I0105 20:22:14.301546 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4shxt7" Jan 05 20:22:14 crc kubenswrapper[4754]: I0105 20:22:14.313116 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d37d6c73-795d-4807-bbe3-ac09382c3f1c-bundle\") pod \"d37d6c73-795d-4807-bbe3-ac09382c3f1c\" (UID: \"d37d6c73-795d-4807-bbe3-ac09382c3f1c\") " Jan 05 20:22:14 crc kubenswrapper[4754]: I0105 20:22:14.314226 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d37d6c73-795d-4807-bbe3-ac09382c3f1c-bundle" (OuterVolumeSpecName: "bundle") pod "d37d6c73-795d-4807-bbe3-ac09382c3f1c" (UID: "d37d6c73-795d-4807-bbe3-ac09382c3f1c"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:22:14 crc kubenswrapper[4754]: I0105 20:22:14.314272 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d37d6c73-795d-4807-bbe3-ac09382c3f1c-util\") pod \"d37d6c73-795d-4807-bbe3-ac09382c3f1c\" (UID: \"d37d6c73-795d-4807-bbe3-ac09382c3f1c\") " Jan 05 20:22:14 crc kubenswrapper[4754]: I0105 20:22:14.314318 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kx66\" (UniqueName: \"kubernetes.io/projected/d37d6c73-795d-4807-bbe3-ac09382c3f1c-kube-api-access-5kx66\") pod \"d37d6c73-795d-4807-bbe3-ac09382c3f1c\" (UID: \"d37d6c73-795d-4807-bbe3-ac09382c3f1c\") " Jan 05 20:22:14 crc kubenswrapper[4754]: I0105 20:22:14.315017 4754 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d37d6c73-795d-4807-bbe3-ac09382c3f1c-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:22:14 crc kubenswrapper[4754]: I0105 20:22:14.322009 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d37d6c73-795d-4807-bbe3-ac09382c3f1c-kube-api-access-5kx66" (OuterVolumeSpecName: "kube-api-access-5kx66") pod "d37d6c73-795d-4807-bbe3-ac09382c3f1c" (UID: "d37d6c73-795d-4807-bbe3-ac09382c3f1c"). InnerVolumeSpecName "kube-api-access-5kx66". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:22:14 crc kubenswrapper[4754]: I0105 20:22:14.340383 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d37d6c73-795d-4807-bbe3-ac09382c3f1c-util" (OuterVolumeSpecName: "util") pod "d37d6c73-795d-4807-bbe3-ac09382c3f1c" (UID: "d37d6c73-795d-4807-bbe3-ac09382c3f1c"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:22:14 crc kubenswrapper[4754]: I0105 20:22:14.417333 4754 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d37d6c73-795d-4807-bbe3-ac09382c3f1c-util\") on node \"crc\" DevicePath \"\"" Jan 05 20:22:14 crc kubenswrapper[4754]: I0105 20:22:14.417416 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kx66\" (UniqueName: \"kubernetes.io/projected/d37d6c73-795d-4807-bbe3-ac09382c3f1c-kube-api-access-5kx66\") on node \"crc\" DevicePath \"\"" Jan 05 20:22:14 crc kubenswrapper[4754]: I0105 20:22:14.912409 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4shxt7" event={"ID":"d37d6c73-795d-4807-bbe3-ac09382c3f1c","Type":"ContainerDied","Data":"01c44e591ceadc356121ec9c643c7a0e4ba79e667ab85713f53462c1809356fe"} Jan 05 20:22:14 crc kubenswrapper[4754]: I0105 20:22:14.912478 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01c44e591ceadc356121ec9c643c7a0e4ba79e667ab85713f53462c1809356fe" Jan 05 20:22:14 crc kubenswrapper[4754]: I0105 20:22:14.912518 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4shxt7" Jan 05 20:22:23 crc kubenswrapper[4754]: I0105 20:22:23.282499 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-79f77fb8f4-2q9lk"] Jan 05 20:22:23 crc kubenswrapper[4754]: E0105 20:22:23.283786 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d37d6c73-795d-4807-bbe3-ac09382c3f1c" containerName="extract" Jan 05 20:22:23 crc kubenswrapper[4754]: I0105 20:22:23.283806 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="d37d6c73-795d-4807-bbe3-ac09382c3f1c" containerName="extract" Jan 05 20:22:23 crc kubenswrapper[4754]: E0105 20:22:23.283826 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d37d6c73-795d-4807-bbe3-ac09382c3f1c" containerName="pull" Jan 05 20:22:23 crc kubenswrapper[4754]: I0105 20:22:23.283833 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="d37d6c73-795d-4807-bbe3-ac09382c3f1c" containerName="pull" Jan 05 20:22:23 crc kubenswrapper[4754]: E0105 20:22:23.283842 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d37d6c73-795d-4807-bbe3-ac09382c3f1c" containerName="util" Jan 05 20:22:23 crc kubenswrapper[4754]: I0105 20:22:23.283849 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="d37d6c73-795d-4807-bbe3-ac09382c3f1c" containerName="util" Jan 05 20:22:23 crc kubenswrapper[4754]: I0105 20:22:23.284020 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="d37d6c73-795d-4807-bbe3-ac09382c3f1c" containerName="extract" Jan 05 20:22:23 crc kubenswrapper[4754]: I0105 20:22:23.284782 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-79f77fb8f4-2q9lk" Jan 05 20:22:23 crc kubenswrapper[4754]: I0105 20:22:23.287975 4754 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 05 20:22:23 crc kubenswrapper[4754]: I0105 20:22:23.288528 4754 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 05 20:22:23 crc kubenswrapper[4754]: I0105 20:22:23.288632 4754 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-gg7mr" Jan 05 20:22:23 crc kubenswrapper[4754]: I0105 20:22:23.293915 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 05 20:22:23 crc kubenswrapper[4754]: I0105 20:22:23.295647 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 05 20:22:23 crc kubenswrapper[4754]: I0105 20:22:23.309784 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn7kj\" (UniqueName: \"kubernetes.io/projected/823e1e7d-9555-4324-a7aa-6add85d4d9f3-kube-api-access-sn7kj\") pod \"metallb-operator-controller-manager-79f77fb8f4-2q9lk\" (UID: \"823e1e7d-9555-4324-a7aa-6add85d4d9f3\") " pod="metallb-system/metallb-operator-controller-manager-79f77fb8f4-2q9lk" Jan 05 20:22:23 crc kubenswrapper[4754]: I0105 20:22:23.309907 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/823e1e7d-9555-4324-a7aa-6add85d4d9f3-apiservice-cert\") pod \"metallb-operator-controller-manager-79f77fb8f4-2q9lk\" (UID: \"823e1e7d-9555-4324-a7aa-6add85d4d9f3\") " pod="metallb-system/metallb-operator-controller-manager-79f77fb8f4-2q9lk" Jan 05 20:22:23 crc kubenswrapper[4754]: I0105 
20:22:23.310230 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/823e1e7d-9555-4324-a7aa-6add85d4d9f3-webhook-cert\") pod \"metallb-operator-controller-manager-79f77fb8f4-2q9lk\" (UID: \"823e1e7d-9555-4324-a7aa-6add85d4d9f3\") " pod="metallb-system/metallb-operator-controller-manager-79f77fb8f4-2q9lk"
Jan 05 20:22:23 crc kubenswrapper[4754]: I0105 20:22:23.355033 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-79f77fb8f4-2q9lk"]
Jan 05 20:22:23 crc kubenswrapper[4754]: I0105 20:22:23.411715 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/823e1e7d-9555-4324-a7aa-6add85d4d9f3-webhook-cert\") pod \"metallb-operator-controller-manager-79f77fb8f4-2q9lk\" (UID: \"823e1e7d-9555-4324-a7aa-6add85d4d9f3\") " pod="metallb-system/metallb-operator-controller-manager-79f77fb8f4-2q9lk"
Jan 05 20:22:23 crc kubenswrapper[4754]: I0105 20:22:23.411779 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sn7kj\" (UniqueName: \"kubernetes.io/projected/823e1e7d-9555-4324-a7aa-6add85d4d9f3-kube-api-access-sn7kj\") pod \"metallb-operator-controller-manager-79f77fb8f4-2q9lk\" (UID: \"823e1e7d-9555-4324-a7aa-6add85d4d9f3\") " pod="metallb-system/metallb-operator-controller-manager-79f77fb8f4-2q9lk"
Jan 05 20:22:23 crc kubenswrapper[4754]: I0105 20:22:23.411814 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/823e1e7d-9555-4324-a7aa-6add85d4d9f3-apiservice-cert\") pod \"metallb-operator-controller-manager-79f77fb8f4-2q9lk\" (UID: \"823e1e7d-9555-4324-a7aa-6add85d4d9f3\") " pod="metallb-system/metallb-operator-controller-manager-79f77fb8f4-2q9lk"
Jan 05 20:22:23 crc kubenswrapper[4754]: I0105 20:22:23.428188 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/823e1e7d-9555-4324-a7aa-6add85d4d9f3-webhook-cert\") pod \"metallb-operator-controller-manager-79f77fb8f4-2q9lk\" (UID: \"823e1e7d-9555-4324-a7aa-6add85d4d9f3\") " pod="metallb-system/metallb-operator-controller-manager-79f77fb8f4-2q9lk"
Jan 05 20:22:23 crc kubenswrapper[4754]: I0105 20:22:23.428974 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/823e1e7d-9555-4324-a7aa-6add85d4d9f3-apiservice-cert\") pod \"metallb-operator-controller-manager-79f77fb8f4-2q9lk\" (UID: \"823e1e7d-9555-4324-a7aa-6add85d4d9f3\") " pod="metallb-system/metallb-operator-controller-manager-79f77fb8f4-2q9lk"
Jan 05 20:22:23 crc kubenswrapper[4754]: I0105 20:22:23.429481 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sn7kj\" (UniqueName: \"kubernetes.io/projected/823e1e7d-9555-4324-a7aa-6add85d4d9f3-kube-api-access-sn7kj\") pod \"metallb-operator-controller-manager-79f77fb8f4-2q9lk\" (UID: \"823e1e7d-9555-4324-a7aa-6add85d4d9f3\") " pod="metallb-system/metallb-operator-controller-manager-79f77fb8f4-2q9lk"
Jan 05 20:22:23 crc kubenswrapper[4754]: I0105 20:22:23.522822 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-75f4999fb9-2ss2h"]
Jan 05 20:22:23 crc kubenswrapper[4754]: I0105 20:22:23.523993 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-75f4999fb9-2ss2h"
Jan 05 20:22:23 crc kubenswrapper[4754]: I0105 20:22:23.525955 4754 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Jan 05 20:22:23 crc kubenswrapper[4754]: I0105 20:22:23.526265 4754 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Jan 05 20:22:23 crc kubenswrapper[4754]: I0105 20:22:23.529565 4754 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-z8gxz"
Jan 05 20:22:23 crc kubenswrapper[4754]: I0105 20:22:23.548859 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-75f4999fb9-2ss2h"]
Jan 05 20:22:23 crc kubenswrapper[4754]: I0105 20:22:23.604152 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-79f77fb8f4-2q9lk"
Jan 05 20:22:23 crc kubenswrapper[4754]: I0105 20:22:23.615346 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cdb2b1f2-eb13-466c-bc69-5cb4307eb695-webhook-cert\") pod \"metallb-operator-webhook-server-75f4999fb9-2ss2h\" (UID: \"cdb2b1f2-eb13-466c-bc69-5cb4307eb695\") " pod="metallb-system/metallb-operator-webhook-server-75f4999fb9-2ss2h"
Jan 05 20:22:23 crc kubenswrapper[4754]: I0105 20:22:23.615477 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnfx7\" (UniqueName: \"kubernetes.io/projected/cdb2b1f2-eb13-466c-bc69-5cb4307eb695-kube-api-access-hnfx7\") pod \"metallb-operator-webhook-server-75f4999fb9-2ss2h\" (UID: \"cdb2b1f2-eb13-466c-bc69-5cb4307eb695\") " pod="metallb-system/metallb-operator-webhook-server-75f4999fb9-2ss2h"
Jan 05 20:22:23 crc kubenswrapper[4754]: I0105 20:22:23.615697 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cdb2b1f2-eb13-466c-bc69-5cb4307eb695-apiservice-cert\") pod \"metallb-operator-webhook-server-75f4999fb9-2ss2h\" (UID: \"cdb2b1f2-eb13-466c-bc69-5cb4307eb695\") " pod="metallb-system/metallb-operator-webhook-server-75f4999fb9-2ss2h"
Jan 05 20:22:23 crc kubenswrapper[4754]: I0105 20:22:23.717273 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnfx7\" (UniqueName: \"kubernetes.io/projected/cdb2b1f2-eb13-466c-bc69-5cb4307eb695-kube-api-access-hnfx7\") pod \"metallb-operator-webhook-server-75f4999fb9-2ss2h\" (UID: \"cdb2b1f2-eb13-466c-bc69-5cb4307eb695\") " pod="metallb-system/metallb-operator-webhook-server-75f4999fb9-2ss2h"
Jan 05 20:22:23 crc kubenswrapper[4754]: I0105 20:22:23.717694 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cdb2b1f2-eb13-466c-bc69-5cb4307eb695-apiservice-cert\") pod \"metallb-operator-webhook-server-75f4999fb9-2ss2h\" (UID: \"cdb2b1f2-eb13-466c-bc69-5cb4307eb695\") " pod="metallb-system/metallb-operator-webhook-server-75f4999fb9-2ss2h"
Jan 05 20:22:23 crc kubenswrapper[4754]: I0105 20:22:23.717726 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cdb2b1f2-eb13-466c-bc69-5cb4307eb695-webhook-cert\") pod \"metallb-operator-webhook-server-75f4999fb9-2ss2h\" (UID: \"cdb2b1f2-eb13-466c-bc69-5cb4307eb695\") " pod="metallb-system/metallb-operator-webhook-server-75f4999fb9-2ss2h"
Jan 05 20:22:23 crc kubenswrapper[4754]: I0105 20:22:23.723743 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cdb2b1f2-eb13-466c-bc69-5cb4307eb695-apiservice-cert\") pod \"metallb-operator-webhook-server-75f4999fb9-2ss2h\" (UID: \"cdb2b1f2-eb13-466c-bc69-5cb4307eb695\") " pod="metallb-system/metallb-operator-webhook-server-75f4999fb9-2ss2h"
Jan 05 20:22:23 crc kubenswrapper[4754]: I0105 20:22:23.732900 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cdb2b1f2-eb13-466c-bc69-5cb4307eb695-webhook-cert\") pod \"metallb-operator-webhook-server-75f4999fb9-2ss2h\" (UID: \"cdb2b1f2-eb13-466c-bc69-5cb4307eb695\") " pod="metallb-system/metallb-operator-webhook-server-75f4999fb9-2ss2h"
Jan 05 20:22:23 crc kubenswrapper[4754]: I0105 20:22:23.753499 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnfx7\" (UniqueName: \"kubernetes.io/projected/cdb2b1f2-eb13-466c-bc69-5cb4307eb695-kube-api-access-hnfx7\") pod \"metallb-operator-webhook-server-75f4999fb9-2ss2h\" (UID: \"cdb2b1f2-eb13-466c-bc69-5cb4307eb695\") " pod="metallb-system/metallb-operator-webhook-server-75f4999fb9-2ss2h"
Jan 05 20:22:23 crc kubenswrapper[4754]: I0105 20:22:23.855916 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-75f4999fb9-2ss2h"
Jan 05 20:22:24 crc kubenswrapper[4754]: I0105 20:22:24.137865 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-79f77fb8f4-2q9lk"]
Jan 05 20:22:24 crc kubenswrapper[4754]: W0105 20:22:24.351808 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcdb2b1f2_eb13_466c_bc69_5cb4307eb695.slice/crio-2dc80a873a69708c0ba333864982a0f55f64d6c52913d6b3e9a020fd3e6668a7 WatchSource:0}: Error finding container 2dc80a873a69708c0ba333864982a0f55f64d6c52913d6b3e9a020fd3e6668a7: Status 404 returned error can't find the container with id 2dc80a873a69708c0ba333864982a0f55f64d6c52913d6b3e9a020fd3e6668a7
Jan 05 20:22:24 crc kubenswrapper[4754]: I0105 20:22:24.354493 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-75f4999fb9-2ss2h"]
Jan 05 20:22:25 crc kubenswrapper[4754]: I0105 20:22:25.063736 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-75f4999fb9-2ss2h" event={"ID":"cdb2b1f2-eb13-466c-bc69-5cb4307eb695","Type":"ContainerStarted","Data":"2dc80a873a69708c0ba333864982a0f55f64d6c52913d6b3e9a020fd3e6668a7"}
Jan 05 20:22:25 crc kubenswrapper[4754]: I0105 20:22:25.065381 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-79f77fb8f4-2q9lk" event={"ID":"823e1e7d-9555-4324-a7aa-6add85d4d9f3","Type":"ContainerStarted","Data":"8daa22982a22eab1989b234e3096e57d5bf641a29e8c0f41e585bfdac95f9c59"}
Jan 05 20:22:31 crc kubenswrapper[4754]: I0105 20:22:31.119930 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-79f77fb8f4-2q9lk" event={"ID":"823e1e7d-9555-4324-a7aa-6add85d4d9f3","Type":"ContainerStarted","Data":"413cbf61be6bacef7d9f7b10f4cde04fce84347e4feaffff105035ed04057174"}
Jan 05 20:22:31 crc kubenswrapper[4754]: I0105 20:22:31.120620 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-79f77fb8f4-2q9lk"
Jan 05 20:22:31 crc kubenswrapper[4754]: I0105 20:22:31.122557 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-75f4999fb9-2ss2h" event={"ID":"cdb2b1f2-eb13-466c-bc69-5cb4307eb695","Type":"ContainerStarted","Data":"81c28013411ea4f9e919a148eb6c549c2e35d138321f3a6b737a8f99eb0c4435"}
Jan 05 20:22:31 crc kubenswrapper[4754]: I0105 20:22:31.122944 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-75f4999fb9-2ss2h"
Jan 05 20:22:31 crc kubenswrapper[4754]: I0105 20:22:31.159219 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-79f77fb8f4-2q9lk" podStartSLOduration=1.903587575 podStartE2EDuration="8.159183176s" podCreationTimestamp="2026-01-05 20:22:23 +0000 UTC" firstStartedPulling="2026-01-05 20:22:24.157885334 +0000 UTC m=+1030.867069208" lastFinishedPulling="2026-01-05 20:22:30.413480935 +0000 UTC m=+1037.122664809" observedRunningTime="2026-01-05 20:22:31.142697715 +0000 UTC m=+1037.851881629" watchObservedRunningTime="2026-01-05 20:22:31.159183176 +0000 UTC m=+1037.868367090"
Jan 05 20:22:31 crc kubenswrapper[4754]: I0105 20:22:31.176133 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-75f4999fb9-2ss2h" podStartSLOduration=2.097251836 podStartE2EDuration="8.176101149s" podCreationTimestamp="2026-01-05 20:22:23 +0000 UTC" firstStartedPulling="2026-01-05 20:22:24.356955416 +0000 UTC m=+1031.066139290" lastFinishedPulling="2026-01-05 20:22:30.435804729 +0000 UTC m=+1037.144988603" observedRunningTime="2026-01-05 20:22:31.16772346 +0000 UTC m=+1037.876907374" watchObservedRunningTime="2026-01-05 20:22:31.176101149 +0000 UTC m=+1037.885285063"
Jan 05 20:22:43 crc kubenswrapper[4754]: I0105 20:22:43.869649 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-75f4999fb9-2ss2h"
Jan 05 20:23:03 crc kubenswrapper[4754]: I0105 20:23:03.607090 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-79f77fb8f4-2q9lk"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.537693 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-gkzx4"]
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.548705 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-gkzx4"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.553319 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-f7vfs"]
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.554614 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.554825 4754 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-8s5kp"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.554941 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-f7vfs"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.555596 4754 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.560379 4754 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.564099 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-f7vfs"]
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.640949 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-z6whp"]
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.646878 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-z6whp"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.651158 4754 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.651441 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.651603 4754 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.652001 4754 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-wnspn"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.651598 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-5bddd4b946-b62cb"]
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.653020 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/c584577b-8f80-4506-9fa5-3f8e9df40f02-frr-sockets\") pod \"frr-k8s-gkzx4\" (UID: \"c584577b-8f80-4506-9fa5-3f8e9df40f02\") " pod="metallb-system/frr-k8s-gkzx4"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.653089 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1b800ad7-0ece-4722-9ad1-e20a2b5c7d42-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-f7vfs\" (UID: \"1b800ad7-0ece-4722-9ad1-e20a2b5c7d42\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-f7vfs"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.653122 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76t6p\" (UniqueName: \"kubernetes.io/projected/c584577b-8f80-4506-9fa5-3f8e9df40f02-kube-api-access-76t6p\") pod \"frr-k8s-gkzx4\" (UID: \"c584577b-8f80-4506-9fa5-3f8e9df40f02\") " pod="metallb-system/frr-k8s-gkzx4"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.653156 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/c584577b-8f80-4506-9fa5-3f8e9df40f02-frr-startup\") pod \"frr-k8s-gkzx4\" (UID: \"c584577b-8f80-4506-9fa5-3f8e9df40f02\") " pod="metallb-system/frr-k8s-gkzx4"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.653194 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c584577b-8f80-4506-9fa5-3f8e9df40f02-metrics-certs\") pod \"frr-k8s-gkzx4\" (UID: \"c584577b-8f80-4506-9fa5-3f8e9df40f02\") " pod="metallb-system/frr-k8s-gkzx4"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.653226 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvn66\" (UniqueName: \"kubernetes.io/projected/1b800ad7-0ece-4722-9ad1-e20a2b5c7d42-kube-api-access-xvn66\") pod \"frr-k8s-webhook-server-7784b6fcf-f7vfs\" (UID: \"1b800ad7-0ece-4722-9ad1-e20a2b5c7d42\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-f7vfs"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.653283 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/c584577b-8f80-4506-9fa5-3f8e9df40f02-metrics\") pod \"frr-k8s-gkzx4\" (UID: \"c584577b-8f80-4506-9fa5-3f8e9df40f02\") " pod="metallb-system/frr-k8s-gkzx4"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.653327 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/c584577b-8f80-4506-9fa5-3f8e9df40f02-reloader\") pod \"frr-k8s-gkzx4\" (UID: \"c584577b-8f80-4506-9fa5-3f8e9df40f02\") " pod="metallb-system/frr-k8s-gkzx4"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.653356 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/c584577b-8f80-4506-9fa5-3f8e9df40f02-frr-conf\") pod \"frr-k8s-gkzx4\" (UID: \"c584577b-8f80-4506-9fa5-3f8e9df40f02\") " pod="metallb-system/frr-k8s-gkzx4"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.653597 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-5bddd4b946-b62cb"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.659340 4754 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.668996 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5bddd4b946-b62cb"]
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.755745 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvn66\" (UniqueName: \"kubernetes.io/projected/1b800ad7-0ece-4722-9ad1-e20a2b5c7d42-kube-api-access-xvn66\") pod \"frr-k8s-webhook-server-7784b6fcf-f7vfs\" (UID: \"1b800ad7-0ece-4722-9ad1-e20a2b5c7d42\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-f7vfs"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.755823 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8118d4b3-34f3-49b4-ab29-1a2b17adacfb-metallb-excludel2\") pod \"speaker-z6whp\" (UID: \"8118d4b3-34f3-49b4-ab29-1a2b17adacfb\") " pod="metallb-system/speaker-z6whp"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.755900 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/c584577b-8f80-4506-9fa5-3f8e9df40f02-metrics\") pod \"frr-k8s-gkzx4\" (UID: \"c584577b-8f80-4506-9fa5-3f8e9df40f02\") " pod="metallb-system/frr-k8s-gkzx4"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.755934 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/c584577b-8f80-4506-9fa5-3f8e9df40f02-reloader\") pod \"frr-k8s-gkzx4\" (UID: \"c584577b-8f80-4506-9fa5-3f8e9df40f02\") " pod="metallb-system/frr-k8s-gkzx4"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.756095 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/c584577b-8f80-4506-9fa5-3f8e9df40f02-frr-conf\") pod \"frr-k8s-gkzx4\" (UID: \"c584577b-8f80-4506-9fa5-3f8e9df40f02\") " pod="metallb-system/frr-k8s-gkzx4"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.756135 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2f15c30e-3828-471c-8e71-3573735397a1-metrics-certs\") pod \"controller-5bddd4b946-b62cb\" (UID: \"2f15c30e-3828-471c-8e71-3573735397a1\") " pod="metallb-system/controller-5bddd4b946-b62cb"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.756167 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8118d4b3-34f3-49b4-ab29-1a2b17adacfb-metrics-certs\") pod \"speaker-z6whp\" (UID: \"8118d4b3-34f3-49b4-ab29-1a2b17adacfb\") " pod="metallb-system/speaker-z6whp"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.756196 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/c584577b-8f80-4506-9fa5-3f8e9df40f02-frr-sockets\") pod \"frr-k8s-gkzx4\" (UID: \"c584577b-8f80-4506-9fa5-3f8e9df40f02\") " pod="metallb-system/frr-k8s-gkzx4"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.756215 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8118d4b3-34f3-49b4-ab29-1a2b17adacfb-memberlist\") pod \"speaker-z6whp\" (UID: \"8118d4b3-34f3-49b4-ab29-1a2b17adacfb\") " pod="metallb-system/speaker-z6whp"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.756243 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1b800ad7-0ece-4722-9ad1-e20a2b5c7d42-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-f7vfs\" (UID: \"1b800ad7-0ece-4722-9ad1-e20a2b5c7d42\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-f7vfs"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.756262 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76t6p\" (UniqueName: \"kubernetes.io/projected/c584577b-8f80-4506-9fa5-3f8e9df40f02-kube-api-access-76t6p\") pod \"frr-k8s-gkzx4\" (UID: \"c584577b-8f80-4506-9fa5-3f8e9df40f02\") " pod="metallb-system/frr-k8s-gkzx4"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.756281 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2f15c30e-3828-471c-8e71-3573735397a1-cert\") pod \"controller-5bddd4b946-b62cb\" (UID: \"2f15c30e-3828-471c-8e71-3573735397a1\") " pod="metallb-system/controller-5bddd4b946-b62cb"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.756328 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/c584577b-8f80-4506-9fa5-3f8e9df40f02-frr-startup\") pod \"frr-k8s-gkzx4\" (UID: \"c584577b-8f80-4506-9fa5-3f8e9df40f02\") " pod="metallb-system/frr-k8s-gkzx4"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.756353 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6gj9\" (UniqueName: \"kubernetes.io/projected/2f15c30e-3828-471c-8e71-3573735397a1-kube-api-access-j6gj9\") pod \"controller-5bddd4b946-b62cb\" (UID: \"2f15c30e-3828-471c-8e71-3573735397a1\") " pod="metallb-system/controller-5bddd4b946-b62cb"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.756381 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj2zm\" (UniqueName: \"kubernetes.io/projected/8118d4b3-34f3-49b4-ab29-1a2b17adacfb-kube-api-access-wj2zm\") pod \"speaker-z6whp\" (UID: \"8118d4b3-34f3-49b4-ab29-1a2b17adacfb\") " pod="metallb-system/speaker-z6whp"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.756406 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c584577b-8f80-4506-9fa5-3f8e9df40f02-metrics-certs\") pod \"frr-k8s-gkzx4\" (UID: \"c584577b-8f80-4506-9fa5-3f8e9df40f02\") " pod="metallb-system/frr-k8s-gkzx4"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.756865 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/c584577b-8f80-4506-9fa5-3f8e9df40f02-reloader\") pod \"frr-k8s-gkzx4\" (UID: \"c584577b-8f80-4506-9fa5-3f8e9df40f02\") " pod="metallb-system/frr-k8s-gkzx4"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.757063 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/c584577b-8f80-4506-9fa5-3f8e9df40f02-metrics\") pod \"frr-k8s-gkzx4\" (UID: \"c584577b-8f80-4506-9fa5-3f8e9df40f02\") " pod="metallb-system/frr-k8s-gkzx4"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.757171 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/c584577b-8f80-4506-9fa5-3f8e9df40f02-frr-sockets\") pod \"frr-k8s-gkzx4\" (UID: \"c584577b-8f80-4506-9fa5-3f8e9df40f02\") " pod="metallb-system/frr-k8s-gkzx4"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.757320 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/c584577b-8f80-4506-9fa5-3f8e9df40f02-frr-conf\") pod \"frr-k8s-gkzx4\" (UID: \"c584577b-8f80-4506-9fa5-3f8e9df40f02\") " pod="metallb-system/frr-k8s-gkzx4"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.759096 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/c584577b-8f80-4506-9fa5-3f8e9df40f02-frr-startup\") pod \"frr-k8s-gkzx4\" (UID: \"c584577b-8f80-4506-9fa5-3f8e9df40f02\") " pod="metallb-system/frr-k8s-gkzx4"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.767734 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c584577b-8f80-4506-9fa5-3f8e9df40f02-metrics-certs\") pod \"frr-k8s-gkzx4\" (UID: \"c584577b-8f80-4506-9fa5-3f8e9df40f02\") " pod="metallb-system/frr-k8s-gkzx4"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.774862 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvn66\" (UniqueName: \"kubernetes.io/projected/1b800ad7-0ece-4722-9ad1-e20a2b5c7d42-kube-api-access-xvn66\") pod \"frr-k8s-webhook-server-7784b6fcf-f7vfs\" (UID: \"1b800ad7-0ece-4722-9ad1-e20a2b5c7d42\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-f7vfs"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.775945 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76t6p\" (UniqueName: \"kubernetes.io/projected/c584577b-8f80-4506-9fa5-3f8e9df40f02-kube-api-access-76t6p\") pod \"frr-k8s-gkzx4\" (UID: \"c584577b-8f80-4506-9fa5-3f8e9df40f02\") " pod="metallb-system/frr-k8s-gkzx4"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.777606 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1b800ad7-0ece-4722-9ad1-e20a2b5c7d42-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-f7vfs\" (UID: \"1b800ad7-0ece-4722-9ad1-e20a2b5c7d42\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-f7vfs"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.858219 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2f15c30e-3828-471c-8e71-3573735397a1-metrics-certs\") pod \"controller-5bddd4b946-b62cb\" (UID: \"2f15c30e-3828-471c-8e71-3573735397a1\") " pod="metallb-system/controller-5bddd4b946-b62cb"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.858620 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8118d4b3-34f3-49b4-ab29-1a2b17adacfb-metrics-certs\") pod \"speaker-z6whp\" (UID: \"8118d4b3-34f3-49b4-ab29-1a2b17adacfb\") " pod="metallb-system/speaker-z6whp"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.858654 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8118d4b3-34f3-49b4-ab29-1a2b17adacfb-memberlist\") pod \"speaker-z6whp\" (UID: \"8118d4b3-34f3-49b4-ab29-1a2b17adacfb\") " pod="metallb-system/speaker-z6whp"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.858682 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2f15c30e-3828-471c-8e71-3573735397a1-cert\") pod \"controller-5bddd4b946-b62cb\" (UID: \"2f15c30e-3828-471c-8e71-3573735397a1\") " pod="metallb-system/controller-5bddd4b946-b62cb"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.858711 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6gj9\" (UniqueName: \"kubernetes.io/projected/2f15c30e-3828-471c-8e71-3573735397a1-kube-api-access-j6gj9\") pod \"controller-5bddd4b946-b62cb\" (UID: \"2f15c30e-3828-471c-8e71-3573735397a1\") " pod="metallb-system/controller-5bddd4b946-b62cb"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.858728 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj2zm\" (UniqueName: \"kubernetes.io/projected/8118d4b3-34f3-49b4-ab29-1a2b17adacfb-kube-api-access-wj2zm\") pod \"speaker-z6whp\" (UID: \"8118d4b3-34f3-49b4-ab29-1a2b17adacfb\") " pod="metallb-system/speaker-z6whp"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.858759 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8118d4b3-34f3-49b4-ab29-1a2b17adacfb-metallb-excludel2\") pod \"speaker-z6whp\" (UID: \"8118d4b3-34f3-49b4-ab29-1a2b17adacfb\") " pod="metallb-system/speaker-z6whp"
Jan 05 20:23:04 crc kubenswrapper[4754]: E0105 20:23:04.859026 4754 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found
Jan 05 20:23:04 crc kubenswrapper[4754]: E0105 20:23:04.859109 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8118d4b3-34f3-49b4-ab29-1a2b17adacfb-metrics-certs podName:8118d4b3-34f3-49b4-ab29-1a2b17adacfb nodeName:}" failed. No retries permitted until 2026-01-05 20:23:05.359088361 +0000 UTC m=+1072.068272235 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8118d4b3-34f3-49b4-ab29-1a2b17adacfb-metrics-certs") pod "speaker-z6whp" (UID: "8118d4b3-34f3-49b4-ab29-1a2b17adacfb") : secret "speaker-certs-secret" not found
Jan 05 20:23:04 crc kubenswrapper[4754]: E0105 20:23:04.859226 4754 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 05 20:23:04 crc kubenswrapper[4754]: E0105 20:23:04.859265 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8118d4b3-34f3-49b4-ab29-1a2b17adacfb-memberlist podName:8118d4b3-34f3-49b4-ab29-1a2b17adacfb nodeName:}" failed. No retries permitted until 2026-01-05 20:23:05.359254825 +0000 UTC m=+1072.068438699 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/8118d4b3-34f3-49b4-ab29-1a2b17adacfb-memberlist") pod "speaker-z6whp" (UID: "8118d4b3-34f3-49b4-ab29-1a2b17adacfb") : secret "metallb-memberlist" not found
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.859849 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8118d4b3-34f3-49b4-ab29-1a2b17adacfb-metallb-excludel2\") pod \"speaker-z6whp\" (UID: \"8118d4b3-34f3-49b4-ab29-1a2b17adacfb\") " pod="metallb-system/speaker-z6whp"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.864391 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2f15c30e-3828-471c-8e71-3573735397a1-metrics-certs\") pod \"controller-5bddd4b946-b62cb\" (UID: \"2f15c30e-3828-471c-8e71-3573735397a1\") " pod="metallb-system/controller-5bddd4b946-b62cb"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.867792 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2f15c30e-3828-471c-8e71-3573735397a1-cert\") pod \"controller-5bddd4b946-b62cb\" (UID: \"2f15c30e-3828-471c-8e71-3573735397a1\") " pod="metallb-system/controller-5bddd4b946-b62cb"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.878029 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6gj9\" (UniqueName: \"kubernetes.io/projected/2f15c30e-3828-471c-8e71-3573735397a1-kube-api-access-j6gj9\") pod \"controller-5bddd4b946-b62cb\" (UID: \"2f15c30e-3828-471c-8e71-3573735397a1\") " pod="metallb-system/controller-5bddd4b946-b62cb"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.888163 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj2zm\" (UniqueName: \"kubernetes.io/projected/8118d4b3-34f3-49b4-ab29-1a2b17adacfb-kube-api-access-wj2zm\") pod \"speaker-z6whp\" (UID: \"8118d4b3-34f3-49b4-ab29-1a2b17adacfb\") " pod="metallb-system/speaker-z6whp"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.903054 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-gkzx4"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.918977 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-f7vfs"
Jan 05 20:23:04 crc kubenswrapper[4754]: I0105 20:23:04.988633 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-5bddd4b946-b62cb"
Jan 05 20:23:05 crc kubenswrapper[4754]: I0105 20:23:05.256823 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5bddd4b946-b62cb"]
Jan 05 20:23:05 crc kubenswrapper[4754]: I0105 20:23:05.368271 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8118d4b3-34f3-49b4-ab29-1a2b17adacfb-metrics-certs\") pod \"speaker-z6whp\" (UID: \"8118d4b3-34f3-49b4-ab29-1a2b17adacfb\") " pod="metallb-system/speaker-z6whp"
Jan 05 20:23:05 crc kubenswrapper[4754]: I0105 20:23:05.368329 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8118d4b3-34f3-49b4-ab29-1a2b17adacfb-memberlist\") pod \"speaker-z6whp\" (UID: \"8118d4b3-34f3-49b4-ab29-1a2b17adacfb\") " pod="metallb-system/speaker-z6whp"
Jan 05 20:23:05 crc kubenswrapper[4754]: E0105 20:23:05.368472 4754 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 05 20:23:05 crc kubenswrapper[4754]: E0105 20:23:05.368523 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8118d4b3-34f3-49b4-ab29-1a2b17adacfb-memberlist podName:8118d4b3-34f3-49b4-ab29-1a2b17adacfb nodeName:}" failed. No retries permitted until 2026-01-05 20:23:06.368509217 +0000 UTC m=+1073.077693091 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/8118d4b3-34f3-49b4-ab29-1a2b17adacfb-memberlist") pod "speaker-z6whp" (UID: "8118d4b3-34f3-49b4-ab29-1a2b17adacfb") : secret "metallb-memberlist" not found
Jan 05 20:23:05 crc kubenswrapper[4754]: I0105 20:23:05.373739 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8118d4b3-34f3-49b4-ab29-1a2b17adacfb-metrics-certs\") pod \"speaker-z6whp\" (UID: \"8118d4b3-34f3-49b4-ab29-1a2b17adacfb\") " pod="metallb-system/speaker-z6whp"
Jan 05 20:23:05 crc kubenswrapper[4754]: I0105 20:23:05.388196 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-f7vfs"]
Jan 05 20:23:05 crc kubenswrapper[4754]: W0105 20:23:05.392061 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b800ad7_0ece_4722_9ad1_e20a2b5c7d42.slice/crio-a0583776c3b737f8f072a872abd08bb31c19610f4df22f63f6098995bb0e91a4 WatchSource:0}: Error finding container a0583776c3b737f8f072a872abd08bb31c19610f4df22f63f6098995bb0e91a4: Status 404 returned error can't find the container with id a0583776c3b737f8f072a872abd08bb31c19610f4df22f63f6098995bb0e91a4
Jan 05 20:23:05 crc kubenswrapper[4754]: I0105 20:23:05.520527 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gkzx4" event={"ID":"c584577b-8f80-4506-9fa5-3f8e9df40f02","Type":"ContainerStarted","Data":"6c46d492cb5f1cb5c5349373efd4e5f8f8a82a3a3634c0c66b1a26979fb4f65a"}
Jan 05 20:23:05 crc kubenswrapper[4754]: I0105 20:23:05.522136 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-f7vfs" event={"ID":"1b800ad7-0ece-4722-9ad1-e20a2b5c7d42","Type":"ContainerStarted","Data":"a0583776c3b737f8f072a872abd08bb31c19610f4df22f63f6098995bb0e91a4"} Jan 05 20:23:05 crc kubenswrapper[4754]: I0105 20:23:05.524226 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-b62cb" event={"ID":"2f15c30e-3828-471c-8e71-3573735397a1","Type":"ContainerStarted","Data":"ba1244a6d20af052e91f0c943ffd11d0a5e6fe4918b928b8e59ff886cc83d6ca"} Jan 05 20:23:05 crc kubenswrapper[4754]: I0105 20:23:05.524267 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-b62cb" event={"ID":"2f15c30e-3828-471c-8e71-3573735397a1","Type":"ContainerStarted","Data":"2f468fd60fe151a91bf6dd19f396926fc262b4d061a7794b58846db4c34630f4"} Jan 05 20:23:06 crc kubenswrapper[4754]: I0105 20:23:06.391495 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8118d4b3-34f3-49b4-ab29-1a2b17adacfb-memberlist\") pod \"speaker-z6whp\" (UID: \"8118d4b3-34f3-49b4-ab29-1a2b17adacfb\") " pod="metallb-system/speaker-z6whp" Jan 05 20:23:06 crc kubenswrapper[4754]: I0105 20:23:06.396891 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8118d4b3-34f3-49b4-ab29-1a2b17adacfb-memberlist\") pod \"speaker-z6whp\" (UID: \"8118d4b3-34f3-49b4-ab29-1a2b17adacfb\") " pod="metallb-system/speaker-z6whp" Jan 05 20:23:06 crc kubenswrapper[4754]: I0105 20:23:06.477827 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-z6whp" Jan 05 20:23:06 crc kubenswrapper[4754]: W0105 20:23:06.517239 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8118d4b3_34f3_49b4_ab29_1a2b17adacfb.slice/crio-08185b0e67ce8852e28cd0fe0ed83b9003d0323a831f3f39337763a6b54df78d WatchSource:0}: Error finding container 08185b0e67ce8852e28cd0fe0ed83b9003d0323a831f3f39337763a6b54df78d: Status 404 returned error can't find the container with id 08185b0e67ce8852e28cd0fe0ed83b9003d0323a831f3f39337763a6b54df78d Jan 05 20:23:06 crc kubenswrapper[4754]: I0105 20:23:06.535418 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-z6whp" event={"ID":"8118d4b3-34f3-49b4-ab29-1a2b17adacfb","Type":"ContainerStarted","Data":"08185b0e67ce8852e28cd0fe0ed83b9003d0323a831f3f39337763a6b54df78d"} Jan 05 20:23:06 crc kubenswrapper[4754]: I0105 20:23:06.539444 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-b62cb" event={"ID":"2f15c30e-3828-471c-8e71-3573735397a1","Type":"ContainerStarted","Data":"4f2fe266729b8f5a003e279a5e111391426cbc73d63acbd5af166760c29282cf"} Jan 05 20:23:06 crc kubenswrapper[4754]: I0105 20:23:06.539742 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-5bddd4b946-b62cb" Jan 05 20:23:06 crc kubenswrapper[4754]: I0105 20:23:06.566863 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-5bddd4b946-b62cb" podStartSLOduration=2.566830987 podStartE2EDuration="2.566830987s" podCreationTimestamp="2026-01-05 20:23:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:23:06.559130495 +0000 UTC m=+1073.268314409" watchObservedRunningTime="2026-01-05 20:23:06.566830987 +0000 UTC m=+1073.276014901" Jan 05 20:23:07 crc 
kubenswrapper[4754]: I0105 20:23:07.549334 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-z6whp" event={"ID":"8118d4b3-34f3-49b4-ab29-1a2b17adacfb","Type":"ContainerStarted","Data":"8a521d579e00651699f54f135635cd133267ec687638e7370d538172b7e09ac7"} Jan 05 20:23:07 crc kubenswrapper[4754]: I0105 20:23:07.549694 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-z6whp" event={"ID":"8118d4b3-34f3-49b4-ab29-1a2b17adacfb","Type":"ContainerStarted","Data":"c7f4da7936623719fb57b754d729c787677f1fbaa197d4097d4fe5f4ce366b03"} Jan 05 20:23:07 crc kubenswrapper[4754]: I0105 20:23:07.571141 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-z6whp" podStartSLOduration=3.5711243379999997 podStartE2EDuration="3.571124338s" podCreationTimestamp="2026-01-05 20:23:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:23:07.570020369 +0000 UTC m=+1074.279204263" watchObservedRunningTime="2026-01-05 20:23:07.571124338 +0000 UTC m=+1074.280308212" Jan 05 20:23:08 crc kubenswrapper[4754]: I0105 20:23:08.557091 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-z6whp" Jan 05 20:23:13 crc kubenswrapper[4754]: I0105 20:23:13.611203 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-f7vfs" event={"ID":"1b800ad7-0ece-4722-9ad1-e20a2b5c7d42","Type":"ContainerStarted","Data":"d1524489c26291a0d6f9d8b5a3809ad16337c7770f87ee35128ed0c9b3fc6f54"} Jan 05 20:23:13 crc kubenswrapper[4754]: I0105 20:23:13.613385 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-f7vfs" Jan 05 20:23:13 crc kubenswrapper[4754]: I0105 20:23:13.616413 4754 generic.go:334] "Generic (PLEG): container finished" 
podID="c584577b-8f80-4506-9fa5-3f8e9df40f02" containerID="5ad0416c68da34dea07550a3b368df2931e6da05ba3463ae7b06b3823bfa2611" exitCode=0 Jan 05 20:23:13 crc kubenswrapper[4754]: I0105 20:23:13.616477 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gkzx4" event={"ID":"c584577b-8f80-4506-9fa5-3f8e9df40f02","Type":"ContainerDied","Data":"5ad0416c68da34dea07550a3b368df2931e6da05ba3463ae7b06b3823bfa2611"} Jan 05 20:23:13 crc kubenswrapper[4754]: I0105 20:23:13.660641 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-f7vfs" podStartSLOduration=2.243731668 podStartE2EDuration="9.66061434s" podCreationTimestamp="2026-01-05 20:23:04 +0000 UTC" firstStartedPulling="2026-01-05 20:23:05.394386654 +0000 UTC m=+1072.103570538" lastFinishedPulling="2026-01-05 20:23:12.811269306 +0000 UTC m=+1079.520453210" observedRunningTime="2026-01-05 20:23:13.657115339 +0000 UTC m=+1080.366299213" watchObservedRunningTime="2026-01-05 20:23:13.66061434 +0000 UTC m=+1080.369798244" Jan 05 20:23:14 crc kubenswrapper[4754]: I0105 20:23:14.632747 4754 generic.go:334] "Generic (PLEG): container finished" podID="c584577b-8f80-4506-9fa5-3f8e9df40f02" containerID="dd816548e72359d3ceb37eff69193b991e801e0f53a4282dffbb489eb6e08c23" exitCode=0 Jan 05 20:23:14 crc kubenswrapper[4754]: I0105 20:23:14.632814 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gkzx4" event={"ID":"c584577b-8f80-4506-9fa5-3f8e9df40f02","Type":"ContainerDied","Data":"dd816548e72359d3ceb37eff69193b991e801e0f53a4282dffbb489eb6e08c23"} Jan 05 20:23:15 crc kubenswrapper[4754]: I0105 20:23:15.648428 4754 generic.go:334] "Generic (PLEG): container finished" podID="c584577b-8f80-4506-9fa5-3f8e9df40f02" containerID="3cfc64cec25776c605713f52094c21ba9d1ba0ba8132f958a13dd48631f07810" exitCode=0 Jan 05 20:23:15 crc kubenswrapper[4754]: I0105 20:23:15.648553 4754 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="metallb-system/frr-k8s-gkzx4" event={"ID":"c584577b-8f80-4506-9fa5-3f8e9df40f02","Type":"ContainerDied","Data":"3cfc64cec25776c605713f52094c21ba9d1ba0ba8132f958a13dd48631f07810"} Jan 05 20:23:16 crc kubenswrapper[4754]: I0105 20:23:16.486559 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-z6whp" Jan 05 20:23:16 crc kubenswrapper[4754]: I0105 20:23:16.669703 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gkzx4" event={"ID":"c584577b-8f80-4506-9fa5-3f8e9df40f02","Type":"ContainerStarted","Data":"0e3e1c0b6d7d36b7fa7663cb63c5b088556f6604bea598297c33757171b34741"} Jan 05 20:23:16 crc kubenswrapper[4754]: I0105 20:23:16.669778 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gkzx4" event={"ID":"c584577b-8f80-4506-9fa5-3f8e9df40f02","Type":"ContainerStarted","Data":"b31ce003ed752775bb8b81fcddba7b6bf6b47544e121cf2b97eda2e99b960ef4"} Jan 05 20:23:16 crc kubenswrapper[4754]: I0105 20:23:16.669834 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gkzx4" event={"ID":"c584577b-8f80-4506-9fa5-3f8e9df40f02","Type":"ContainerStarted","Data":"7424f2b937a87f1742fa2dc5ad2d6fb47863b4fd40ea6f006ea431d99f70583f"} Jan 05 20:23:16 crc kubenswrapper[4754]: I0105 20:23:16.669856 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gkzx4" event={"ID":"c584577b-8f80-4506-9fa5-3f8e9df40f02","Type":"ContainerStarted","Data":"453a1e95f42a23135edf2ceef0d26c233b4b57e3f6079a3dae26bf1125097624"} Jan 05 20:23:17 crc kubenswrapper[4754]: I0105 20:23:17.706502 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gkzx4" event={"ID":"c584577b-8f80-4506-9fa5-3f8e9df40f02","Type":"ContainerStarted","Data":"f2c7a11006d0e4a4250e4f684d1460cdf585521ae150b4afd83db0f25b6d3541"} Jan 05 20:23:17 crc kubenswrapper[4754]: I0105 20:23:17.706990 4754 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="metallb-system/frr-k8s-gkzx4" event={"ID":"c584577b-8f80-4506-9fa5-3f8e9df40f02","Type":"ContainerStarted","Data":"bab25bc311a9d7d865f935993637c149be728b724c9bd01262e1466c6e3f333b"} Jan 05 20:23:17 crc kubenswrapper[4754]: I0105 20:23:17.708164 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-gkzx4" Jan 05 20:23:17 crc kubenswrapper[4754]: I0105 20:23:17.759651 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-gkzx4" podStartSLOduration=6.248666652 podStartE2EDuration="13.759627556s" podCreationTimestamp="2026-01-05 20:23:04 +0000 UTC" firstStartedPulling="2026-01-05 20:23:05.333450179 +0000 UTC m=+1072.042634053" lastFinishedPulling="2026-01-05 20:23:12.844411033 +0000 UTC m=+1079.553594957" observedRunningTime="2026-01-05 20:23:17.756761181 +0000 UTC m=+1084.465945145" watchObservedRunningTime="2026-01-05 20:23:17.759627556 +0000 UTC m=+1084.468811440" Jan 05 20:23:18 crc kubenswrapper[4754]: I0105 20:23:18.109639 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 20:23:18 crc kubenswrapper[4754]: I0105 20:23:18.109740 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 20:23:19 crc kubenswrapper[4754]: I0105 20:23:19.810283 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-xvlln"] Jan 05 20:23:19 crc kubenswrapper[4754]: I0105 20:23:19.811964 4754 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-xvlln" Jan 05 20:23:19 crc kubenswrapper[4754]: I0105 20:23:19.815200 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 05 20:23:19 crc kubenswrapper[4754]: I0105 20:23:19.815555 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-zppsw" Jan 05 20:23:19 crc kubenswrapper[4754]: I0105 20:23:19.818165 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 05 20:23:19 crc kubenswrapper[4754]: I0105 20:23:19.840202 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-xvlln"] Jan 05 20:23:19 crc kubenswrapper[4754]: I0105 20:23:19.904344 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-gkzx4" Jan 05 20:23:19 crc kubenswrapper[4754]: I0105 20:23:19.942444 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-gkzx4" Jan 05 20:23:19 crc kubenswrapper[4754]: I0105 20:23:19.972831 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbtnh\" (UniqueName: \"kubernetes.io/projected/b820cd7c-38de-4f87-8251-c87915e53cb7-kube-api-access-lbtnh\") pod \"openstack-operator-index-xvlln\" (UID: \"b820cd7c-38de-4f87-8251-c87915e53cb7\") " pod="openstack-operators/openstack-operator-index-xvlln" Jan 05 20:23:20 crc kubenswrapper[4754]: I0105 20:23:20.074088 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbtnh\" (UniqueName: \"kubernetes.io/projected/b820cd7c-38de-4f87-8251-c87915e53cb7-kube-api-access-lbtnh\") pod \"openstack-operator-index-xvlln\" (UID: \"b820cd7c-38de-4f87-8251-c87915e53cb7\") " 
pod="openstack-operators/openstack-operator-index-xvlln" Jan 05 20:23:20 crc kubenswrapper[4754]: I0105 20:23:20.101948 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbtnh\" (UniqueName: \"kubernetes.io/projected/b820cd7c-38de-4f87-8251-c87915e53cb7-kube-api-access-lbtnh\") pod \"openstack-operator-index-xvlln\" (UID: \"b820cd7c-38de-4f87-8251-c87915e53cb7\") " pod="openstack-operators/openstack-operator-index-xvlln" Jan 05 20:23:20 crc kubenswrapper[4754]: I0105 20:23:20.133405 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-xvlln" Jan 05 20:23:20 crc kubenswrapper[4754]: I0105 20:23:20.438639 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-xvlln"] Jan 05 20:23:20 crc kubenswrapper[4754]: I0105 20:23:20.765534 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xvlln" event={"ID":"b820cd7c-38de-4f87-8251-c87915e53cb7","Type":"ContainerStarted","Data":"ecbcc9e0f50c28bd3a0cf20606be8d3d257de813e1e26c6c19db9b0a142f1129"} Jan 05 20:23:23 crc kubenswrapper[4754]: I0105 20:23:23.185060 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-xvlln"] Jan 05 20:23:23 crc kubenswrapper[4754]: I0105 20:23:23.787212 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-zv7sc"] Jan 05 20:23:23 crc kubenswrapper[4754]: I0105 20:23:23.788738 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-zv7sc" Jan 05 20:23:23 crc kubenswrapper[4754]: I0105 20:23:23.795782 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xvlln" event={"ID":"b820cd7c-38de-4f87-8251-c87915e53cb7","Type":"ContainerStarted","Data":"319ebb4b0e2bdbe8a36b3974f770086440b4fa3bdfc5836289b23412201358ad"} Jan 05 20:23:23 crc kubenswrapper[4754]: I0105 20:23:23.795949 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-xvlln" podUID="b820cd7c-38de-4f87-8251-c87915e53cb7" containerName="registry-server" containerID="cri-o://319ebb4b0e2bdbe8a36b3974f770086440b4fa3bdfc5836289b23412201358ad" gracePeriod=2 Jan 05 20:23:23 crc kubenswrapper[4754]: I0105 20:23:23.807498 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-zv7sc"] Jan 05 20:23:23 crc kubenswrapper[4754]: I0105 20:23:23.847826 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-xvlln" podStartSLOduration=2.010584831 podStartE2EDuration="4.847806674s" podCreationTimestamp="2026-01-05 20:23:19 +0000 UTC" firstStartedPulling="2026-01-05 20:23:20.454690578 +0000 UTC m=+1087.163874452" lastFinishedPulling="2026-01-05 20:23:23.291912421 +0000 UTC m=+1090.001096295" observedRunningTime="2026-01-05 20:23:23.844144348 +0000 UTC m=+1090.553328222" watchObservedRunningTime="2026-01-05 20:23:23.847806674 +0000 UTC m=+1090.556990548" Jan 05 20:23:23 crc kubenswrapper[4754]: I0105 20:23:23.975418 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5g2j\" (UniqueName: \"kubernetes.io/projected/3f2e87f2-d218-4699-81bb-6156676884d3-kube-api-access-c5g2j\") pod \"openstack-operator-index-zv7sc\" (UID: \"3f2e87f2-d218-4699-81bb-6156676884d3\") " 
pod="openstack-operators/openstack-operator-index-zv7sc" Jan 05 20:23:24 crc kubenswrapper[4754]: I0105 20:23:24.077255 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5g2j\" (UniqueName: \"kubernetes.io/projected/3f2e87f2-d218-4699-81bb-6156676884d3-kube-api-access-c5g2j\") pod \"openstack-operator-index-zv7sc\" (UID: \"3f2e87f2-d218-4699-81bb-6156676884d3\") " pod="openstack-operators/openstack-operator-index-zv7sc" Jan 05 20:23:24 crc kubenswrapper[4754]: I0105 20:23:24.117037 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5g2j\" (UniqueName: \"kubernetes.io/projected/3f2e87f2-d218-4699-81bb-6156676884d3-kube-api-access-c5g2j\") pod \"openstack-operator-index-zv7sc\" (UID: \"3f2e87f2-d218-4699-81bb-6156676884d3\") " pod="openstack-operators/openstack-operator-index-zv7sc" Jan 05 20:23:24 crc kubenswrapper[4754]: I0105 20:23:24.118694 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-zv7sc" Jan 05 20:23:24 crc kubenswrapper[4754]: I0105 20:23:24.291842 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-xvlln" Jan 05 20:23:24 crc kubenswrapper[4754]: I0105 20:23:24.381872 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbtnh\" (UniqueName: \"kubernetes.io/projected/b820cd7c-38de-4f87-8251-c87915e53cb7-kube-api-access-lbtnh\") pod \"b820cd7c-38de-4f87-8251-c87915e53cb7\" (UID: \"b820cd7c-38de-4f87-8251-c87915e53cb7\") " Jan 05 20:23:24 crc kubenswrapper[4754]: I0105 20:23:24.395504 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b820cd7c-38de-4f87-8251-c87915e53cb7-kube-api-access-lbtnh" (OuterVolumeSpecName: "kube-api-access-lbtnh") pod "b820cd7c-38de-4f87-8251-c87915e53cb7" (UID: "b820cd7c-38de-4f87-8251-c87915e53cb7"). InnerVolumeSpecName "kube-api-access-lbtnh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:23:24 crc kubenswrapper[4754]: I0105 20:23:24.483382 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbtnh\" (UniqueName: \"kubernetes.io/projected/b820cd7c-38de-4f87-8251-c87915e53cb7-kube-api-access-lbtnh\") on node \"crc\" DevicePath \"\"" Jan 05 20:23:24 crc kubenswrapper[4754]: I0105 20:23:24.723144 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-zv7sc"] Jan 05 20:23:24 crc kubenswrapper[4754]: I0105 20:23:24.810235 4754 generic.go:334] "Generic (PLEG): container finished" podID="b820cd7c-38de-4f87-8251-c87915e53cb7" containerID="319ebb4b0e2bdbe8a36b3974f770086440b4fa3bdfc5836289b23412201358ad" exitCode=0 Jan 05 20:23:24 crc kubenswrapper[4754]: I0105 20:23:24.810347 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xvlln" event={"ID":"b820cd7c-38de-4f87-8251-c87915e53cb7","Type":"ContainerDied","Data":"319ebb4b0e2bdbe8a36b3974f770086440b4fa3bdfc5836289b23412201358ad"} Jan 05 20:23:24 crc kubenswrapper[4754]: I0105 
20:23:24.810394 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-xvlln" Jan 05 20:23:24 crc kubenswrapper[4754]: I0105 20:23:24.810438 4754 scope.go:117] "RemoveContainer" containerID="319ebb4b0e2bdbe8a36b3974f770086440b4fa3bdfc5836289b23412201358ad" Jan 05 20:23:24 crc kubenswrapper[4754]: I0105 20:23:24.810420 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xvlln" event={"ID":"b820cd7c-38de-4f87-8251-c87915e53cb7","Type":"ContainerDied","Data":"ecbcc9e0f50c28bd3a0cf20606be8d3d257de813e1e26c6c19db9b0a142f1129"} Jan 05 20:23:24 crc kubenswrapper[4754]: I0105 20:23:24.813725 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zv7sc" event={"ID":"3f2e87f2-d218-4699-81bb-6156676884d3","Type":"ContainerStarted","Data":"24cb7ec00527b9cd116b8ff3ef48b3feb52bf0964fd2cc8e22e7f5554ff2a37d"} Jan 05 20:23:24 crc kubenswrapper[4754]: I0105 20:23:24.859356 4754 scope.go:117] "RemoveContainer" containerID="319ebb4b0e2bdbe8a36b3974f770086440b4fa3bdfc5836289b23412201358ad" Jan 05 20:23:24 crc kubenswrapper[4754]: E0105 20:23:24.861665 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"319ebb4b0e2bdbe8a36b3974f770086440b4fa3bdfc5836289b23412201358ad\": container with ID starting with 319ebb4b0e2bdbe8a36b3974f770086440b4fa3bdfc5836289b23412201358ad not found: ID does not exist" containerID="319ebb4b0e2bdbe8a36b3974f770086440b4fa3bdfc5836289b23412201358ad" Jan 05 20:23:24 crc kubenswrapper[4754]: I0105 20:23:24.861729 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"319ebb4b0e2bdbe8a36b3974f770086440b4fa3bdfc5836289b23412201358ad"} err="failed to get container status \"319ebb4b0e2bdbe8a36b3974f770086440b4fa3bdfc5836289b23412201358ad\": rpc error: code = NotFound desc = 
could not find container \"319ebb4b0e2bdbe8a36b3974f770086440b4fa3bdfc5836289b23412201358ad\": container with ID starting with 319ebb4b0e2bdbe8a36b3974f770086440b4fa3bdfc5836289b23412201358ad not found: ID does not exist" Jan 05 20:23:24 crc kubenswrapper[4754]: I0105 20:23:24.925846 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-f7vfs" Jan 05 20:23:24 crc kubenswrapper[4754]: I0105 20:23:24.973177 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-xvlln"] Jan 05 20:23:24 crc kubenswrapper[4754]: I0105 20:23:24.990684 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-xvlln"] Jan 05 20:23:24 crc kubenswrapper[4754]: I0105 20:23:24.994112 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-5bddd4b946-b62cb" Jan 05 20:23:25 crc kubenswrapper[4754]: I0105 20:23:25.607203 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b820cd7c-38de-4f87-8251-c87915e53cb7" path="/var/lib/kubelet/pods/b820cd7c-38de-4f87-8251-c87915e53cb7/volumes" Jan 05 20:23:25 crc kubenswrapper[4754]: I0105 20:23:25.828504 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zv7sc" event={"ID":"3f2e87f2-d218-4699-81bb-6156676884d3","Type":"ContainerStarted","Data":"2b14d9a6598cfb377f69af4316f7da54c5f5a60c8cd48e91d026e6c7d67b6113"} Jan 05 20:23:25 crc kubenswrapper[4754]: I0105 20:23:25.853192 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-zv7sc" podStartSLOduration=2.791336732 podStartE2EDuration="2.853167761s" podCreationTimestamp="2026-01-05 20:23:23 +0000 UTC" firstStartedPulling="2026-01-05 20:23:24.73159503 +0000 UTC m=+1091.440778914" lastFinishedPulling="2026-01-05 20:23:24.793426029 +0000 UTC 
m=+1091.502609943" observedRunningTime="2026-01-05 20:23:25.845911921 +0000 UTC m=+1092.555095845" watchObservedRunningTime="2026-01-05 20:23:25.853167761 +0000 UTC m=+1092.562351665" Jan 05 20:23:34 crc kubenswrapper[4754]: I0105 20:23:34.120856 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-zv7sc" Jan 05 20:23:34 crc kubenswrapper[4754]: I0105 20:23:34.121389 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-zv7sc" Jan 05 20:23:34 crc kubenswrapper[4754]: I0105 20:23:34.148249 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-zv7sc" Jan 05 20:23:34 crc kubenswrapper[4754]: I0105 20:23:34.913133 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-gkzx4" Jan 05 20:23:34 crc kubenswrapper[4754]: I0105 20:23:34.981031 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-zv7sc" Jan 05 20:23:36 crc kubenswrapper[4754]: I0105 20:23:36.673971 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cc3da999499718a40e2a8716609c4cea3f8ba07e0fdecd043d9caa917cffxs7"] Jan 05 20:23:36 crc kubenswrapper[4754]: E0105 20:23:36.674399 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b820cd7c-38de-4f87-8251-c87915e53cb7" containerName="registry-server" Jan 05 20:23:36 crc kubenswrapper[4754]: I0105 20:23:36.674415 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="b820cd7c-38de-4f87-8251-c87915e53cb7" containerName="registry-server" Jan 05 20:23:36 crc kubenswrapper[4754]: I0105 20:23:36.674635 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="b820cd7c-38de-4f87-8251-c87915e53cb7" containerName="registry-server" Jan 05 20:23:36 crc kubenswrapper[4754]: I0105 20:23:36.676144 
4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cc3da999499718a40e2a8716609c4cea3f8ba07e0fdecd043d9caa917cffxs7" Jan 05 20:23:36 crc kubenswrapper[4754]: I0105 20:23:36.679716 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-b5v5q" Jan 05 20:23:36 crc kubenswrapper[4754]: I0105 20:23:36.742621 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cc3da999499718a40e2a8716609c4cea3f8ba07e0fdecd043d9caa917cffxs7"] Jan 05 20:23:36 crc kubenswrapper[4754]: I0105 20:23:36.777481 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq7j5\" (UniqueName: \"kubernetes.io/projected/a3717de5-c997-42e5-85ad-ed9de666f879-kube-api-access-wq7j5\") pod \"cc3da999499718a40e2a8716609c4cea3f8ba07e0fdecd043d9caa917cffxs7\" (UID: \"a3717de5-c997-42e5-85ad-ed9de666f879\") " pod="openstack-operators/cc3da999499718a40e2a8716609c4cea3f8ba07e0fdecd043d9caa917cffxs7" Jan 05 20:23:36 crc kubenswrapper[4754]: I0105 20:23:36.777556 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a3717de5-c997-42e5-85ad-ed9de666f879-bundle\") pod \"cc3da999499718a40e2a8716609c4cea3f8ba07e0fdecd043d9caa917cffxs7\" (UID: \"a3717de5-c997-42e5-85ad-ed9de666f879\") " pod="openstack-operators/cc3da999499718a40e2a8716609c4cea3f8ba07e0fdecd043d9caa917cffxs7" Jan 05 20:23:36 crc kubenswrapper[4754]: I0105 20:23:36.777606 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a3717de5-c997-42e5-85ad-ed9de666f879-util\") pod \"cc3da999499718a40e2a8716609c4cea3f8ba07e0fdecd043d9caa917cffxs7\" (UID: \"a3717de5-c997-42e5-85ad-ed9de666f879\") " pod="openstack-operators/cc3da999499718a40e2a8716609c4cea3f8ba07e0fdecd043d9caa917cffxs7" Jan 
05 20:23:36 crc kubenswrapper[4754]: I0105 20:23:36.879351 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wq7j5\" (UniqueName: \"kubernetes.io/projected/a3717de5-c997-42e5-85ad-ed9de666f879-kube-api-access-wq7j5\") pod \"cc3da999499718a40e2a8716609c4cea3f8ba07e0fdecd043d9caa917cffxs7\" (UID: \"a3717de5-c997-42e5-85ad-ed9de666f879\") " pod="openstack-operators/cc3da999499718a40e2a8716609c4cea3f8ba07e0fdecd043d9caa917cffxs7" Jan 05 20:23:36 crc kubenswrapper[4754]: I0105 20:23:36.879428 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a3717de5-c997-42e5-85ad-ed9de666f879-bundle\") pod \"cc3da999499718a40e2a8716609c4cea3f8ba07e0fdecd043d9caa917cffxs7\" (UID: \"a3717de5-c997-42e5-85ad-ed9de666f879\") " pod="openstack-operators/cc3da999499718a40e2a8716609c4cea3f8ba07e0fdecd043d9caa917cffxs7" Jan 05 20:23:36 crc kubenswrapper[4754]: I0105 20:23:36.879489 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a3717de5-c997-42e5-85ad-ed9de666f879-util\") pod \"cc3da999499718a40e2a8716609c4cea3f8ba07e0fdecd043d9caa917cffxs7\" (UID: \"a3717de5-c997-42e5-85ad-ed9de666f879\") " pod="openstack-operators/cc3da999499718a40e2a8716609c4cea3f8ba07e0fdecd043d9caa917cffxs7" Jan 05 20:23:36 crc kubenswrapper[4754]: I0105 20:23:36.880027 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a3717de5-c997-42e5-85ad-ed9de666f879-util\") pod \"cc3da999499718a40e2a8716609c4cea3f8ba07e0fdecd043d9caa917cffxs7\" (UID: \"a3717de5-c997-42e5-85ad-ed9de666f879\") " pod="openstack-operators/cc3da999499718a40e2a8716609c4cea3f8ba07e0fdecd043d9caa917cffxs7" Jan 05 20:23:36 crc kubenswrapper[4754]: I0105 20:23:36.880344 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/a3717de5-c997-42e5-85ad-ed9de666f879-bundle\") pod \"cc3da999499718a40e2a8716609c4cea3f8ba07e0fdecd043d9caa917cffxs7\" (UID: \"a3717de5-c997-42e5-85ad-ed9de666f879\") " pod="openstack-operators/cc3da999499718a40e2a8716609c4cea3f8ba07e0fdecd043d9caa917cffxs7" Jan 05 20:23:36 crc kubenswrapper[4754]: I0105 20:23:36.906874 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq7j5\" (UniqueName: \"kubernetes.io/projected/a3717de5-c997-42e5-85ad-ed9de666f879-kube-api-access-wq7j5\") pod \"cc3da999499718a40e2a8716609c4cea3f8ba07e0fdecd043d9caa917cffxs7\" (UID: \"a3717de5-c997-42e5-85ad-ed9de666f879\") " pod="openstack-operators/cc3da999499718a40e2a8716609c4cea3f8ba07e0fdecd043d9caa917cffxs7" Jan 05 20:23:36 crc kubenswrapper[4754]: I0105 20:23:36.992573 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cc3da999499718a40e2a8716609c4cea3f8ba07e0fdecd043d9caa917cffxs7" Jan 05 20:23:37 crc kubenswrapper[4754]: I0105 20:23:37.481084 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cc3da999499718a40e2a8716609c4cea3f8ba07e0fdecd043d9caa917cffxs7"] Jan 05 20:23:37 crc kubenswrapper[4754]: W0105 20:23:37.489350 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3717de5_c997_42e5_85ad_ed9de666f879.slice/crio-84dff4bfccc15c3da4bfc5a5615083be693112a28bae4cfe951e137d3b1faa7a WatchSource:0}: Error finding container 84dff4bfccc15c3da4bfc5a5615083be693112a28bae4cfe951e137d3b1faa7a: Status 404 returned error can't find the container with id 84dff4bfccc15c3da4bfc5a5615083be693112a28bae4cfe951e137d3b1faa7a Jan 05 20:23:37 crc kubenswrapper[4754]: I0105 20:23:37.957431 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cc3da999499718a40e2a8716609c4cea3f8ba07e0fdecd043d9caa917cffxs7" 
event={"ID":"a3717de5-c997-42e5-85ad-ed9de666f879","Type":"ContainerStarted","Data":"84dff4bfccc15c3da4bfc5a5615083be693112a28bae4cfe951e137d3b1faa7a"} Jan 05 20:23:38 crc kubenswrapper[4754]: I0105 20:23:38.968587 4754 generic.go:334] "Generic (PLEG): container finished" podID="a3717de5-c997-42e5-85ad-ed9de666f879" containerID="b626046774c31546327d5e72effdf4afb8f05c3ecb1b0d4bc2da67252c831706" exitCode=0 Jan 05 20:23:38 crc kubenswrapper[4754]: I0105 20:23:38.968664 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cc3da999499718a40e2a8716609c4cea3f8ba07e0fdecd043d9caa917cffxs7" event={"ID":"a3717de5-c997-42e5-85ad-ed9de666f879","Type":"ContainerDied","Data":"b626046774c31546327d5e72effdf4afb8f05c3ecb1b0d4bc2da67252c831706"} Jan 05 20:23:39 crc kubenswrapper[4754]: I0105 20:23:39.980143 4754 generic.go:334] "Generic (PLEG): container finished" podID="a3717de5-c997-42e5-85ad-ed9de666f879" containerID="da28a957c1f1a01b974d3f501fcbce6bc41dd66d4c853bbbb6238c8f2e737c68" exitCode=0 Jan 05 20:23:39 crc kubenswrapper[4754]: I0105 20:23:39.980256 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cc3da999499718a40e2a8716609c4cea3f8ba07e0fdecd043d9caa917cffxs7" event={"ID":"a3717de5-c997-42e5-85ad-ed9de666f879","Type":"ContainerDied","Data":"da28a957c1f1a01b974d3f501fcbce6bc41dd66d4c853bbbb6238c8f2e737c68"} Jan 05 20:23:40 crc kubenswrapper[4754]: I0105 20:23:40.994867 4754 generic.go:334] "Generic (PLEG): container finished" podID="a3717de5-c997-42e5-85ad-ed9de666f879" containerID="b877444eb37c11e310f145e575bfbe1318f3a605358bafd66064a40b0c78e874" exitCode=0 Jan 05 20:23:40 crc kubenswrapper[4754]: I0105 20:23:40.994966 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cc3da999499718a40e2a8716609c4cea3f8ba07e0fdecd043d9caa917cffxs7" event={"ID":"a3717de5-c997-42e5-85ad-ed9de666f879","Type":"ContainerDied","Data":"b877444eb37c11e310f145e575bfbe1318f3a605358bafd66064a40b0c78e874"} Jan 05 
20:23:42 crc kubenswrapper[4754]: I0105 20:23:42.423016 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cc3da999499718a40e2a8716609c4cea3f8ba07e0fdecd043d9caa917cffxs7" Jan 05 20:23:42 crc kubenswrapper[4754]: I0105 20:23:42.598496 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a3717de5-c997-42e5-85ad-ed9de666f879-bundle\") pod \"a3717de5-c997-42e5-85ad-ed9de666f879\" (UID: \"a3717de5-c997-42e5-85ad-ed9de666f879\") " Jan 05 20:23:42 crc kubenswrapper[4754]: I0105 20:23:42.598773 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a3717de5-c997-42e5-85ad-ed9de666f879-util\") pod \"a3717de5-c997-42e5-85ad-ed9de666f879\" (UID: \"a3717de5-c997-42e5-85ad-ed9de666f879\") " Jan 05 20:23:42 crc kubenswrapper[4754]: I0105 20:23:42.598923 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wq7j5\" (UniqueName: \"kubernetes.io/projected/a3717de5-c997-42e5-85ad-ed9de666f879-kube-api-access-wq7j5\") pod \"a3717de5-c997-42e5-85ad-ed9de666f879\" (UID: \"a3717de5-c997-42e5-85ad-ed9de666f879\") " Jan 05 20:23:42 crc kubenswrapper[4754]: I0105 20:23:42.599619 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3717de5-c997-42e5-85ad-ed9de666f879-bundle" (OuterVolumeSpecName: "bundle") pod "a3717de5-c997-42e5-85ad-ed9de666f879" (UID: "a3717de5-c997-42e5-85ad-ed9de666f879"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:23:42 crc kubenswrapper[4754]: I0105 20:23:42.607244 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3717de5-c997-42e5-85ad-ed9de666f879-kube-api-access-wq7j5" (OuterVolumeSpecName: "kube-api-access-wq7j5") pod "a3717de5-c997-42e5-85ad-ed9de666f879" (UID: "a3717de5-c997-42e5-85ad-ed9de666f879"). InnerVolumeSpecName "kube-api-access-wq7j5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:23:42 crc kubenswrapper[4754]: I0105 20:23:42.631379 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3717de5-c997-42e5-85ad-ed9de666f879-util" (OuterVolumeSpecName: "util") pod "a3717de5-c997-42e5-85ad-ed9de666f879" (UID: "a3717de5-c997-42e5-85ad-ed9de666f879"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:23:42 crc kubenswrapper[4754]: I0105 20:23:42.702168 4754 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a3717de5-c997-42e5-85ad-ed9de666f879-util\") on node \"crc\" DevicePath \"\"" Jan 05 20:23:42 crc kubenswrapper[4754]: I0105 20:23:42.702578 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wq7j5\" (UniqueName: \"kubernetes.io/projected/a3717de5-c997-42e5-85ad-ed9de666f879-kube-api-access-wq7j5\") on node \"crc\" DevicePath \"\"" Jan 05 20:23:42 crc kubenswrapper[4754]: I0105 20:23:42.702622 4754 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a3717de5-c997-42e5-85ad-ed9de666f879-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:23:43 crc kubenswrapper[4754]: I0105 20:23:43.017212 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cc3da999499718a40e2a8716609c4cea3f8ba07e0fdecd043d9caa917cffxs7" 
event={"ID":"a3717de5-c997-42e5-85ad-ed9de666f879","Type":"ContainerDied","Data":"84dff4bfccc15c3da4bfc5a5615083be693112a28bae4cfe951e137d3b1faa7a"} Jan 05 20:23:43 crc kubenswrapper[4754]: I0105 20:23:43.017634 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84dff4bfccc15c3da4bfc5a5615083be693112a28bae4cfe951e137d3b1faa7a" Jan 05 20:23:43 crc kubenswrapper[4754]: I0105 20:23:43.017327 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cc3da999499718a40e2a8716609c4cea3f8ba07e0fdecd043d9caa917cffxs7" Jan 05 20:23:44 crc kubenswrapper[4754]: I0105 20:23:44.821730 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7bb596d4b9-gbtbf"] Jan 05 20:23:44 crc kubenswrapper[4754]: E0105 20:23:44.822988 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3717de5-c997-42e5-85ad-ed9de666f879" containerName="pull" Jan 05 20:23:44 crc kubenswrapper[4754]: I0105 20:23:44.823005 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3717de5-c997-42e5-85ad-ed9de666f879" containerName="pull" Jan 05 20:23:44 crc kubenswrapper[4754]: E0105 20:23:44.823069 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3717de5-c997-42e5-85ad-ed9de666f879" containerName="extract" Jan 05 20:23:44 crc kubenswrapper[4754]: I0105 20:23:44.823076 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3717de5-c997-42e5-85ad-ed9de666f879" containerName="extract" Jan 05 20:23:44 crc kubenswrapper[4754]: E0105 20:23:44.823090 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3717de5-c997-42e5-85ad-ed9de666f879" containerName="util" Jan 05 20:23:44 crc kubenswrapper[4754]: I0105 20:23:44.823097 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3717de5-c997-42e5-85ad-ed9de666f879" containerName="util" Jan 05 20:23:44 crc kubenswrapper[4754]: I0105 20:23:44.823404 4754 
memory_manager.go:354] "RemoveStaleState removing state" podUID="a3717de5-c997-42e5-85ad-ed9de666f879" containerName="extract" Jan 05 20:23:44 crc kubenswrapper[4754]: I0105 20:23:44.824133 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7bb596d4b9-gbtbf" Jan 05 20:23:44 crc kubenswrapper[4754]: I0105 20:23:44.831551 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-7ls49" Jan 05 20:23:44 crc kubenswrapper[4754]: I0105 20:23:44.850804 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88vxj\" (UniqueName: \"kubernetes.io/projected/6836e11d-3e01-4752-ba84-0ba74829283f-kube-api-access-88vxj\") pod \"openstack-operator-controller-operator-7bb596d4b9-gbtbf\" (UID: \"6836e11d-3e01-4752-ba84-0ba74829283f\") " pod="openstack-operators/openstack-operator-controller-operator-7bb596d4b9-gbtbf" Jan 05 20:23:44 crc kubenswrapper[4754]: I0105 20:23:44.870116 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7bb596d4b9-gbtbf"] Jan 05 20:23:44 crc kubenswrapper[4754]: I0105 20:23:44.953429 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88vxj\" (UniqueName: \"kubernetes.io/projected/6836e11d-3e01-4752-ba84-0ba74829283f-kube-api-access-88vxj\") pod \"openstack-operator-controller-operator-7bb596d4b9-gbtbf\" (UID: \"6836e11d-3e01-4752-ba84-0ba74829283f\") " pod="openstack-operators/openstack-operator-controller-operator-7bb596d4b9-gbtbf" Jan 05 20:23:45 crc kubenswrapper[4754]: I0105 20:23:45.003485 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88vxj\" (UniqueName: \"kubernetes.io/projected/6836e11d-3e01-4752-ba84-0ba74829283f-kube-api-access-88vxj\") pod 
\"openstack-operator-controller-operator-7bb596d4b9-gbtbf\" (UID: \"6836e11d-3e01-4752-ba84-0ba74829283f\") " pod="openstack-operators/openstack-operator-controller-operator-7bb596d4b9-gbtbf" Jan 05 20:23:45 crc kubenswrapper[4754]: I0105 20:23:45.143255 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7bb596d4b9-gbtbf" Jan 05 20:23:45 crc kubenswrapper[4754]: I0105 20:23:45.700939 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7bb596d4b9-gbtbf"] Jan 05 20:23:46 crc kubenswrapper[4754]: I0105 20:23:46.047468 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7bb596d4b9-gbtbf" event={"ID":"6836e11d-3e01-4752-ba84-0ba74829283f","Type":"ContainerStarted","Data":"092251db7970d88a7fbc3484c1a3cb32610d283709b1d8b1254c5ca03d36a87a"} Jan 05 20:23:48 crc kubenswrapper[4754]: I0105 20:23:48.108795 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 20:23:48 crc kubenswrapper[4754]: I0105 20:23:48.109129 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 20:23:51 crc kubenswrapper[4754]: I0105 20:23:51.108172 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7bb596d4b9-gbtbf" 
event={"ID":"6836e11d-3e01-4752-ba84-0ba74829283f","Type":"ContainerStarted","Data":"4babf78291ed020b6cef72a90c1fc8aa540735309e1447e38eaf31727c9173da"} Jan 05 20:23:51 crc kubenswrapper[4754]: I0105 20:23:51.108636 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-7bb596d4b9-gbtbf" Jan 05 20:23:51 crc kubenswrapper[4754]: I0105 20:23:51.143252 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-7bb596d4b9-gbtbf" podStartSLOduration=2.760347865 podStartE2EDuration="7.143227641s" podCreationTimestamp="2026-01-05 20:23:44 +0000 UTC" firstStartedPulling="2026-01-05 20:23:45.723176923 +0000 UTC m=+1112.432360797" lastFinishedPulling="2026-01-05 20:23:50.106056699 +0000 UTC m=+1116.815240573" observedRunningTime="2026-01-05 20:23:51.139810272 +0000 UTC m=+1117.848994146" watchObservedRunningTime="2026-01-05 20:23:51.143227641 +0000 UTC m=+1117.852411535" Jan 05 20:23:55 crc kubenswrapper[4754]: I0105 20:23:55.147105 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-7bb596d4b9-gbtbf" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.369382 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-f6f74d6db-8t5gl"] Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.370851 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-8t5gl" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.372979 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-jjwgz" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.375770 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-78979fc445-h2wmj"] Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.379774 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-h2wmj" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.382077 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-xxkf4" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.391003 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-f6f74d6db-8t5gl"] Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.397944 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-66f8b87655-dqks7"] Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.399131 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-dqks7" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.404503 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-78979fc445-h2wmj"] Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.404787 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-kz2cr" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.415795 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bqjw\" (UniqueName: \"kubernetes.io/projected/2aeeabff-cc4c-49b1-a895-c21ae9d43e3d-kube-api-access-2bqjw\") pod \"cinder-operator-controller-manager-78979fc445-h2wmj\" (UID: \"2aeeabff-cc4c-49b1-a895-c21ae9d43e3d\") " pod="openstack-operators/cinder-operator-controller-manager-78979fc445-h2wmj" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.415885 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6wvr\" (UniqueName: \"kubernetes.io/projected/f289c3c4-ad02-4022-ac22-239133f6c1ca-kube-api-access-p6wvr\") pod \"designate-operator-controller-manager-66f8b87655-dqks7\" (UID: \"f289c3c4-ad02-4022-ac22-239133f6c1ca\") " pod="openstack-operators/designate-operator-controller-manager-66f8b87655-dqks7" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.415930 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nnlw\" (UniqueName: \"kubernetes.io/projected/83a7e6b7-db24-4f2f-988d-ed13a27a06af-kube-api-access-6nnlw\") pod \"barbican-operator-controller-manager-f6f74d6db-8t5gl\" (UID: \"83a7e6b7-db24-4f2f-988d-ed13a27a06af\") " pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-8t5gl" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 
20:24:15.416565 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66f8b87655-dqks7"] Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.437571 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-7b549fc966-lrhk6"] Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.438678 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-lrhk6" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.444648 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-ww5v2" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.478502 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-658dd65b86-7b945"] Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.479980 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-7b945" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.485561 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-hvwsj" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.494388 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-7b549fc966-lrhk6"] Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.517891 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bqxn\" (UniqueName: \"kubernetes.io/projected/f1a3a024-3293-4e7b-b1cd-c93c914c190e-kube-api-access-4bqxn\") pod \"heat-operator-controller-manager-658dd65b86-7b945\" (UID: \"f1a3a024-3293-4e7b-b1cd-c93c914c190e\") " pod="openstack-operators/heat-operator-controller-manager-658dd65b86-7b945" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.517981 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bqjw\" (UniqueName: \"kubernetes.io/projected/2aeeabff-cc4c-49b1-a895-c21ae9d43e3d-kube-api-access-2bqjw\") pod \"cinder-operator-controller-manager-78979fc445-h2wmj\" (UID: \"2aeeabff-cc4c-49b1-a895-c21ae9d43e3d\") " pod="openstack-operators/cinder-operator-controller-manager-78979fc445-h2wmj" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.518003 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zs67\" (UniqueName: \"kubernetes.io/projected/92928a21-7bbe-44b9-9d2b-2fcce8d0dd1f-kube-api-access-4zs67\") pod \"glance-operator-controller-manager-7b549fc966-lrhk6\" (UID: \"92928a21-7bbe-44b9-9d2b-2fcce8d0dd1f\") " pod="openstack-operators/glance-operator-controller-manager-7b549fc966-lrhk6" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.518044 4754 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6wvr\" (UniqueName: \"kubernetes.io/projected/f289c3c4-ad02-4022-ac22-239133f6c1ca-kube-api-access-p6wvr\") pod \"designate-operator-controller-manager-66f8b87655-dqks7\" (UID: \"f289c3c4-ad02-4022-ac22-239133f6c1ca\") " pod="openstack-operators/designate-operator-controller-manager-66f8b87655-dqks7" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.518071 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nnlw\" (UniqueName: \"kubernetes.io/projected/83a7e6b7-db24-4f2f-988d-ed13a27a06af-kube-api-access-6nnlw\") pod \"barbican-operator-controller-manager-f6f74d6db-8t5gl\" (UID: \"83a7e6b7-db24-4f2f-988d-ed13a27a06af\") " pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-8t5gl" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.518527 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-658dd65b86-7b945"] Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.528781 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-6lf69"] Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.529826 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-6lf69" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.534514 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-6s9cb" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.543411 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-6d99759cf-2g75d"] Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.544342 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-2g75d" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.549588 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.554855 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-2c99f" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.576283 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-6d99759cf-2g75d"] Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.619864 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bqxn\" (UniqueName: \"kubernetes.io/projected/f1a3a024-3293-4e7b-b1cd-c93c914c190e-kube-api-access-4bqxn\") pod \"heat-operator-controller-manager-658dd65b86-7b945\" (UID: \"f1a3a024-3293-4e7b-b1cd-c93c914c190e\") " pod="openstack-operators/heat-operator-controller-manager-658dd65b86-7b945" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.619932 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/736d23ce-6bc0-439b-b1ff-86aad6363c2a-cert\") pod \"infra-operator-controller-manager-6d99759cf-2g75d\" (UID: \"736d23ce-6bc0-439b-b1ff-86aad6363c2a\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-2g75d" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.619997 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zs67\" (UniqueName: \"kubernetes.io/projected/92928a21-7bbe-44b9-9d2b-2fcce8d0dd1f-kube-api-access-4zs67\") pod \"glance-operator-controller-manager-7b549fc966-lrhk6\" (UID: \"92928a21-7bbe-44b9-9d2b-2fcce8d0dd1f\") " 
pod="openstack-operators/glance-operator-controller-manager-7b549fc966-lrhk6" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.620025 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2pzt\" (UniqueName: \"kubernetes.io/projected/1f664632-a6e1-491d-b0cf-be1717a6d28b-kube-api-access-b2pzt\") pod \"horizon-operator-controller-manager-7f5ddd8d7b-6lf69\" (UID: \"1f664632-a6e1-491d-b0cf-be1717a6d28b\") " pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-6lf69" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.620076 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfqnw\" (UniqueName: \"kubernetes.io/projected/736d23ce-6bc0-439b-b1ff-86aad6363c2a-kube-api-access-mfqnw\") pod \"infra-operator-controller-manager-6d99759cf-2g75d\" (UID: \"736d23ce-6bc0-439b-b1ff-86aad6363c2a\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-2g75d" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.707485 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nnlw\" (UniqueName: \"kubernetes.io/projected/83a7e6b7-db24-4f2f-988d-ed13a27a06af-kube-api-access-6nnlw\") pod \"barbican-operator-controller-manager-f6f74d6db-8t5gl\" (UID: \"83a7e6b7-db24-4f2f-988d-ed13a27a06af\") " pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-8t5gl" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.721723 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2pzt\" (UniqueName: \"kubernetes.io/projected/1f664632-a6e1-491d-b0cf-be1717a6d28b-kube-api-access-b2pzt\") pod \"horizon-operator-controller-manager-7f5ddd8d7b-6lf69\" (UID: \"1f664632-a6e1-491d-b0cf-be1717a6d28b\") " pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-6lf69" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 
20:24:15.721813 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfqnw\" (UniqueName: \"kubernetes.io/projected/736d23ce-6bc0-439b-b1ff-86aad6363c2a-kube-api-access-mfqnw\") pod \"infra-operator-controller-manager-6d99759cf-2g75d\" (UID: \"736d23ce-6bc0-439b-b1ff-86aad6363c2a\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-2g75d" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.721917 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/736d23ce-6bc0-439b-b1ff-86aad6363c2a-cert\") pod \"infra-operator-controller-manager-6d99759cf-2g75d\" (UID: \"736d23ce-6bc0-439b-b1ff-86aad6363c2a\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-2g75d" Jan 05 20:24:15 crc kubenswrapper[4754]: E0105 20:24:15.755440 4754 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 05 20:24:15 crc kubenswrapper[4754]: E0105 20:24:15.755572 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/736d23ce-6bc0-439b-b1ff-86aad6363c2a-cert podName:736d23ce-6bc0-439b-b1ff-86aad6363c2a nodeName:}" failed. No retries permitted until 2026-01-05 20:24:16.255539489 +0000 UTC m=+1142.964723363 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/736d23ce-6bc0-439b-b1ff-86aad6363c2a-cert") pod "infra-operator-controller-manager-6d99759cf-2g75d" (UID: "736d23ce-6bc0-439b-b1ff-86aad6363c2a") : secret "infra-operator-webhook-server-cert" not found Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.777225 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-6lf69"] Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.777265 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-f99f54bc8-brnwf"] Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.778683 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-f99f54bc8-brnwf"] Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.778703 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-568985c78-st6w9"] Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.789695 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-brnwf" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.797989 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bqjw\" (UniqueName: \"kubernetes.io/projected/2aeeabff-cc4c-49b1-a895-c21ae9d43e3d-kube-api-access-2bqjw\") pod \"cinder-operator-controller-manager-78979fc445-h2wmj\" (UID: \"2aeeabff-cc4c-49b1-a895-c21ae9d43e3d\") " pod="openstack-operators/cinder-operator-controller-manager-78979fc445-h2wmj" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.798342 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-fwn5b" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.799872 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bqxn\" (UniqueName: \"kubernetes.io/projected/f1a3a024-3293-4e7b-b1cd-c93c914c190e-kube-api-access-4bqxn\") pod \"heat-operator-controller-manager-658dd65b86-7b945\" (UID: \"f1a3a024-3293-4e7b-b1cd-c93c914c190e\") " pod="openstack-operators/heat-operator-controller-manager-658dd65b86-7b945" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.800914 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-568985c78-st6w9"] Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.800942 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7cd87b778f-swt4w"] Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.803157 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-568985c78-st6w9" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.803403 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b88bfc995-gj5xk"] Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.803592 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-swt4w" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.807362 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-598945d5b8-hdh6j"] Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.807550 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-gj5xk" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.812017 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-7b5kg" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.812262 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-xz8r9" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.816408 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-b6kdm" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.819579 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b88bfc995-gj5xk"] Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.819710 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-hdh6j" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.819967 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7cd87b778f-swt4w"] Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.823560 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6wvr\" (UniqueName: \"kubernetes.io/projected/f289c3c4-ad02-4022-ac22-239133f6c1ca-kube-api-access-p6wvr\") pod \"designate-operator-controller-manager-66f8b87655-dqks7\" (UID: \"f289c3c4-ad02-4022-ac22-239133f6c1ca\") " pod="openstack-operators/designate-operator-controller-manager-66f8b87655-dqks7" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.825962 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zs67\" (UniqueName: \"kubernetes.io/projected/92928a21-7bbe-44b9-9d2b-2fcce8d0dd1f-kube-api-access-4zs67\") pod \"glance-operator-controller-manager-7b549fc966-lrhk6\" (UID: \"92928a21-7bbe-44b9-9d2b-2fcce8d0dd1f\") " pod="openstack-operators/glance-operator-controller-manager-7b549fc966-lrhk6" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.845040 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-fqwv4" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.846996 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2pzt\" (UniqueName: \"kubernetes.io/projected/1f664632-a6e1-491d-b0cf-be1717a6d28b-kube-api-access-b2pzt\") pod \"horizon-operator-controller-manager-7f5ddd8d7b-6lf69\" (UID: \"1f664632-a6e1-491d-b0cf-be1717a6d28b\") " pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-6lf69" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.847533 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-mfqnw\" (UniqueName: \"kubernetes.io/projected/736d23ce-6bc0-439b-b1ff-86aad6363c2a-kube-api-access-mfqnw\") pod \"infra-operator-controller-manager-6d99759cf-2g75d\" (UID: \"736d23ce-6bc0-439b-b1ff-86aad6363c2a\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-2g75d" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.861533 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-598945d5b8-hdh6j"] Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.889170 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-h56wh"] Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.891202 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-h56wh" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.897694 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-68c649d9d-69j2s"] Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.899218 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-69j2s" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.899422 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-hb8t7" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.902123 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-7fg9j" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.904593 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-h56wh"] Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.912338 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd72b9pp"] Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.913617 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd72b9pp" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.915438 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.917601 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-b2fq8" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.924667 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bf6d4f946-jngdn"] Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.925720 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-jngdn" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.927462 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-nwjw4" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.938854 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-68c649d9d-69j2s"] Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.956324 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-9b6f8f78c-jcntk"] Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.957491 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pff8c\" (UniqueName: \"kubernetes.io/projected/8df02427-4d10-41bb-9798-82cf7b8bca3e-kube-api-access-pff8c\") pod \"mariadb-operator-controller-manager-7b88bfc995-gj5xk\" (UID: \"8df02427-4d10-41bb-9798-82cf7b8bca3e\") " pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-gj5xk" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.957550 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwwkd\" (UniqueName: \"kubernetes.io/projected/91877573-8199-4055-988f-96bd6469af4f-kube-api-access-zwwkd\") pod \"keystone-operator-controller-manager-568985c78-st6w9\" (UID: \"91877573-8199-4055-988f-96bd6469af4f\") " pod="openstack-operators/keystone-operator-controller-manager-568985c78-st6w9" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.957601 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrb2t\" (UniqueName: \"kubernetes.io/projected/dd93e799-6591-41d5-988a-18cc6d8c836d-kube-api-access-lrb2t\") pod 
\"ironic-operator-controller-manager-f99f54bc8-brnwf\" (UID: \"dd93e799-6591-41d5-988a-18cc6d8c836d\") " pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-brnwf" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.957632 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvt6r\" (UniqueName: \"kubernetes.io/projected/83cc207a-0725-4775-b2f7-93c71985ba1e-kube-api-access-wvt6r\") pod \"manila-operator-controller-manager-598945d5b8-hdh6j\" (UID: \"83cc207a-0725-4775-b2f7-93c71985ba1e\") " pod="openstack-operators/manila-operator-controller-manager-598945d5b8-hdh6j" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.957718 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwm2z\" (UniqueName: \"kubernetes.io/projected/0cd346d8-d14a-404e-b2fa-16fc917e6886-kube-api-access-pwm2z\") pod \"neutron-operator-controller-manager-7cd87b778f-swt4w\" (UID: \"0cd346d8-d14a-404e-b2fa-16fc917e6886\") " pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-swt4w" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.957733 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-jcntk" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.960367 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-zrfhp" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.964942 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-bb586bbf4-hmwzd"] Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.966104 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-hmwzd" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.969375 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-ffxfc" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.973343 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bf6d4f946-jngdn"] Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.981872 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-h2wmj" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.985689 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-8t5gl" Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.990153 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd72b9pp"] Jan 05 20:24:15 crc kubenswrapper[4754]: I0105 20:24:15.999800 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-688488f44f-62gsk"] Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.001044 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-688488f44f-62gsk" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.006374 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-vl2wd" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.012430 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-688488f44f-62gsk"] Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.022062 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-9b6f8f78c-jcntk"] Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.029782 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-bb586bbf4-hmwzd"] Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.055255 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-6c866cfdcb-944qj"] Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.062473 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-944qj" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.063061 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-6lf69" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.066450 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-2dqsv" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.076261 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pff8c\" (UniqueName: \"kubernetes.io/projected/8df02427-4d10-41bb-9798-82cf7b8bca3e-kube-api-access-pff8c\") pod \"mariadb-operator-controller-manager-7b88bfc995-gj5xk\" (UID: \"8df02427-4d10-41bb-9798-82cf7b8bca3e\") " pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-gj5xk" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.076372 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fed06176-d7ad-4373-84df-204b6fdbf5cf-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd72b9pp\" (UID: \"fed06176-d7ad-4373-84df-204b6fdbf5cf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd72b9pp" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.076394 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8svl9\" (UniqueName: \"kubernetes.io/projected/db5f9ab8-2422-439c-a857-23f918cfa919-kube-api-access-8svl9\") pod \"ovn-operator-controller-manager-bf6d4f946-jngdn\" (UID: \"db5f9ab8-2422-439c-a857-23f918cfa919\") " pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-jngdn" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.076415 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c457q\" (UniqueName: \"kubernetes.io/projected/82f028d6-51a7-461a-ae7d-cd2da5f47afb-kube-api-access-c457q\") pod 
\"placement-operator-controller-manager-9b6f8f78c-jcntk\" (UID: \"82f028d6-51a7-461a-ae7d-cd2da5f47afb\") " pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-jcntk" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.076450 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwwkd\" (UniqueName: \"kubernetes.io/projected/91877573-8199-4055-988f-96bd6469af4f-kube-api-access-zwwkd\") pod \"keystone-operator-controller-manager-568985c78-st6w9\" (UID: \"91877573-8199-4055-988f-96bd6469af4f\") " pod="openstack-operators/keystone-operator-controller-manager-568985c78-st6w9" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.076527 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrb2t\" (UniqueName: \"kubernetes.io/projected/dd93e799-6591-41d5-988a-18cc6d8c836d-kube-api-access-lrb2t\") pod \"ironic-operator-controller-manager-f99f54bc8-brnwf\" (UID: \"dd93e799-6591-41d5-988a-18cc6d8c836d\") " pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-brnwf" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.076545 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntlws\" (UniqueName: \"kubernetes.io/projected/fed06176-d7ad-4373-84df-204b6fdbf5cf-kube-api-access-ntlws\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd72b9pp\" (UID: \"fed06176-d7ad-4373-84df-204b6fdbf5cf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd72b9pp" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.076604 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvt6r\" (UniqueName: \"kubernetes.io/projected/83cc207a-0725-4775-b2f7-93c71985ba1e-kube-api-access-wvt6r\") pod \"manila-operator-controller-manager-598945d5b8-hdh6j\" (UID: \"83cc207a-0725-4775-b2f7-93c71985ba1e\") " 
pod="openstack-operators/manila-operator-controller-manager-598945d5b8-hdh6j" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.076655 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7brs\" (UniqueName: \"kubernetes.io/projected/983e4f4a-fe90-4460-ad97-b6955a888933-kube-api-access-s7brs\") pod \"octavia-operator-controller-manager-68c649d9d-69j2s\" (UID: \"983e4f4a-fe90-4460-ad97-b6955a888933\") " pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-69j2s" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.076731 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l984\" (UniqueName: \"kubernetes.io/projected/6d71c5c9-f75a-475f-880c-d234d43ad7d9-kube-api-access-9l984\") pod \"nova-operator-controller-manager-5fbbf8b6cc-h56wh\" (UID: \"6d71c5c9-f75a-475f-880c-d234d43ad7d9\") " pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-h56wh" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.076798 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwm2z\" (UniqueName: \"kubernetes.io/projected/0cd346d8-d14a-404e-b2fa-16fc917e6886-kube-api-access-pwm2z\") pod \"neutron-operator-controller-manager-7cd87b778f-swt4w\" (UID: \"0cd346d8-d14a-404e-b2fa-16fc917e6886\") " pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-swt4w" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.096215 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-7b945" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.108524 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwm2z\" (UniqueName: \"kubernetes.io/projected/0cd346d8-d14a-404e-b2fa-16fc917e6886-kube-api-access-pwm2z\") pod \"neutron-operator-controller-manager-7cd87b778f-swt4w\" (UID: \"0cd346d8-d14a-404e-b2fa-16fc917e6886\") " pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-swt4w" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.110214 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-dqks7" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.110426 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrb2t\" (UniqueName: \"kubernetes.io/projected/dd93e799-6591-41d5-988a-18cc6d8c836d-kube-api-access-lrb2t\") pod \"ironic-operator-controller-manager-f99f54bc8-brnwf\" (UID: \"dd93e799-6591-41d5-988a-18cc6d8c836d\") " pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-brnwf" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.111554 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwwkd\" (UniqueName: \"kubernetes.io/projected/91877573-8199-4055-988f-96bd6469af4f-kube-api-access-zwwkd\") pod \"keystone-operator-controller-manager-568985c78-st6w9\" (UID: \"91877573-8199-4055-988f-96bd6469af4f\") " pod="openstack-operators/keystone-operator-controller-manager-568985c78-st6w9" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.112331 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pff8c\" (UniqueName: \"kubernetes.io/projected/8df02427-4d10-41bb-9798-82cf7b8bca3e-kube-api-access-pff8c\") pod \"mariadb-operator-controller-manager-7b88bfc995-gj5xk\" 
(UID: \"8df02427-4d10-41bb-9798-82cf7b8bca3e\") " pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-gj5xk" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.116703 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvt6r\" (UniqueName: \"kubernetes.io/projected/83cc207a-0725-4775-b2f7-93c71985ba1e-kube-api-access-wvt6r\") pod \"manila-operator-controller-manager-598945d5b8-hdh6j\" (UID: \"83cc207a-0725-4775-b2f7-93c71985ba1e\") " pod="openstack-operators/manila-operator-controller-manager-598945d5b8-hdh6j" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.124113 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-lrhk6" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.126490 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-9dbdf6486-s4n44"] Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.127758 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-s4n44" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.130152 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-gjckr" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.180697 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fed06176-d7ad-4373-84df-204b6fdbf5cf-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd72b9pp\" (UID: \"fed06176-d7ad-4373-84df-204b6fdbf5cf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd72b9pp" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.180738 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8svl9\" (UniqueName: \"kubernetes.io/projected/db5f9ab8-2422-439c-a857-23f918cfa919-kube-api-access-8svl9\") pod \"ovn-operator-controller-manager-bf6d4f946-jngdn\" (UID: \"db5f9ab8-2422-439c-a857-23f918cfa919\") " pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-jngdn" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.180761 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwq9q\" (UniqueName: \"kubernetes.io/projected/4d09717a-7822-46ae-8192-62aa7305304b-kube-api-access-xwq9q\") pod \"test-operator-controller-manager-6c866cfdcb-944qj\" (UID: \"4d09717a-7822-46ae-8192-62aa7305304b\") " pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-944qj" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.180782 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c457q\" (UniqueName: \"kubernetes.io/projected/82f028d6-51a7-461a-ae7d-cd2da5f47afb-kube-api-access-c457q\") pod 
\"placement-operator-controller-manager-9b6f8f78c-jcntk\" (UID: \"82f028d6-51a7-461a-ae7d-cd2da5f47afb\") " pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-jcntk" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.180838 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-648vv\" (UniqueName: \"kubernetes.io/projected/77f4456d-e6a6-466a-a74c-5276e4951784-kube-api-access-648vv\") pod \"watcher-operator-controller-manager-9dbdf6486-s4n44\" (UID: \"77f4456d-e6a6-466a-a74c-5276e4951784\") " pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-s4n44" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.180869 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntlws\" (UniqueName: \"kubernetes.io/projected/fed06176-d7ad-4373-84df-204b6fdbf5cf-kube-api-access-ntlws\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd72b9pp\" (UID: \"fed06176-d7ad-4373-84df-204b6fdbf5cf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd72b9pp" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.180908 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn7t4\" (UniqueName: \"kubernetes.io/projected/a6907240-bf3c-4f9a-b7c8-4fbdf3174c8a-kube-api-access-vn7t4\") pod \"swift-operator-controller-manager-bb586bbf4-hmwzd\" (UID: \"a6907240-bf3c-4f9a-b7c8-4fbdf3174c8a\") " pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-hmwzd" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.180929 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7brs\" (UniqueName: \"kubernetes.io/projected/983e4f4a-fe90-4460-ad97-b6955a888933-kube-api-access-s7brs\") pod \"octavia-operator-controller-manager-68c649d9d-69j2s\" (UID: 
\"983e4f4a-fe90-4460-ad97-b6955a888933\") " pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-69j2s" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.180948 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktz4b\" (UniqueName: \"kubernetes.io/projected/4b33baa5-64bb-4df7-ac22-925d718f9d60-kube-api-access-ktz4b\") pod \"telemetry-operator-controller-manager-688488f44f-62gsk\" (UID: \"4b33baa5-64bb-4df7-ac22-925d718f9d60\") " pod="openstack-operators/telemetry-operator-controller-manager-688488f44f-62gsk" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.180985 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9l984\" (UniqueName: \"kubernetes.io/projected/6d71c5c9-f75a-475f-880c-d234d43ad7d9-kube-api-access-9l984\") pod \"nova-operator-controller-manager-5fbbf8b6cc-h56wh\" (UID: \"6d71c5c9-f75a-475f-880c-d234d43ad7d9\") " pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-h56wh" Jan 05 20:24:16 crc kubenswrapper[4754]: E0105 20:24:16.182221 4754 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 05 20:24:16 crc kubenswrapper[4754]: E0105 20:24:16.182267 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fed06176-d7ad-4373-84df-204b6fdbf5cf-cert podName:fed06176-d7ad-4373-84df-204b6fdbf5cf nodeName:}" failed. No retries permitted until 2026-01-05 20:24:16.68225345 +0000 UTC m=+1143.391437324 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fed06176-d7ad-4373-84df-204b6fdbf5cf-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd72b9pp" (UID: "fed06176-d7ad-4373-84df-204b6fdbf5cf") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.199401 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9l984\" (UniqueName: \"kubernetes.io/projected/6d71c5c9-f75a-475f-880c-d234d43ad7d9-kube-api-access-9l984\") pod \"nova-operator-controller-manager-5fbbf8b6cc-h56wh\" (UID: \"6d71c5c9-f75a-475f-880c-d234d43ad7d9\") " pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-h56wh" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.199735 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8svl9\" (UniqueName: \"kubernetes.io/projected/db5f9ab8-2422-439c-a857-23f918cfa919-kube-api-access-8svl9\") pod \"ovn-operator-controller-manager-bf6d4f946-jngdn\" (UID: \"db5f9ab8-2422-439c-a857-23f918cfa919\") " pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-jngdn" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.200095 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-brnwf" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.200706 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-74f9c55c9c-f9rnv"] Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.201836 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-74f9c55c9c-f9rnv" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.209355 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.209544 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.209730 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-mdldp" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.214123 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7brs\" (UniqueName: \"kubernetes.io/projected/983e4f4a-fe90-4460-ad97-b6955a888933-kube-api-access-s7brs\") pod \"octavia-operator-controller-manager-68c649d9d-69j2s\" (UID: \"983e4f4a-fe90-4460-ad97-b6955a888933\") " pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-69j2s" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.214240 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntlws\" (UniqueName: \"kubernetes.io/projected/fed06176-d7ad-4373-84df-204b6fdbf5cf-kube-api-access-ntlws\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd72b9pp\" (UID: \"fed06176-d7ad-4373-84df-204b6fdbf5cf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd72b9pp" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.215931 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-swt4w" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.218286 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vfsjd"] Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.238956 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c457q\" (UniqueName: \"kubernetes.io/projected/82f028d6-51a7-461a-ae7d-cd2da5f47afb-kube-api-access-c457q\") pod \"placement-operator-controller-manager-9b6f8f78c-jcntk\" (UID: \"82f028d6-51a7-461a-ae7d-cd2da5f47afb\") " pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-jcntk" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.240363 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-69j2s" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.240668 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vfsjd" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.243722 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-rpdqk" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.248853 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-568985c78-st6w9" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.259005 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-gj5xk" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.276454 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-hdh6j" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.297953 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-h56wh" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.300833 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktz4b\" (UniqueName: \"kubernetes.io/projected/4b33baa5-64bb-4df7-ac22-925d718f9d60-kube-api-access-ktz4b\") pod \"telemetry-operator-controller-manager-688488f44f-62gsk\" (UID: \"4b33baa5-64bb-4df7-ac22-925d718f9d60\") " pod="openstack-operators/telemetry-operator-controller-manager-688488f44f-62gsk" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.300981 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a078215d-9fb5-413f-b542-ca5b3c6fb296-metrics-certs\") pod \"openstack-operator-controller-manager-74f9c55c9c-f9rnv\" (UID: \"a078215d-9fb5-413f-b542-ca5b3c6fb296\") " pod="openstack-operators/openstack-operator-controller-manager-74f9c55c9c-f9rnv" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.301042 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwq9q\" (UniqueName: \"kubernetes.io/projected/4d09717a-7822-46ae-8192-62aa7305304b-kube-api-access-xwq9q\") pod \"test-operator-controller-manager-6c866cfdcb-944qj\" (UID: \"4d09717a-7822-46ae-8192-62aa7305304b\") " pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-944qj" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.301125 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-648vv\" (UniqueName: \"kubernetes.io/projected/77f4456d-e6a6-466a-a74c-5276e4951784-kube-api-access-648vv\") pod 
\"watcher-operator-controller-manager-9dbdf6486-s4n44\" (UID: \"77f4456d-e6a6-466a-a74c-5276e4951784\") " pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-s4n44" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.301164 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlk6c\" (UniqueName: \"kubernetes.io/projected/a078215d-9fb5-413f-b542-ca5b3c6fb296-kube-api-access-dlk6c\") pod \"openstack-operator-controller-manager-74f9c55c9c-f9rnv\" (UID: \"a078215d-9fb5-413f-b542-ca5b3c6fb296\") " pod="openstack-operators/openstack-operator-controller-manager-74f9c55c9c-f9rnv" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.301196 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/736d23ce-6bc0-439b-b1ff-86aad6363c2a-cert\") pod \"infra-operator-controller-manager-6d99759cf-2g75d\" (UID: \"736d23ce-6bc0-439b-b1ff-86aad6363c2a\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-2g75d" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.301266 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a078215d-9fb5-413f-b542-ca5b3c6fb296-webhook-certs\") pod \"openstack-operator-controller-manager-74f9c55c9c-f9rnv\" (UID: \"a078215d-9fb5-413f-b542-ca5b3c6fb296\") " pod="openstack-operators/openstack-operator-controller-manager-74f9c55c9c-f9rnv" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.301318 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn7t4\" (UniqueName: \"kubernetes.io/projected/a6907240-bf3c-4f9a-b7c8-4fbdf3174c8a-kube-api-access-vn7t4\") pod \"swift-operator-controller-manager-bb586bbf4-hmwzd\" (UID: \"a6907240-bf3c-4f9a-b7c8-4fbdf3174c8a\") " 
pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-hmwzd" Jan 05 20:24:16 crc kubenswrapper[4754]: E0105 20:24:16.302116 4754 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 05 20:24:16 crc kubenswrapper[4754]: E0105 20:24:16.302188 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/736d23ce-6bc0-439b-b1ff-86aad6363c2a-cert podName:736d23ce-6bc0-439b-b1ff-86aad6363c2a nodeName:}" failed. No retries permitted until 2026-01-05 20:24:17.302157409 +0000 UTC m=+1144.011341283 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/736d23ce-6bc0-439b-b1ff-86aad6363c2a-cert") pod "infra-operator-controller-manager-6d99759cf-2g75d" (UID: "736d23ce-6bc0-439b-b1ff-86aad6363c2a") : secret "infra-operator-webhook-server-cert" not found Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.326050 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn7t4\" (UniqueName: \"kubernetes.io/projected/a6907240-bf3c-4f9a-b7c8-4fbdf3174c8a-kube-api-access-vn7t4\") pod \"swift-operator-controller-manager-bb586bbf4-hmwzd\" (UID: \"a6907240-bf3c-4f9a-b7c8-4fbdf3174c8a\") " pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-hmwzd" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.335088 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwq9q\" (UniqueName: \"kubernetes.io/projected/4d09717a-7822-46ae-8192-62aa7305304b-kube-api-access-xwq9q\") pod \"test-operator-controller-manager-6c866cfdcb-944qj\" (UID: \"4d09717a-7822-46ae-8192-62aa7305304b\") " pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-944qj" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.354648 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-jngdn" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.360342 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktz4b\" (UniqueName: \"kubernetes.io/projected/4b33baa5-64bb-4df7-ac22-925d718f9d60-kube-api-access-ktz4b\") pod \"telemetry-operator-controller-manager-688488f44f-62gsk\" (UID: \"4b33baa5-64bb-4df7-ac22-925d718f9d60\") " pod="openstack-operators/telemetry-operator-controller-manager-688488f44f-62gsk" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.368833 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-jcntk" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.371216 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-648vv\" (UniqueName: \"kubernetes.io/projected/77f4456d-e6a6-466a-a74c-5276e4951784-kube-api-access-648vv\") pod \"watcher-operator-controller-manager-9dbdf6486-s4n44\" (UID: \"77f4456d-e6a6-466a-a74c-5276e4951784\") " pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-s4n44" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.382264 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-9dbdf6486-s4n44"] Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.391648 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-6c866cfdcb-944qj"] Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.397630 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-hmwzd" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.399841 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vfsjd"] Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.403062 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwghl\" (UniqueName: \"kubernetes.io/projected/29d3a96c-7dee-4a63-945c-3fef7cdcc7e7-kube-api-access-jwghl\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vfsjd\" (UID: \"29d3a96c-7dee-4a63-945c-3fef7cdcc7e7\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vfsjd" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.403110 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlk6c\" (UniqueName: \"kubernetes.io/projected/a078215d-9fb5-413f-b542-ca5b3c6fb296-kube-api-access-dlk6c\") pod \"openstack-operator-controller-manager-74f9c55c9c-f9rnv\" (UID: \"a078215d-9fb5-413f-b542-ca5b3c6fb296\") " pod="openstack-operators/openstack-operator-controller-manager-74f9c55c9c-f9rnv" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.403156 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a078215d-9fb5-413f-b542-ca5b3c6fb296-webhook-certs\") pod \"openstack-operator-controller-manager-74f9c55c9c-f9rnv\" (UID: \"a078215d-9fb5-413f-b542-ca5b3c6fb296\") " pod="openstack-operators/openstack-operator-controller-manager-74f9c55c9c-f9rnv" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.403242 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a078215d-9fb5-413f-b542-ca5b3c6fb296-metrics-certs\") pod \"openstack-operator-controller-manager-74f9c55c9c-f9rnv\" 
(UID: \"a078215d-9fb5-413f-b542-ca5b3c6fb296\") " pod="openstack-operators/openstack-operator-controller-manager-74f9c55c9c-f9rnv" Jan 05 20:24:16 crc kubenswrapper[4754]: E0105 20:24:16.403440 4754 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 05 20:24:16 crc kubenswrapper[4754]: E0105 20:24:16.403494 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a078215d-9fb5-413f-b542-ca5b3c6fb296-metrics-certs podName:a078215d-9fb5-413f-b542-ca5b3c6fb296 nodeName:}" failed. No retries permitted until 2026-01-05 20:24:16.90347219 +0000 UTC m=+1143.612656054 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a078215d-9fb5-413f-b542-ca5b3c6fb296-metrics-certs") pod "openstack-operator-controller-manager-74f9c55c9c-f9rnv" (UID: "a078215d-9fb5-413f-b542-ca5b3c6fb296") : secret "metrics-server-cert" not found Jan 05 20:24:16 crc kubenswrapper[4754]: E0105 20:24:16.403706 4754 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 05 20:24:16 crc kubenswrapper[4754]: E0105 20:24:16.403727 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a078215d-9fb5-413f-b542-ca5b3c6fb296-webhook-certs podName:a078215d-9fb5-413f-b542-ca5b3c6fb296 nodeName:}" failed. No retries permitted until 2026-01-05 20:24:16.903720867 +0000 UTC m=+1143.612904741 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a078215d-9fb5-413f-b542-ca5b3c6fb296-webhook-certs") pod "openstack-operator-controller-manager-74f9c55c9c-f9rnv" (UID: "a078215d-9fb5-413f-b542-ca5b3c6fb296") : secret "webhook-server-cert" not found Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.406587 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-74f9c55c9c-f9rnv"] Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.419758 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-688488f44f-62gsk" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.427680 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlk6c\" (UniqueName: \"kubernetes.io/projected/a078215d-9fb5-413f-b542-ca5b3c6fb296-kube-api-access-dlk6c\") pod \"openstack-operator-controller-manager-74f9c55c9c-f9rnv\" (UID: \"a078215d-9fb5-413f-b542-ca5b3c6fb296\") " pod="openstack-operators/openstack-operator-controller-manager-74f9c55c9c-f9rnv" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.439924 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-944qj" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.462631 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-s4n44" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.511271 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwghl\" (UniqueName: \"kubernetes.io/projected/29d3a96c-7dee-4a63-945c-3fef7cdcc7e7-kube-api-access-jwghl\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vfsjd\" (UID: \"29d3a96c-7dee-4a63-945c-3fef7cdcc7e7\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vfsjd" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.535826 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwghl\" (UniqueName: \"kubernetes.io/projected/29d3a96c-7dee-4a63-945c-3fef7cdcc7e7-kube-api-access-jwghl\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vfsjd\" (UID: \"29d3a96c-7dee-4a63-945c-3fef7cdcc7e7\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vfsjd" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.604044 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vfsjd" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.714278 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fed06176-d7ad-4373-84df-204b6fdbf5cf-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd72b9pp\" (UID: \"fed06176-d7ad-4373-84df-204b6fdbf5cf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd72b9pp" Jan 05 20:24:16 crc kubenswrapper[4754]: E0105 20:24:16.714471 4754 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 05 20:24:16 crc kubenswrapper[4754]: E0105 20:24:16.714529 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fed06176-d7ad-4373-84df-204b6fdbf5cf-cert podName:fed06176-d7ad-4373-84df-204b6fdbf5cf nodeName:}" failed. No retries permitted until 2026-01-05 20:24:17.714502523 +0000 UTC m=+1144.423686397 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fed06176-d7ad-4373-84df-204b6fdbf5cf-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd72b9pp" (UID: "fed06176-d7ad-4373-84df-204b6fdbf5cf") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.917712 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a078215d-9fb5-413f-b542-ca5b3c6fb296-metrics-certs\") pod \"openstack-operator-controller-manager-74f9c55c9c-f9rnv\" (UID: \"a078215d-9fb5-413f-b542-ca5b3c6fb296\") " pod="openstack-operators/openstack-operator-controller-manager-74f9c55c9c-f9rnv" Jan 05 20:24:16 crc kubenswrapper[4754]: I0105 20:24:16.918380 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a078215d-9fb5-413f-b542-ca5b3c6fb296-webhook-certs\") pod \"openstack-operator-controller-manager-74f9c55c9c-f9rnv\" (UID: \"a078215d-9fb5-413f-b542-ca5b3c6fb296\") " pod="openstack-operators/openstack-operator-controller-manager-74f9c55c9c-f9rnv" Jan 05 20:24:16 crc kubenswrapper[4754]: E0105 20:24:16.918258 4754 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 05 20:24:16 crc kubenswrapper[4754]: E0105 20:24:16.918585 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a078215d-9fb5-413f-b542-ca5b3c6fb296-metrics-certs podName:a078215d-9fb5-413f-b542-ca5b3c6fb296 nodeName:}" failed. No retries permitted until 2026-01-05 20:24:17.918569305 +0000 UTC m=+1144.627753179 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a078215d-9fb5-413f-b542-ca5b3c6fb296-metrics-certs") pod "openstack-operator-controller-manager-74f9c55c9c-f9rnv" (UID: "a078215d-9fb5-413f-b542-ca5b3c6fb296") : secret "metrics-server-cert" not found Jan 05 20:24:16 crc kubenswrapper[4754]: E0105 20:24:16.918922 4754 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 05 20:24:16 crc kubenswrapper[4754]: E0105 20:24:16.918950 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a078215d-9fb5-413f-b542-ca5b3c6fb296-webhook-certs podName:a078215d-9fb5-413f-b542-ca5b3c6fb296 nodeName:}" failed. No retries permitted until 2026-01-05 20:24:17.918941964 +0000 UTC m=+1144.628125838 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a078215d-9fb5-413f-b542-ca5b3c6fb296-webhook-certs") pod "openstack-operator-controller-manager-74f9c55c9c-f9rnv" (UID: "a078215d-9fb5-413f-b542-ca5b3c6fb296") : secret "webhook-server-cert" not found Jan 05 20:24:17 crc kubenswrapper[4754]: I0105 20:24:17.325144 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/736d23ce-6bc0-439b-b1ff-86aad6363c2a-cert\") pod \"infra-operator-controller-manager-6d99759cf-2g75d\" (UID: \"736d23ce-6bc0-439b-b1ff-86aad6363c2a\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-2g75d" Jan 05 20:24:17 crc kubenswrapper[4754]: E0105 20:24:17.325468 4754 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 05 20:24:17 crc kubenswrapper[4754]: E0105 20:24:17.325648 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/736d23ce-6bc0-439b-b1ff-86aad6363c2a-cert 
podName:736d23ce-6bc0-439b-b1ff-86aad6363c2a nodeName:}" failed. No retries permitted until 2026-01-05 20:24:19.32560711 +0000 UTC m=+1146.034791014 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/736d23ce-6bc0-439b-b1ff-86aad6363c2a-cert") pod "infra-operator-controller-manager-6d99759cf-2g75d" (UID: "736d23ce-6bc0-439b-b1ff-86aad6363c2a") : secret "infra-operator-webhook-server-cert" not found Jan 05 20:24:17 crc kubenswrapper[4754]: I0105 20:24:17.501664 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-78979fc445-h2wmj"] Jan 05 20:24:17 crc kubenswrapper[4754]: I0105 20:24:17.512351 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66f8b87655-dqks7"] Jan 05 20:24:17 crc kubenswrapper[4754]: I0105 20:24:17.518041 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-6lf69"] Jan 05 20:24:17 crc kubenswrapper[4754]: I0105 20:24:17.552304 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-f6f74d6db-8t5gl"] Jan 05 20:24:17 crc kubenswrapper[4754]: W0105 20:24:17.574539 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83a7e6b7_db24_4f2f_988d_ed13a27a06af.slice/crio-d161e32e5a2ef2084f132be7ab56ea8a28db8a74f0d6d1858aa4637970cb723a WatchSource:0}: Error finding container d161e32e5a2ef2084f132be7ab56ea8a28db8a74f0d6d1858aa4637970cb723a: Status 404 returned error can't find the container with id d161e32e5a2ef2084f132be7ab56ea8a28db8a74f0d6d1858aa4637970cb723a Jan 05 20:24:17 crc kubenswrapper[4754]: I0105 20:24:17.722315 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-f99f54bc8-brnwf"] Jan 05 
20:24:17 crc kubenswrapper[4754]: I0105 20:24:17.732207 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-658dd65b86-7b945"] Jan 05 20:24:17 crc kubenswrapper[4754]: I0105 20:24:17.734548 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fed06176-d7ad-4373-84df-204b6fdbf5cf-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd72b9pp\" (UID: \"fed06176-d7ad-4373-84df-204b6fdbf5cf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd72b9pp" Jan 05 20:24:17 crc kubenswrapper[4754]: E0105 20:24:17.734666 4754 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 05 20:24:17 crc kubenswrapper[4754]: E0105 20:24:17.734727 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fed06176-d7ad-4373-84df-204b6fdbf5cf-cert podName:fed06176-d7ad-4373-84df-204b6fdbf5cf nodeName:}" failed. No retries permitted until 2026-01-05 20:24:19.73470974 +0000 UTC m=+1146.443893614 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fed06176-d7ad-4373-84df-204b6fdbf5cf-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd72b9pp" (UID: "fed06176-d7ad-4373-84df-204b6fdbf5cf") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 05 20:24:17 crc kubenswrapper[4754]: I0105 20:24:17.938015 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a078215d-9fb5-413f-b542-ca5b3c6fb296-metrics-certs\") pod \"openstack-operator-controller-manager-74f9c55c9c-f9rnv\" (UID: \"a078215d-9fb5-413f-b542-ca5b3c6fb296\") " pod="openstack-operators/openstack-operator-controller-manager-74f9c55c9c-f9rnv" Jan 05 20:24:17 crc kubenswrapper[4754]: I0105 20:24:17.938124 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a078215d-9fb5-413f-b542-ca5b3c6fb296-webhook-certs\") pod \"openstack-operator-controller-manager-74f9c55c9c-f9rnv\" (UID: \"a078215d-9fb5-413f-b542-ca5b3c6fb296\") " pod="openstack-operators/openstack-operator-controller-manager-74f9c55c9c-f9rnv" Jan 05 20:24:17 crc kubenswrapper[4754]: E0105 20:24:17.938301 4754 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 05 20:24:17 crc kubenswrapper[4754]: E0105 20:24:17.938349 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a078215d-9fb5-413f-b542-ca5b3c6fb296-webhook-certs podName:a078215d-9fb5-413f-b542-ca5b3c6fb296 nodeName:}" failed. No retries permitted until 2026-01-05 20:24:19.938334321 +0000 UTC m=+1146.647518195 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a078215d-9fb5-413f-b542-ca5b3c6fb296-webhook-certs") pod "openstack-operator-controller-manager-74f9c55c9c-f9rnv" (UID: "a078215d-9fb5-413f-b542-ca5b3c6fb296") : secret "webhook-server-cert" not found Jan 05 20:24:17 crc kubenswrapper[4754]: E0105 20:24:17.938391 4754 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 05 20:24:17 crc kubenswrapper[4754]: E0105 20:24:17.938411 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a078215d-9fb5-413f-b542-ca5b3c6fb296-metrics-certs podName:a078215d-9fb5-413f-b542-ca5b3c6fb296 nodeName:}" failed. No retries permitted until 2026-01-05 20:24:19.938404562 +0000 UTC m=+1146.647588436 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a078215d-9fb5-413f-b542-ca5b3c6fb296-metrics-certs") pod "openstack-operator-controller-manager-74f9c55c9c-f9rnv" (UID: "a078215d-9fb5-413f-b542-ca5b3c6fb296") : secret "metrics-server-cert" not found Jan 05 20:24:18 crc kubenswrapper[4754]: I0105 20:24:18.110652 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 20:24:18 crc kubenswrapper[4754]: I0105 20:24:18.110721 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 20:24:18 crc kubenswrapper[4754]: I0105 20:24:18.110777 4754 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" Jan 05 20:24:18 crc kubenswrapper[4754]: I0105 20:24:18.391472 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-brnwf" event={"ID":"dd93e799-6591-41d5-988a-18cc6d8c836d","Type":"ContainerStarted","Data":"1298b5e05c11aeb8b09948eb4f0b15c2d583c56bf534f80e54c9f7c0ab79731a"} Jan 05 20:24:18 crc kubenswrapper[4754]: I0105 20:24:18.395770 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-8t5gl" event={"ID":"83a7e6b7-db24-4f2f-988d-ed13a27a06af","Type":"ContainerStarted","Data":"d161e32e5a2ef2084f132be7ab56ea8a28db8a74f0d6d1858aa4637970cb723a"} Jan 05 20:24:18 crc kubenswrapper[4754]: I0105 20:24:18.397826 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-h2wmj" event={"ID":"2aeeabff-cc4c-49b1-a895-c21ae9d43e3d","Type":"ContainerStarted","Data":"474409dfc3c918b7c93876d200db3de3c463ec2a6fef513d5fe106dcc83b3413"} Jan 05 20:24:18 crc kubenswrapper[4754]: I0105 20:24:18.404624 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-7b945" event={"ID":"f1a3a024-3293-4e7b-b1cd-c93c914c190e","Type":"ContainerStarted","Data":"15db0afc7a11eff38b54273e5bc9481c63d7c52a91e2faa34fa61b1933371140"} Jan 05 20:24:18 crc kubenswrapper[4754]: I0105 20:24:18.406005 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-6lf69" event={"ID":"1f664632-a6e1-491d-b0cf-be1717a6d28b","Type":"ContainerStarted","Data":"fab2577f2a42c606397980a033ed9b841283de6b26b25da2740d493f7ce5dadb"} Jan 05 20:24:18 crc kubenswrapper[4754]: I0105 20:24:18.411779 4754 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"856c52ba9bf6dc0e2c3da3888c3ef87d25ee026b1354f7636e400dbe3c2d5919"} pod="openshift-machine-config-operator/machine-config-daemon-pkzls" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 05 20:24:18 crc kubenswrapper[4754]: I0105 20:24:18.411905 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-dqks7" event={"ID":"f289c3c4-ad02-4022-ac22-239133f6c1ca","Type":"ContainerStarted","Data":"37a0198576797a5542483d83db38cbd1f4109412e3e845464620bc5a26b5e53e"} Jan 05 20:24:18 crc kubenswrapper[4754]: I0105 20:24:18.411923 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" containerID="cri-o://856c52ba9bf6dc0e2c3da3888c3ef87d25ee026b1354f7636e400dbe3c2d5919" gracePeriod=600 Jan 05 20:24:18 crc kubenswrapper[4754]: I0105 20:24:18.659441 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-bb586bbf4-hmwzd"] Jan 05 20:24:18 crc kubenswrapper[4754]: I0105 20:24:18.706258 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-68c649d9d-69j2s"] Jan 05 20:24:18 crc kubenswrapper[4754]: I0105 20:24:18.732087 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7cd87b778f-swt4w"] Jan 05 20:24:18 crc kubenswrapper[4754]: I0105 20:24:18.747917 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-7b549fc966-lrhk6"] Jan 05 20:24:18 crc kubenswrapper[4754]: I0105 20:24:18.773338 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/placement-operator-controller-manager-9b6f8f78c-jcntk"] Jan 05 20:24:18 crc kubenswrapper[4754]: I0105 20:24:18.780645 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-9dbdf6486-s4n44"] Jan 05 20:24:18 crc kubenswrapper[4754]: I0105 20:24:18.796833 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-688488f44f-62gsk"] Jan 05 20:24:18 crc kubenswrapper[4754]: I0105 20:24:18.797986 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-598945d5b8-hdh6j"] Jan 05 20:24:18 crc kubenswrapper[4754]: I0105 20:24:18.811607 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-568985c78-st6w9"] Jan 05 20:24:18 crc kubenswrapper[4754]: I0105 20:24:18.826759 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-h56wh"] Jan 05 20:24:18 crc kubenswrapper[4754]: I0105 20:24:18.834355 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bf6d4f946-jngdn"] Jan 05 20:24:18 crc kubenswrapper[4754]: I0105 20:24:18.840430 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vfsjd"] Jan 05 20:24:18 crc kubenswrapper[4754]: I0105 20:24:18.848715 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-6c866cfdcb-944qj"] Jan 05 20:24:18 crc kubenswrapper[4754]: I0105 20:24:18.871366 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b88bfc995-gj5xk"] Jan 05 20:24:18 crc kubenswrapper[4754]: E0105 20:24:18.929612 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:c10647131e6fa6afeb11ea28e513b60f22dbfbb4ddc3727850b1fe5799890c41,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pff8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-7b88bfc995-gj5xk_openstack-operators(8df02427-4d10-41bb-9798-82cf7b8bca3e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 05 20:24:18 crc kubenswrapper[4754]: E0105 20:24:18.931481 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-gj5xk" podUID="8df02427-4d10-41bb-9798-82cf7b8bca3e" Jan 05 20:24:18 crc kubenswrapper[4754]: W0105 20:24:18.939385 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92928a21_7bbe_44b9_9d2b_2fcce8d0dd1f.slice/crio-94e8360bde0278f0b43015608df3af9da24ac15d46419636fe068c35a133adcd WatchSource:0}: Error finding container 94e8360bde0278f0b43015608df3af9da24ac15d46419636fe068c35a133adcd: Status 404 returned error can't find the container with id 94e8360bde0278f0b43015608df3af9da24ac15d46419636fe068c35a133adcd Jan 05 20:24:18 crc kubenswrapper[4754]: E0105 20:24:18.941021 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:19345236c6b6bd5ae772e336fa6065c6e94c8990d1bf05d30073ddb95ffffb4d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4zs67,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-7b549fc966-lrhk6_openstack-operators(92928a21-7bbe-44b9-9d2b-2fcce8d0dd1f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 05 20:24:18 crc kubenswrapper[4754]: E0105 20:24:18.942236 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-lrhk6" podUID="92928a21-7bbe-44b9-9d2b-2fcce8d0dd1f" Jan 05 20:24:18 crc kubenswrapper[4754]: E0105 20:24:18.955184 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:4e3d234c1398039c2593611f7b0fd2a6b284cafb1563e6737876a265b9af42b6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xwq9q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-6c866cfdcb-944qj_openstack-operators(4d09717a-7822-46ae-8192-62aa7305304b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 05 20:24:18 crc kubenswrapper[4754]: E0105 20:24:18.956588 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-944qj" podUID="4d09717a-7822-46ae-8192-62aa7305304b" Jan 05 20:24:18 crc kubenswrapper[4754]: E0105 20:24:18.968456 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:879d3d679b58ae84419b7907ad092ad4d24bcc9222ce621ce464fd0fea347b0c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zwwkd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-568985c78-st6w9_openstack-operators(91877573-8199-4055-988f-96bd6469af4f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 05 20:24:18 crc kubenswrapper[4754]: E0105 20:24:18.970008 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/keystone-operator-controller-manager-568985c78-st6w9" podUID="91877573-8199-4055-988f-96bd6469af4f" Jan 05 20:24:19 crc kubenswrapper[4754]: I0105 20:24:19.389218 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/736d23ce-6bc0-439b-b1ff-86aad6363c2a-cert\") pod \"infra-operator-controller-manager-6d99759cf-2g75d\" (UID: \"736d23ce-6bc0-439b-b1ff-86aad6363c2a\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-2g75d" Jan 05 20:24:19 crc kubenswrapper[4754]: E0105 20:24:19.389463 4754 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 05 20:24:19 crc kubenswrapper[4754]: E0105 
20:24:19.389555 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/736d23ce-6bc0-439b-b1ff-86aad6363c2a-cert podName:736d23ce-6bc0-439b-b1ff-86aad6363c2a nodeName:}" failed. No retries permitted until 2026-01-05 20:24:23.389532911 +0000 UTC m=+1150.098716815 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/736d23ce-6bc0-439b-b1ff-86aad6363c2a-cert") pod "infra-operator-controller-manager-6d99759cf-2g75d" (UID: "736d23ce-6bc0-439b-b1ff-86aad6363c2a") : secret "infra-operator-webhook-server-cert" not found Jan 05 20:24:19 crc kubenswrapper[4754]: I0105 20:24:19.461113 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-h56wh" event={"ID":"6d71c5c9-f75a-475f-880c-d234d43ad7d9","Type":"ContainerStarted","Data":"33b81be0013ad9989201d221dc204e18600d7e923fdf19403204f2dd1a9b3458"} Jan 05 20:24:19 crc kubenswrapper[4754]: I0105 20:24:19.469036 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vfsjd" event={"ID":"29d3a96c-7dee-4a63-945c-3fef7cdcc7e7","Type":"ContainerStarted","Data":"1163eb27df550f7eddcdbabdcd6b08cad06bc32d9edf556c4e770307cfdbd47b"} Jan 05 20:24:19 crc kubenswrapper[4754]: I0105 20:24:19.479874 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-swt4w" event={"ID":"0cd346d8-d14a-404e-b2fa-16fc917e6886","Type":"ContainerStarted","Data":"d86bb84816556d07c3e068e60362216d1bbbeba42a6e1d367674b91456d0b1ee"} Jan 05 20:24:19 crc kubenswrapper[4754]: I0105 20:24:19.491212 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-568985c78-st6w9" event={"ID":"91877573-8199-4055-988f-96bd6469af4f","Type":"ContainerStarted","Data":"1eb34b9adfe089cdc5514a1c6b8e586f191ee7bc01148d9fbf6144c170fefba8"} 
Jan 05 20:24:19 crc kubenswrapper[4754]: I0105 20:24:19.494866 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-944qj" event={"ID":"4d09717a-7822-46ae-8192-62aa7305304b","Type":"ContainerStarted","Data":"f55006cd169bf8ef026fef8112782e45bad3e42ca3bad4765dc6a9de6e46064f"} Jan 05 20:24:19 crc kubenswrapper[4754]: I0105 20:24:19.495154 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-69j2s" event={"ID":"983e4f4a-fe90-4460-ad97-b6955a888933","Type":"ContainerStarted","Data":"f76de3bae75aa3fcdfb88a3ed894ed8cfdac8d4d61890f035ddd3b5e85f6a325"} Jan 05 20:24:19 crc kubenswrapper[4754]: E0105 20:24:19.495906 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:879d3d679b58ae84419b7907ad092ad4d24bcc9222ce621ce464fd0fea347b0c\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-568985c78-st6w9" podUID="91877573-8199-4055-988f-96bd6469af4f" Jan 05 20:24:19 crc kubenswrapper[4754]: E0105 20:24:19.497431 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:4e3d234c1398039c2593611f7b0fd2a6b284cafb1563e6737876a265b9af42b6\\\"\"" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-944qj" podUID="4d09717a-7822-46ae-8192-62aa7305304b" Jan 05 20:24:19 crc kubenswrapper[4754]: I0105 20:24:19.498191 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-lrhk6" event={"ID":"92928a21-7bbe-44b9-9d2b-2fcce8d0dd1f","Type":"ContainerStarted","Data":"94e8360bde0278f0b43015608df3af9da24ac15d46419636fe068c35a133adcd"} Jan 05 
20:24:19 crc kubenswrapper[4754]: E0105 20:24:19.502491 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:19345236c6b6bd5ae772e336fa6065c6e94c8990d1bf05d30073ddb95ffffb4d\\\"\"" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-lrhk6" podUID="92928a21-7bbe-44b9-9d2b-2fcce8d0dd1f" Jan 05 20:24:19 crc kubenswrapper[4754]: I0105 20:24:19.539360 4754 generic.go:334] "Generic (PLEG): container finished" podID="1ce145f2-f010-4086-963c-23e68ff9e280" containerID="856c52ba9bf6dc0e2c3da3888c3ef87d25ee026b1354f7636e400dbe3c2d5919" exitCode=0 Jan 05 20:24:19 crc kubenswrapper[4754]: I0105 20:24:19.539418 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" event={"ID":"1ce145f2-f010-4086-963c-23e68ff9e280","Type":"ContainerDied","Data":"856c52ba9bf6dc0e2c3da3888c3ef87d25ee026b1354f7636e400dbe3c2d5919"} Jan 05 20:24:19 crc kubenswrapper[4754]: I0105 20:24:19.539505 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" event={"ID":"1ce145f2-f010-4086-963c-23e68ff9e280","Type":"ContainerStarted","Data":"be55096ff3dda2956a1dfef42279f31ee70ee0a455c9cf669941a07e6ba339b6"} Jan 05 20:24:19 crc kubenswrapper[4754]: I0105 20:24:19.539532 4754 scope.go:117] "RemoveContainer" containerID="d2a44ceb70c9b418a71e277628ae418adb7db249088112b846b1cbe05a8c0760" Jan 05 20:24:19 crc kubenswrapper[4754]: I0105 20:24:19.542825 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-gj5xk" event={"ID":"8df02427-4d10-41bb-9798-82cf7b8bca3e","Type":"ContainerStarted","Data":"f0e0334bb6201c5a61d6004a55d62b684adcdff1b81a19d71784b0276265176d"} Jan 05 20:24:19 crc kubenswrapper[4754]: E0105 20:24:19.545804 4754 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:c10647131e6fa6afeb11ea28e513b60f22dbfbb4ddc3727850b1fe5799890c41\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-gj5xk" podUID="8df02427-4d10-41bb-9798-82cf7b8bca3e" Jan 05 20:24:19 crc kubenswrapper[4754]: I0105 20:24:19.551283 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-s4n44" event={"ID":"77f4456d-e6a6-466a-a74c-5276e4951784","Type":"ContainerStarted","Data":"4f5b4a85d8813d0c58e7df3c6a400dc74ed9c43688c674c522b6ecef1f72ef97"} Jan 05 20:24:19 crc kubenswrapper[4754]: I0105 20:24:19.554092 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-688488f44f-62gsk" event={"ID":"4b33baa5-64bb-4df7-ac22-925d718f9d60","Type":"ContainerStarted","Data":"ed5b2cde7ae202a113bb675ac3da5fdba8afcde881a27977b2d8e59d03014b8f"} Jan 05 20:24:19 crc kubenswrapper[4754]: I0105 20:24:19.557508 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-hmwzd" event={"ID":"a6907240-bf3c-4f9a-b7c8-4fbdf3174c8a","Type":"ContainerStarted","Data":"be99d30cc443c55906b1e7cecd81b0f873cd3f67cb3535c94b5326304015d579"} Jan 05 20:24:19 crc kubenswrapper[4754]: I0105 20:24:19.570530 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-jngdn" event={"ID":"db5f9ab8-2422-439c-a857-23f918cfa919","Type":"ContainerStarted","Data":"a8ffd4404d53d93c995e094b77a57e993a5886ea714f94a889993b7f05f9a0c8"} Jan 05 20:24:19 crc kubenswrapper[4754]: I0105 20:24:19.572680 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-hdh6j" 
event={"ID":"83cc207a-0725-4775-b2f7-93c71985ba1e","Type":"ContainerStarted","Data":"db1455f7aca76e442d28070c8b1355dd614da81c2dca4506beaa9ab86673b4e3"} Jan 05 20:24:19 crc kubenswrapper[4754]: I0105 20:24:19.574393 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-jcntk" event={"ID":"82f028d6-51a7-461a-ae7d-cd2da5f47afb","Type":"ContainerStarted","Data":"bec4bd6e2810570716ae71bc34333a2f11eb25e617f49943b4fee45f298744d4"} Jan 05 20:24:19 crc kubenswrapper[4754]: I0105 20:24:19.798024 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fed06176-d7ad-4373-84df-204b6fdbf5cf-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd72b9pp\" (UID: \"fed06176-d7ad-4373-84df-204b6fdbf5cf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd72b9pp" Jan 05 20:24:19 crc kubenswrapper[4754]: E0105 20:24:19.798226 4754 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 05 20:24:19 crc kubenswrapper[4754]: E0105 20:24:19.798372 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fed06176-d7ad-4373-84df-204b6fdbf5cf-cert podName:fed06176-d7ad-4373-84df-204b6fdbf5cf nodeName:}" failed. No retries permitted until 2026-01-05 20:24:23.798281631 +0000 UTC m=+1150.507465505 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fed06176-d7ad-4373-84df-204b6fdbf5cf-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd72b9pp" (UID: "fed06176-d7ad-4373-84df-204b6fdbf5cf") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 05 20:24:20 crc kubenswrapper[4754]: I0105 20:24:20.002246 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a078215d-9fb5-413f-b542-ca5b3c6fb296-metrics-certs\") pod \"openstack-operator-controller-manager-74f9c55c9c-f9rnv\" (UID: \"a078215d-9fb5-413f-b542-ca5b3c6fb296\") " pod="openstack-operators/openstack-operator-controller-manager-74f9c55c9c-f9rnv" Jan 05 20:24:20 crc kubenswrapper[4754]: E0105 20:24:20.002405 4754 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 05 20:24:20 crc kubenswrapper[4754]: E0105 20:24:20.002577 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a078215d-9fb5-413f-b542-ca5b3c6fb296-metrics-certs podName:a078215d-9fb5-413f-b542-ca5b3c6fb296 nodeName:}" failed. No retries permitted until 2026-01-05 20:24:24.002558828 +0000 UTC m=+1150.711742702 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a078215d-9fb5-413f-b542-ca5b3c6fb296-metrics-certs") pod "openstack-operator-controller-manager-74f9c55c9c-f9rnv" (UID: "a078215d-9fb5-413f-b542-ca5b3c6fb296") : secret "metrics-server-cert" not found Jan 05 20:24:20 crc kubenswrapper[4754]: I0105 20:24:20.002709 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a078215d-9fb5-413f-b542-ca5b3c6fb296-webhook-certs\") pod \"openstack-operator-controller-manager-74f9c55c9c-f9rnv\" (UID: \"a078215d-9fb5-413f-b542-ca5b3c6fb296\") " pod="openstack-operators/openstack-operator-controller-manager-74f9c55c9c-f9rnv" Jan 05 20:24:20 crc kubenswrapper[4754]: E0105 20:24:20.002794 4754 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 05 20:24:20 crc kubenswrapper[4754]: E0105 20:24:20.002821 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a078215d-9fb5-413f-b542-ca5b3c6fb296-webhook-certs podName:a078215d-9fb5-413f-b542-ca5b3c6fb296 nodeName:}" failed. No retries permitted until 2026-01-05 20:24:24.002813175 +0000 UTC m=+1150.711997049 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a078215d-9fb5-413f-b542-ca5b3c6fb296-webhook-certs") pod "openstack-operator-controller-manager-74f9c55c9c-f9rnv" (UID: "a078215d-9fb5-413f-b542-ca5b3c6fb296") : secret "webhook-server-cert" not found Jan 05 20:24:20 crc kubenswrapper[4754]: E0105 20:24:20.593205 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:19345236c6b6bd5ae772e336fa6065c6e94c8990d1bf05d30073ddb95ffffb4d\\\"\"" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-lrhk6" podUID="92928a21-7bbe-44b9-9d2b-2fcce8d0dd1f" Jan 05 20:24:20 crc kubenswrapper[4754]: E0105 20:24:20.593772 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:4e3d234c1398039c2593611f7b0fd2a6b284cafb1563e6737876a265b9af42b6\\\"\"" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-944qj" podUID="4d09717a-7822-46ae-8192-62aa7305304b" Jan 05 20:24:20 crc kubenswrapper[4754]: E0105 20:24:20.593873 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:879d3d679b58ae84419b7907ad092ad4d24bcc9222ce621ce464fd0fea347b0c\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-568985c78-st6w9" podUID="91877573-8199-4055-988f-96bd6469af4f" Jan 05 20:24:20 crc kubenswrapper[4754]: E0105 20:24:20.594237 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:c10647131e6fa6afeb11ea28e513b60f22dbfbb4ddc3727850b1fe5799890c41\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-gj5xk" podUID="8df02427-4d10-41bb-9798-82cf7b8bca3e" Jan 05 20:24:23 crc kubenswrapper[4754]: I0105 20:24:23.490188 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/736d23ce-6bc0-439b-b1ff-86aad6363c2a-cert\") pod \"infra-operator-controller-manager-6d99759cf-2g75d\" (UID: \"736d23ce-6bc0-439b-b1ff-86aad6363c2a\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-2g75d" Jan 05 20:24:23 crc kubenswrapper[4754]: E0105 20:24:23.490527 4754 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 05 20:24:23 crc kubenswrapper[4754]: E0105 20:24:23.490746 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/736d23ce-6bc0-439b-b1ff-86aad6363c2a-cert podName:736d23ce-6bc0-439b-b1ff-86aad6363c2a nodeName:}" failed. No retries permitted until 2026-01-05 20:24:31.490719923 +0000 UTC m=+1158.199903807 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/736d23ce-6bc0-439b-b1ff-86aad6363c2a-cert") pod "infra-operator-controller-manager-6d99759cf-2g75d" (UID: "736d23ce-6bc0-439b-b1ff-86aad6363c2a") : secret "infra-operator-webhook-server-cert" not found Jan 05 20:24:23 crc kubenswrapper[4754]: I0105 20:24:23.922991 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fed06176-d7ad-4373-84df-204b6fdbf5cf-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd72b9pp\" (UID: \"fed06176-d7ad-4373-84df-204b6fdbf5cf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd72b9pp" Jan 05 20:24:23 crc kubenswrapper[4754]: E0105 20:24:23.924111 4754 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 05 20:24:23 crc kubenswrapper[4754]: E0105 20:24:23.924168 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fed06176-d7ad-4373-84df-204b6fdbf5cf-cert podName:fed06176-d7ad-4373-84df-204b6fdbf5cf nodeName:}" failed. No retries permitted until 2026-01-05 20:24:31.924152685 +0000 UTC m=+1158.633336559 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fed06176-d7ad-4373-84df-204b6fdbf5cf-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd72b9pp" (UID: "fed06176-d7ad-4373-84df-204b6fdbf5cf") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 05 20:24:24 crc kubenswrapper[4754]: I0105 20:24:24.025776 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a078215d-9fb5-413f-b542-ca5b3c6fb296-metrics-certs\") pod \"openstack-operator-controller-manager-74f9c55c9c-f9rnv\" (UID: \"a078215d-9fb5-413f-b542-ca5b3c6fb296\") " pod="openstack-operators/openstack-operator-controller-manager-74f9c55c9c-f9rnv" Jan 05 20:24:24 crc kubenswrapper[4754]: I0105 20:24:24.025898 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a078215d-9fb5-413f-b542-ca5b3c6fb296-webhook-certs\") pod \"openstack-operator-controller-manager-74f9c55c9c-f9rnv\" (UID: \"a078215d-9fb5-413f-b542-ca5b3c6fb296\") " pod="openstack-operators/openstack-operator-controller-manager-74f9c55c9c-f9rnv" Jan 05 20:24:24 crc kubenswrapper[4754]: E0105 20:24:24.026044 4754 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 05 20:24:24 crc kubenswrapper[4754]: E0105 20:24:24.026148 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a078215d-9fb5-413f-b542-ca5b3c6fb296-metrics-certs podName:a078215d-9fb5-413f-b542-ca5b3c6fb296 nodeName:}" failed. No retries permitted until 2026-01-05 20:24:32.026126748 +0000 UTC m=+1158.735310612 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a078215d-9fb5-413f-b542-ca5b3c6fb296-metrics-certs") pod "openstack-operator-controller-manager-74f9c55c9c-f9rnv" (UID: "a078215d-9fb5-413f-b542-ca5b3c6fb296") : secret "metrics-server-cert" not found Jan 05 20:24:24 crc kubenswrapper[4754]: E0105 20:24:24.026072 4754 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 05 20:24:24 crc kubenswrapper[4754]: E0105 20:24:24.026284 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a078215d-9fb5-413f-b542-ca5b3c6fb296-webhook-certs podName:a078215d-9fb5-413f-b542-ca5b3c6fb296 nodeName:}" failed. No retries permitted until 2026-01-05 20:24:32.026268712 +0000 UTC m=+1158.735452586 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a078215d-9fb5-413f-b542-ca5b3c6fb296-webhook-certs") pod "openstack-operator-controller-manager-74f9c55c9c-f9rnv" (UID: "a078215d-9fb5-413f-b542-ca5b3c6fb296") : secret "webhook-server-cert" not found Jan 05 20:24:29 crc kubenswrapper[4754]: E0105 20:24:29.867056 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:573d7dba212cbc32101496a7cbe01e391af9891bed3bec717f16bed4d6c23e04" Jan 05 20:24:29 crc kubenswrapper[4754]: E0105 20:24:29.869580 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:573d7dba212cbc32101496a7cbe01e391af9891bed3bec717f16bed4d6c23e04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4bqxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-658dd65b86-7b945_openstack-operators(f1a3a024-3293-4e7b-b1cd-c93c914c190e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 05 20:24:29 crc kubenswrapper[4754]: E0105 20:24:29.871444 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-7b945" podUID="f1a3a024-3293-4e7b-b1cd-c93c914c190e" Jan 05 20:24:30 crc kubenswrapper[4754]: E0105 20:24:30.708335 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:573d7dba212cbc32101496a7cbe01e391af9891bed3bec717f16bed4d6c23e04\\\"\"" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-7b945" podUID="f1a3a024-3293-4e7b-b1cd-c93c914c190e" Jan 05 20:24:30 crc kubenswrapper[4754]: E0105 20:24:30.759489 4754 log.go:32] "PullImage from image service failed" err="rpc 
error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557" Jan 05 20:24:30 crc kubenswrapper[4754]: E0105 20:24:30.759759 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pwm2z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-7cd87b778f-swt4w_openstack-operators(0cd346d8-d14a-404e-b2fa-16fc917e6886): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 05 20:24:30 crc kubenswrapper[4754]: E0105 20:24:30.761821 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-swt4w" podUID="0cd346d8-d14a-404e-b2fa-16fc917e6886" Jan 05 20:24:31 crc kubenswrapper[4754]: I0105 20:24:31.499396 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/736d23ce-6bc0-439b-b1ff-86aad6363c2a-cert\") pod \"infra-operator-controller-manager-6d99759cf-2g75d\" (UID: \"736d23ce-6bc0-439b-b1ff-86aad6363c2a\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-2g75d" Jan 05 20:24:31 crc kubenswrapper[4754]: I0105 20:24:31.515449 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/736d23ce-6bc0-439b-b1ff-86aad6363c2a-cert\") pod \"infra-operator-controller-manager-6d99759cf-2g75d\" (UID: \"736d23ce-6bc0-439b-b1ff-86aad6363c2a\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-2g75d" Jan 05 20:24:31 crc kubenswrapper[4754]: I0105 20:24:31.655615 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-2g75d" Jan 05 20:24:31 crc kubenswrapper[4754]: E0105 20:24:31.719136 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-swt4w" podUID="0cd346d8-d14a-404e-b2fa-16fc917e6886" Jan 05 20:24:32 crc kubenswrapper[4754]: I0105 20:24:32.012788 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fed06176-d7ad-4373-84df-204b6fdbf5cf-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd72b9pp\" (UID: \"fed06176-d7ad-4373-84df-204b6fdbf5cf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd72b9pp" Jan 05 20:24:32 crc kubenswrapper[4754]: I0105 20:24:32.022202 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fed06176-d7ad-4373-84df-204b6fdbf5cf-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd72b9pp\" (UID: \"fed06176-d7ad-4373-84df-204b6fdbf5cf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd72b9pp" Jan 05 20:24:32 crc kubenswrapper[4754]: I0105 20:24:32.114391 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/a078215d-9fb5-413f-b542-ca5b3c6fb296-webhook-certs\") pod \"openstack-operator-controller-manager-74f9c55c9c-f9rnv\" (UID: \"a078215d-9fb5-413f-b542-ca5b3c6fb296\") " pod="openstack-operators/openstack-operator-controller-manager-74f9c55c9c-f9rnv" Jan 05 20:24:32 crc kubenswrapper[4754]: I0105 20:24:32.114643 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a078215d-9fb5-413f-b542-ca5b3c6fb296-metrics-certs\") pod \"openstack-operator-controller-manager-74f9c55c9c-f9rnv\" (UID: \"a078215d-9fb5-413f-b542-ca5b3c6fb296\") " pod="openstack-operators/openstack-operator-controller-manager-74f9c55c9c-f9rnv" Jan 05 20:24:32 crc kubenswrapper[4754]: I0105 20:24:32.119249 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a078215d-9fb5-413f-b542-ca5b3c6fb296-metrics-certs\") pod \"openstack-operator-controller-manager-74f9c55c9c-f9rnv\" (UID: \"a078215d-9fb5-413f-b542-ca5b3c6fb296\") " pod="openstack-operators/openstack-operator-controller-manager-74f9c55c9c-f9rnv" Jan 05 20:24:32 crc kubenswrapper[4754]: I0105 20:24:32.119983 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a078215d-9fb5-413f-b542-ca5b3c6fb296-webhook-certs\") pod \"openstack-operator-controller-manager-74f9c55c9c-f9rnv\" (UID: \"a078215d-9fb5-413f-b542-ca5b3c6fb296\") " pod="openstack-operators/openstack-operator-controller-manager-74f9c55c9c-f9rnv" Jan 05 20:24:32 crc kubenswrapper[4754]: I0105 20:24:32.147528 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-74f9c55c9c-f9rnv" Jan 05 20:24:32 crc kubenswrapper[4754]: I0105 20:24:32.209704 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd72b9pp" Jan 05 20:24:40 crc kubenswrapper[4754]: E0105 20:24:40.472186 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:202756538820b5fa874d07a71ece4f048f41ccca8228d359c8cd25a00e9c0848" Jan 05 20:24:40 crc kubenswrapper[4754]: E0105 20:24:40.472829 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:202756538820b5fa874d07a71ece4f048f41ccca8228d359c8cd25a00e9c0848,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lrb2t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 
8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-f99f54bc8-brnwf_openstack-operators(dd93e799-6591-41d5-988a-18cc6d8c836d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 05 20:24:40 crc kubenswrapper[4754]: E0105 20:24:40.474324 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-brnwf" podUID="dd93e799-6591-41d5-988a-18cc6d8c836d" Jan 05 20:24:40 crc kubenswrapper[4754]: E0105 20:24:40.676873 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59" Jan 05 20:24:40 crc kubenswrapper[4754]: E0105 20:24:40.677218 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8svl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-bf6d4f946-jngdn_openstack-operators(db5f9ab8-2422-439c-a857-23f918cfa919): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 05 20:24:40 crc kubenswrapper[4754]: E0105 20:24:40.678672 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-jngdn" podUID="db5f9ab8-2422-439c-a857-23f918cfa919" Jan 05 20:24:40 crc kubenswrapper[4754]: E0105 20:24:40.824972 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-jngdn" podUID="db5f9ab8-2422-439c-a857-23f918cfa919" Jan 05 20:24:40 crc kubenswrapper[4754]: E0105 20:24:40.825226 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:202756538820b5fa874d07a71ece4f048f41ccca8228d359c8cd25a00e9c0848\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-brnwf" podUID="dd93e799-6591-41d5-988a-18cc6d8c836d" Jan 05 20:24:42 crc kubenswrapper[4754]: E0105 20:24:42.780375 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:df69e4193043476bc71d0e06ac8bc7bbd17f7b624d495aae6b7c5e5b40c9e1e7" Jan 05 20:24:42 crc kubenswrapper[4754]: E0105 20:24:42.781874 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:df69e4193043476bc71d0e06ac8bc7bbd17f7b624d495aae6b7c5e5b40c9e1e7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vn7t4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-bb586bbf4-hmwzd_openstack-operators(a6907240-bf3c-4f9a-b7c8-4fbdf3174c8a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 05 20:24:42 crc kubenswrapper[4754]: E0105 20:24:42.783496 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-hmwzd" podUID="a6907240-bf3c-4f9a-b7c8-4fbdf3174c8a" Jan 05 20:24:42 crc kubenswrapper[4754]: E0105 20:24:42.854599 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/swift-operator@sha256:df69e4193043476bc71d0e06ac8bc7bbd17f7b624d495aae6b7c5e5b40c9e1e7\\\"\"" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-hmwzd" podUID="a6907240-bf3c-4f9a-b7c8-4fbdf3174c8a" Jan 05 20:24:44 crc kubenswrapper[4754]: E0105 20:24:44.657587 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:900050d3501c0785b227db34b89883efe68247816e5c7427cacb74f8aa10605a" Jan 05 20:24:44 crc kubenswrapper[4754]: E0105 20:24:44.658243 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:900050d3501c0785b227db34b89883efe68247816e5c7427cacb74f8aa10605a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p6wvr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-66f8b87655-dqks7_openstack-operators(f289c3c4-ad02-4022-ac22-239133f6c1ca): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 05 20:24:44 crc kubenswrapper[4754]: E0105 20:24:44.659450 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-dqks7" podUID="f289c3c4-ad02-4022-ac22-239133f6c1ca" Jan 05 20:24:44 crc kubenswrapper[4754]: E0105 20:24:44.875269 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/designate-operator@sha256:900050d3501c0785b227db34b89883efe68247816e5c7427cacb74f8aa10605a\\\"\"" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-dqks7" podUID="f289c3c4-ad02-4022-ac22-239133f6c1ca" Jan 05 20:24:45 crc kubenswrapper[4754]: E0105 20:24:45.416721 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:b7111c690e8fda3cb0c5969bcfa68308907fd0cf05f73ecdcb9ac1423aa7bba3" Jan 05 20:24:45 crc kubenswrapper[4754]: E0105 20:24:45.417411 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:b7111c690e8fda3cb0c5969bcfa68308907fd0cf05f73ecdcb9ac1423aa7bba3,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b2pzt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-7f5ddd8d7b-6lf69_openstack-operators(1f664632-a6e1-491d-b0cf-be1717a6d28b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 05 20:24:45 crc kubenswrapper[4754]: E0105 20:24:45.418629 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-6lf69" podUID="1f664632-a6e1-491d-b0cf-be1717a6d28b" Jan 05 20:24:45 crc kubenswrapper[4754]: E0105 20:24:45.885543 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:b7111c690e8fda3cb0c5969bcfa68308907fd0cf05f73ecdcb9ac1423aa7bba3\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-6lf69" podUID="1f664632-a6e1-491d-b0cf-be1717a6d28b" Jan 05 20:24:46 crc kubenswrapper[4754]: E0105 20:24:46.098640 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:1b684c4ca525a279deee45980140d895e264526c5c7e0a6981d6fae6cbcaa420" Jan 05 20:24:46 crc kubenswrapper[4754]: E0105 20:24:46.098818 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:1b684c4ca525a279deee45980140d895e264526c5c7e0a6981d6fae6cbcaa420,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c457q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-9b6f8f78c-jcntk_openstack-operators(82f028d6-51a7-461a-ae7d-cd2da5f47afb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 05 20:24:46 crc kubenswrapper[4754]: E0105 20:24:46.100280 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-jcntk" podUID="82f028d6-51a7-461a-ae7d-cd2da5f47afb" Jan 05 20:24:46 crc kubenswrapper[4754]: E0105 20:24:46.892813 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/placement-operator@sha256:1b684c4ca525a279deee45980140d895e264526c5c7e0a6981d6fae6cbcaa420\\\"\"" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-jcntk" podUID="82f028d6-51a7-461a-ae7d-cd2da5f47afb" Jan 05 20:24:51 crc kubenswrapper[4754]: E0105 20:24:51.236319 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168" Jan 05 20:24:51 crc kubenswrapper[4754]: E0105 20:24:51.236832 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s7brs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-68c649d9d-69j2s_openstack-operators(983e4f4a-fe90-4460-ad97-b6955a888933): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 05 20:24:51 crc kubenswrapper[4754]: E0105 20:24:51.238051 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-69j2s" podUID="983e4f4a-fe90-4460-ad97-b6955a888933" Jan 05 20:24:51 crc kubenswrapper[4754]: E0105 20:24:51.950545 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-69j2s" podUID="983e4f4a-fe90-4460-ad97-b6955a888933" Jan 05 20:24:52 crc kubenswrapper[4754]: E0105 20:24:52.009449 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:c846ab4a49272557884db6b976f979e6b9dce1aa73e5eb7872b4472f44602a1c" Jan 05 20:24:52 crc kubenswrapper[4754]: E0105 20:24:52.009687 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:c846ab4a49272557884db6b976f979e6b9dce1aa73e5eb7872b4472f44602a1c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wvt6r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-598945d5b8-hdh6j_openstack-operators(83cc207a-0725-4775-b2f7-93c71985ba1e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 05 20:24:52 crc kubenswrapper[4754]: E0105 20:24:52.011061 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-hdh6j" podUID="83cc207a-0725-4775-b2f7-93c71985ba1e" Jan 05 20:24:52 crc kubenswrapper[4754]: E0105 20:24:52.970899 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/manila-operator@sha256:c846ab4a49272557884db6b976f979e6b9dce1aa73e5eb7872b4472f44602a1c\\\"\"" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-hdh6j" podUID="83cc207a-0725-4775-b2f7-93c71985ba1e" Jan 05 20:24:57 crc kubenswrapper[4754]: E0105 20:24:57.180546 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:19345236c6b6bd5ae772e336fa6065c6e94c8990d1bf05d30073ddb95ffffb4d" Jan 05 20:24:57 crc kubenswrapper[4754]: E0105 20:24:57.181240 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:19345236c6b6bd5ae772e336fa6065c6e94c8990d1bf05d30073ddb95ffffb4d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4zs67,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-7b549fc966-lrhk6_openstack-operators(92928a21-7bbe-44b9-9d2b-2fcce8d0dd1f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 05 20:24:57 crc kubenswrapper[4754]: E0105 20:24:57.182569 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-lrhk6" podUID="92928a21-7bbe-44b9-9d2b-2fcce8d0dd1f" Jan 05 20:24:57 crc kubenswrapper[4754]: E0105 20:24:57.834038 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/test-operator@sha256:4e3d234c1398039c2593611f7b0fd2a6b284cafb1563e6737876a265b9af42b6" Jan 05 20:24:57 crc kubenswrapper[4754]: E0105 20:24:57.834666 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:4e3d234c1398039c2593611f7b0fd2a6b284cafb1563e6737876a265b9af42b6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xwq9q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-6c866cfdcb-944qj_openstack-operators(4d09717a-7822-46ae-8192-62aa7305304b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 05 20:24:57 crc kubenswrapper[4754]: E0105 20:24:57.835939 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-944qj" podUID="4d09717a-7822-46ae-8192-62aa7305304b" Jan 05 20:24:59 crc kubenswrapper[4754]: E0105 20:24:59.605950 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 05 20:24:59 crc kubenswrapper[4754]: E0105 20:24:59.606606 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jwghl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-vfsjd_openstack-operators(29d3a96c-7dee-4a63-945c-3fef7cdcc7e7): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 05 20:24:59 crc kubenswrapper[4754]: E0105 20:24:59.607866 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vfsjd" podUID="29d3a96c-7dee-4a63-945c-3fef7cdcc7e7" Jan 05 20:24:59 crc kubenswrapper[4754]: E0105 20:24:59.704326 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.107:5001/openstack-k8s-operators/telemetry-operator:9533fa79d915abe9beaf16e5c08baaa4a197eecd" Jan 05 20:24:59 crc kubenswrapper[4754]: E0105 20:24:59.704387 4754 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.107:5001/openstack-k8s-operators/telemetry-operator:9533fa79d915abe9beaf16e5c08baaa4a197eecd" Jan 05 20:24:59 crc kubenswrapper[4754]: E0105 20:24:59.704898 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.107:5001/openstack-k8s-operators/telemetry-operator:9533fa79d915abe9beaf16e5c08baaa4a197eecd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ktz4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-688488f44f-62gsk_openstack-operators(4b33baa5-64bb-4df7-ac22-925d718f9d60): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 05 20:24:59 crc kubenswrapper[4754]: E0105 20:24:59.706110 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/telemetry-operator-controller-manager-688488f44f-62gsk" podUID="4b33baa5-64bb-4df7-ac22-925d718f9d60" Jan 05 20:25:00 crc kubenswrapper[4754]: E0105 20:25:00.052602 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vfsjd" podUID="29d3a96c-7dee-4a63-945c-3fef7cdcc7e7" Jan 05 20:25:00 crc kubenswrapper[4754]: E0105 20:25:00.052930 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.107:5001/openstack-k8s-operators/telemetry-operator:9533fa79d915abe9beaf16e5c08baaa4a197eecd\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-688488f44f-62gsk" podUID="4b33baa5-64bb-4df7-ac22-925d718f9d60" Jan 05 20:25:00 crc kubenswrapper[4754]: I0105 20:25:00.339598 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-74f9c55c9c-f9rnv"] Jan 05 20:25:00 crc kubenswrapper[4754]: W0105 20:25:00.348652 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda078215d_9fb5_413f_b542_ca5b3c6fb296.slice/crio-ed46cca9cf048540fddfed87da047e610189302172237efca8969b8796bf46a5 WatchSource:0}: Error finding container ed46cca9cf048540fddfed87da047e610189302172237efca8969b8796bf46a5: Status 404 returned error can't find the container with id ed46cca9cf048540fddfed87da047e610189302172237efca8969b8796bf46a5 Jan 05 20:25:00 crc kubenswrapper[4754]: I0105 20:25:00.475462 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd72b9pp"] Jan 05 20:25:00 crc kubenswrapper[4754]: W0105 20:25:00.507808 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfed06176_d7ad_4373_84df_204b6fdbf5cf.slice/crio-56d6300830cff7c2076e634ab8b06b260cf5619501f13ffa0ed2a0bd0178a563 WatchSource:0}: Error finding container 56d6300830cff7c2076e634ab8b06b260cf5619501f13ffa0ed2a0bd0178a563: Status 404 returned error can't find the container with id 56d6300830cff7c2076e634ab8b06b260cf5619501f13ffa0ed2a0bd0178a563 Jan 05 20:25:00 crc kubenswrapper[4754]: I0105 20:25:00.550845 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-6d99759cf-2g75d"] Jan 05 20:25:01 crc kubenswrapper[4754]: I0105 20:25:01.068874 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-brnwf" event={"ID":"dd93e799-6591-41d5-988a-18cc6d8c836d","Type":"ContainerStarted","Data":"b25c05ad220d914d300af45ef6a721af4089545b621dbfb2be14dc7924c61c8b"} Jan 05 20:25:01 crc kubenswrapper[4754]: I0105 20:25:01.070173 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-brnwf" Jan 05 20:25:01 crc kubenswrapper[4754]: I0105 20:25:01.086745 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-8t5gl" event={"ID":"83a7e6b7-db24-4f2f-988d-ed13a27a06af","Type":"ContainerStarted","Data":"7289e791757db9c622245644116f5080924dd154b8417d3f3b421f2d99b37244"} Jan 05 20:25:01 crc kubenswrapper[4754]: I0105 20:25:01.088040 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-8t5gl" Jan 05 20:25:01 crc kubenswrapper[4754]: I0105 
20:25:01.099257 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-h56wh" event={"ID":"6d71c5c9-f75a-475f-880c-d234d43ad7d9","Type":"ContainerStarted","Data":"c0166e8e12d4ab5b79585b0bd871ca8bc3a4da94b180c243681eb616a5f8ac5f"} Jan 05 20:25:01 crc kubenswrapper[4754]: I0105 20:25:01.100229 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-h56wh" Jan 05 20:25:01 crc kubenswrapper[4754]: I0105 20:25:01.109179 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-brnwf" podStartSLOduration=3.934517829 podStartE2EDuration="46.109163688s" podCreationTimestamp="2026-01-05 20:24:15 +0000 UTC" firstStartedPulling="2026-01-05 20:24:17.728379374 +0000 UTC m=+1144.437563248" lastFinishedPulling="2026-01-05 20:24:59.903025223 +0000 UTC m=+1186.612209107" observedRunningTime="2026-01-05 20:25:01.106704114 +0000 UTC m=+1187.815887998" watchObservedRunningTime="2026-01-05 20:25:01.109163688 +0000 UTC m=+1187.818347562" Jan 05 20:25:01 crc kubenswrapper[4754]: I0105 20:25:01.123644 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-gj5xk" event={"ID":"8df02427-4d10-41bb-9798-82cf7b8bca3e","Type":"ContainerStarted","Data":"f3871a2568fdfc53c29662aa73b73d4e3929b580d806eece1d907e492dace215"} Jan 05 20:25:01 crc kubenswrapper[4754]: I0105 20:25:01.124427 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-gj5xk" Jan 05 20:25:01 crc kubenswrapper[4754]: I0105 20:25:01.130883 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-2g75d" 
event={"ID":"736d23ce-6bc0-439b-b1ff-86aad6363c2a","Type":"ContainerStarted","Data":"bc3f73482652d61f3bf679881e65f24b58641be17ecade402595de639494ef5b"} Jan 05 20:25:01 crc kubenswrapper[4754]: I0105 20:25:01.140024 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-8t5gl" podStartSLOduration=9.501607936 podStartE2EDuration="46.140010006s" podCreationTimestamp="2026-01-05 20:24:15 +0000 UTC" firstStartedPulling="2026-01-05 20:24:17.576320244 +0000 UTC m=+1144.285504118" lastFinishedPulling="2026-01-05 20:24:54.214722294 +0000 UTC m=+1180.923906188" observedRunningTime="2026-01-05 20:25:01.135846397 +0000 UTC m=+1187.845030271" watchObservedRunningTime="2026-01-05 20:25:01.140010006 +0000 UTC m=+1187.849193880" Jan 05 20:25:01 crc kubenswrapper[4754]: I0105 20:25:01.155527 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-s4n44" event={"ID":"77f4456d-e6a6-466a-a74c-5276e4951784","Type":"ContainerStarted","Data":"3fe1bfe9f98fa4537183d205755d99138021e7c45fe493ece4d2ab1654c7ee6d"} Jan 05 20:25:01 crc kubenswrapper[4754]: I0105 20:25:01.156888 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-s4n44" Jan 05 20:25:01 crc kubenswrapper[4754]: I0105 20:25:01.158025 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-h56wh" podStartSLOduration=7.843549221 podStartE2EDuration="46.158004547s" podCreationTimestamp="2026-01-05 20:24:15 +0000 UTC" firstStartedPulling="2026-01-05 20:24:18.855336956 +0000 UTC m=+1145.564520830" lastFinishedPulling="2026-01-05 20:24:57.169792242 +0000 UTC m=+1183.878976156" observedRunningTime="2026-01-05 20:25:01.15506795 +0000 UTC m=+1187.864251824" watchObservedRunningTime="2026-01-05 20:25:01.158004547 
+0000 UTC m=+1187.867188421" Jan 05 20:25:01 crc kubenswrapper[4754]: I0105 20:25:01.165196 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-h2wmj" event={"ID":"2aeeabff-cc4c-49b1-a895-c21ae9d43e3d","Type":"ContainerStarted","Data":"3fc6c0742eb5f1eec08746ae99510c1f7e805b282004ba2ffd921e20c625782c"} Jan 05 20:25:01 crc kubenswrapper[4754]: I0105 20:25:01.166373 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-h2wmj" Jan 05 20:25:01 crc kubenswrapper[4754]: I0105 20:25:01.203083 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-swt4w" event={"ID":"0cd346d8-d14a-404e-b2fa-16fc917e6886","Type":"ContainerStarted","Data":"89b780d57f3f9ada6b5c52bb2931bd3337a077dceecc5fb8c4e4af44884daf26"} Jan 05 20:25:01 crc kubenswrapper[4754]: I0105 20:25:01.209594 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-swt4w" Jan 05 20:25:01 crc kubenswrapper[4754]: I0105 20:25:01.223972 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-7b945" event={"ID":"f1a3a024-3293-4e7b-b1cd-c93c914c190e","Type":"ContainerStarted","Data":"1792176f2c2f6ade4da37f27144092dc251cc279f5bd692065f4130c10ed7ec6"} Jan 05 20:25:01 crc kubenswrapper[4754]: I0105 20:25:01.224846 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-7b945" Jan 05 20:25:01 crc kubenswrapper[4754]: I0105 20:25:01.237837 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-gj5xk" podStartSLOduration=5.264292661 podStartE2EDuration="46.237814846s" 
podCreationTimestamp="2026-01-05 20:24:15 +0000 UTC" firstStartedPulling="2026-01-05 20:24:18.929474777 +0000 UTC m=+1145.638658651" lastFinishedPulling="2026-01-05 20:24:59.902996952 +0000 UTC m=+1186.612180836" observedRunningTime="2026-01-05 20:25:01.191602036 +0000 UTC m=+1187.901595502" watchObservedRunningTime="2026-01-05 20:25:01.237814846 +0000 UTC m=+1187.946998720" Jan 05 20:25:01 crc kubenswrapper[4754]: I0105 20:25:01.238398 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd72b9pp" event={"ID":"fed06176-d7ad-4373-84df-204b6fdbf5cf","Type":"ContainerStarted","Data":"56d6300830cff7c2076e634ab8b06b260cf5619501f13ffa0ed2a0bd0178a563"} Jan 05 20:25:01 crc kubenswrapper[4754]: I0105 20:25:01.257637 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-dqks7" event={"ID":"f289c3c4-ad02-4022-ac22-239133f6c1ca","Type":"ContainerStarted","Data":"4721850aa838ac39285062a21b3d7034411f0293936474f154309cb922c6d7f5"} Jan 05 20:25:01 crc kubenswrapper[4754]: I0105 20:25:01.258007 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-dqks7" Jan 05 20:25:01 crc kubenswrapper[4754]: I0105 20:25:01.312459 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-h2wmj" podStartSLOduration=11.105627527 podStartE2EDuration="46.31243284s" podCreationTimestamp="2026-01-05 20:24:15 +0000 UTC" firstStartedPulling="2026-01-05 20:24:17.501771522 +0000 UTC m=+1144.210955406" lastFinishedPulling="2026-01-05 20:24:52.708576805 +0000 UTC m=+1179.417760719" observedRunningTime="2026-01-05 20:25:01.250030166 +0000 UTC m=+1187.959214040" watchObservedRunningTime="2026-01-05 20:25:01.31243284 +0000 UTC m=+1188.021616714" Jan 05 20:25:01 crc 
kubenswrapper[4754]: I0105 20:25:01.365727 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-jngdn" event={"ID":"db5f9ab8-2422-439c-a857-23f918cfa919","Type":"ContainerStarted","Data":"98915f5c1c9eba2d0227e869ca4df71966abb84f11826d2ef120f1bc18ab5bc7"} Jan 05 20:25:01 crc kubenswrapper[4754]: I0105 20:25:01.367047 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-jngdn" Jan 05 20:25:01 crc kubenswrapper[4754]: I0105 20:25:01.376716 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-s4n44" podStartSLOduration=11.028781455 podStartE2EDuration="46.376676391s" podCreationTimestamp="2026-01-05 20:24:15 +0000 UTC" firstStartedPulling="2026-01-05 20:24:18.86884381 +0000 UTC m=+1145.578027674" lastFinishedPulling="2026-01-05 20:24:54.216738726 +0000 UTC m=+1180.925922610" observedRunningTime="2026-01-05 20:25:01.318699874 +0000 UTC m=+1188.027883748" watchObservedRunningTime="2026-01-05 20:25:01.376676391 +0000 UTC m=+1188.085860265" Jan 05 20:25:01 crc kubenswrapper[4754]: I0105 20:25:01.398507 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-74f9c55c9c-f9rnv" event={"ID":"a078215d-9fb5-413f-b542-ca5b3c6fb296","Type":"ContainerStarted","Data":"cb5f702e97b13ecd5d425cbda1798976a5fac445b3b8187b8b546fc975f0c619"} Jan 05 20:25:01 crc kubenswrapper[4754]: I0105 20:25:01.398717 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-74f9c55c9c-f9rnv" event={"ID":"a078215d-9fb5-413f-b542-ca5b3c6fb296","Type":"ContainerStarted","Data":"ed46cca9cf048540fddfed87da047e610189302172237efca8969b8796bf46a5"} Jan 05 20:25:01 crc kubenswrapper[4754]: I0105 20:25:01.399580 4754 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-74f9c55c9c-f9rnv" Jan 05 20:25:01 crc kubenswrapper[4754]: I0105 20:25:01.417613 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-dqks7" podStartSLOduration=4.0189665 podStartE2EDuration="46.417576952s" podCreationTimestamp="2026-01-05 20:24:15 +0000 UTC" firstStartedPulling="2026-01-05 20:24:17.506621739 +0000 UTC m=+1144.215805623" lastFinishedPulling="2026-01-05 20:24:59.905232191 +0000 UTC m=+1186.614416075" observedRunningTime="2026-01-05 20:25:01.367512981 +0000 UTC m=+1188.076696845" watchObservedRunningTime="2026-01-05 20:25:01.417576952 +0000 UTC m=+1188.126760816" Jan 05 20:25:01 crc kubenswrapper[4754]: I0105 20:25:01.420617 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-7b945" podStartSLOduration=4.37270686 podStartE2EDuration="46.420608891s" podCreationTimestamp="2026-01-05 20:24:15 +0000 UTC" firstStartedPulling="2026-01-05 20:24:17.74082654 +0000 UTC m=+1144.450010414" lastFinishedPulling="2026-01-05 20:24:59.788728531 +0000 UTC m=+1186.497912445" observedRunningTime="2026-01-05 20:25:01.409220503 +0000 UTC m=+1188.118404377" watchObservedRunningTime="2026-01-05 20:25:01.420608891 +0000 UTC m=+1188.129792765" Jan 05 20:25:01 crc kubenswrapper[4754]: I0105 20:25:01.424663 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-hmwzd" event={"ID":"a6907240-bf3c-4f9a-b7c8-4fbdf3174c8a","Type":"ContainerStarted","Data":"560069ea2934d56bb0918b7aec8f9a3d5b11bcaf01fbb2d2cd5f7fe254427969"} Jan 05 20:25:01 crc kubenswrapper[4754]: I0105 20:25:01.425536 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-hmwzd" Jan 05 20:25:01 crc 
kubenswrapper[4754]: I0105 20:25:01.468824 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-568985c78-st6w9" event={"ID":"91877573-8199-4055-988f-96bd6469af4f","Type":"ContainerStarted","Data":"a8b1ee1978625379e7c408134dad2e0d1d5a9f2c89c14cdb42ed4bacac39a1a9"} Jan 05 20:25:01 crc kubenswrapper[4754]: I0105 20:25:01.469920 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-568985c78-st6w9" Jan 05 20:25:01 crc kubenswrapper[4754]: I0105 20:25:01.495515 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-swt4w" podStartSLOduration=5.562329703 podStartE2EDuration="46.495469981s" podCreationTimestamp="2026-01-05 20:24:15 +0000 UTC" firstStartedPulling="2026-01-05 20:24:18.855549162 +0000 UTC m=+1145.564733036" lastFinishedPulling="2026-01-05 20:24:59.78868944 +0000 UTC m=+1186.497873314" observedRunningTime="2026-01-05 20:25:01.455632208 +0000 UTC m=+1188.164816082" watchObservedRunningTime="2026-01-05 20:25:01.495469981 +0000 UTC m=+1188.204653855" Jan 05 20:25:01 crc kubenswrapper[4754]: I0105 20:25:01.582299 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-74f9c55c9c-f9rnv" podStartSLOduration=45.582266853 podStartE2EDuration="45.582266853s" podCreationTimestamp="2026-01-05 20:24:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:25:01.568089542 +0000 UTC m=+1188.277273416" watchObservedRunningTime="2026-01-05 20:25:01.582266853 +0000 UTC m=+1188.291450717" Jan 05 20:25:01 crc kubenswrapper[4754]: I0105 20:25:01.623275 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-jngdn" podStartSLOduration=5.611971802 podStartE2EDuration="46.623255596s" podCreationTimestamp="2026-01-05 20:24:15 +0000 UTC" firstStartedPulling="2026-01-05 20:24:18.871574281 +0000 UTC m=+1145.580758155" lastFinishedPulling="2026-01-05 20:24:59.882858085 +0000 UTC m=+1186.592041949" observedRunningTime="2026-01-05 20:25:01.617730082 +0000 UTC m=+1188.326913956" watchObservedRunningTime="2026-01-05 20:25:01.623255596 +0000 UTC m=+1188.332439460" Jan 05 20:25:01 crc kubenswrapper[4754]: I0105 20:25:01.644691 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-hmwzd" podStartSLOduration=5.569522431 podStartE2EDuration="46.644672567s" podCreationTimestamp="2026-01-05 20:24:15 +0000 UTC" firstStartedPulling="2026-01-05 20:24:18.806357904 +0000 UTC m=+1145.515541778" lastFinishedPulling="2026-01-05 20:24:59.88150803 +0000 UTC m=+1186.590691914" observedRunningTime="2026-01-05 20:25:01.641544745 +0000 UTC m=+1188.350728619" watchObservedRunningTime="2026-01-05 20:25:01.644672567 +0000 UTC m=+1188.353856441" Jan 05 20:25:01 crc kubenswrapper[4754]: I0105 20:25:01.673938 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-568985c78-st6w9" podStartSLOduration=5.742697005 podStartE2EDuration="46.673920913s" podCreationTimestamp="2026-01-05 20:24:15 +0000 UTC" firstStartedPulling="2026-01-05 20:24:18.968310694 +0000 UTC m=+1145.677494568" lastFinishedPulling="2026-01-05 20:24:59.899534592 +0000 UTC m=+1186.608718476" observedRunningTime="2026-01-05 20:25:01.671211332 +0000 UTC m=+1188.380395206" watchObservedRunningTime="2026-01-05 20:25:01.673920913 +0000 UTC m=+1188.383104787" Jan 05 20:25:02 crc kubenswrapper[4754]: I0105 20:25:02.493171 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-6lf69" event={"ID":"1f664632-a6e1-491d-b0cf-be1717a6d28b","Type":"ContainerStarted","Data":"ed45f12aabd770e20db5bc2ee9b22e82e5c12253fd1e6dbc889bf6521f9ed742"} Jan 05 20:25:02 crc kubenswrapper[4754]: I0105 20:25:02.532778 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-6lf69" podStartSLOduration=3.890747733 podStartE2EDuration="47.532740175s" podCreationTimestamp="2026-01-05 20:24:15 +0000 UTC" firstStartedPulling="2026-01-05 20:24:17.523735177 +0000 UTC m=+1144.232919051" lastFinishedPulling="2026-01-05 20:25:01.165727619 +0000 UTC m=+1187.874911493" observedRunningTime="2026-01-05 20:25:02.520473364 +0000 UTC m=+1189.229657238" watchObservedRunningTime="2026-01-05 20:25:02.532740175 +0000 UTC m=+1189.241924049" Jan 05 20:25:03 crc kubenswrapper[4754]: I0105 20:25:03.500836 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-jcntk" event={"ID":"82f028d6-51a7-461a-ae7d-cd2da5f47afb","Type":"ContainerStarted","Data":"24624dc865f23b6f5bc7df4177f1d92b1f1c3263b21130db4224d697e905c692"} Jan 05 20:25:03 crc kubenswrapper[4754]: I0105 20:25:03.520693 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-jcntk" podStartSLOduration=5.015157898 podStartE2EDuration="48.520672697s" podCreationTimestamp="2026-01-05 20:24:15 +0000 UTC" firstStartedPulling="2026-01-05 20:24:18.807185746 +0000 UTC m=+1145.516369620" lastFinishedPulling="2026-01-05 20:25:02.312700545 +0000 UTC m=+1189.021884419" observedRunningTime="2026-01-05 20:25:03.517057322 +0000 UTC m=+1190.226241196" watchObservedRunningTime="2026-01-05 20:25:03.520672697 +0000 UTC m=+1190.229856581" Jan 05 20:25:05 crc kubenswrapper[4754]: I0105 20:25:05.522671 4754 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd72b9pp" event={"ID":"fed06176-d7ad-4373-84df-204b6fdbf5cf","Type":"ContainerStarted","Data":"536cba47c873bc028f6648e962165f695729ebcbe96069f72858d8758fcd080b"} Jan 05 20:25:05 crc kubenswrapper[4754]: I0105 20:25:05.523623 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd72b9pp" Jan 05 20:25:05 crc kubenswrapper[4754]: I0105 20:25:05.525600 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-2g75d" event={"ID":"736d23ce-6bc0-439b-b1ff-86aad6363c2a","Type":"ContainerStarted","Data":"4a99c4667c62c4980348e28c5bf9ac8f03a478df5265ad77f3cf067934916134"} Jan 05 20:25:05 crc kubenswrapper[4754]: I0105 20:25:05.525836 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-2g75d" Jan 05 20:25:05 crc kubenswrapper[4754]: I0105 20:25:05.555419 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd72b9pp" podStartSLOduration=45.843806941 podStartE2EDuration="50.555386683s" podCreationTimestamp="2026-01-05 20:24:15 +0000 UTC" firstStartedPulling="2026-01-05 20:25:00.513259478 +0000 UTC m=+1187.222443342" lastFinishedPulling="2026-01-05 20:25:05.22483921 +0000 UTC m=+1191.934023084" observedRunningTime="2026-01-05 20:25:05.552240691 +0000 UTC m=+1192.261424565" watchObservedRunningTime="2026-01-05 20:25:05.555386683 +0000 UTC m=+1192.264570597" Jan 05 20:25:05 crc kubenswrapper[4754]: I0105 20:25:05.987069 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-h2wmj" Jan 05 20:25:05 crc kubenswrapper[4754]: I0105 20:25:05.988523 4754 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-8t5gl" Jan 05 20:25:06 crc kubenswrapper[4754]: I0105 20:25:06.010273 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-2g75d" podStartSLOduration=46.369251588 podStartE2EDuration="51.010250641s" podCreationTimestamp="2026-01-05 20:24:15 +0000 UTC" firstStartedPulling="2026-01-05 20:25:00.588033526 +0000 UTC m=+1187.297217400" lastFinishedPulling="2026-01-05 20:25:05.229032539 +0000 UTC m=+1191.938216453" observedRunningTime="2026-01-05 20:25:05.580258274 +0000 UTC m=+1192.289442188" watchObservedRunningTime="2026-01-05 20:25:06.010250641 +0000 UTC m=+1192.719434515" Jan 05 20:25:06 crc kubenswrapper[4754]: I0105 20:25:06.064985 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-6lf69" Jan 05 20:25:06 crc kubenswrapper[4754]: I0105 20:25:06.078042 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-6lf69" Jan 05 20:25:06 crc kubenswrapper[4754]: I0105 20:25:06.099461 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-7b945" Jan 05 20:25:06 crc kubenswrapper[4754]: I0105 20:25:06.117782 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-dqks7" Jan 05 20:25:06 crc kubenswrapper[4754]: I0105 20:25:06.228687 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-brnwf" Jan 05 20:25:06 crc kubenswrapper[4754]: I0105 20:25:06.268690 4754 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-swt4w" Jan 05 20:25:06 crc kubenswrapper[4754]: I0105 20:25:06.285517 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-gj5xk" Jan 05 20:25:06 crc kubenswrapper[4754]: I0105 20:25:06.292664 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-568985c78-st6w9" Jan 05 20:25:06 crc kubenswrapper[4754]: I0105 20:25:06.307243 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-h56wh" Jan 05 20:25:06 crc kubenswrapper[4754]: I0105 20:25:06.369395 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-jcntk" Jan 05 20:25:06 crc kubenswrapper[4754]: I0105 20:25:06.374275 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-jngdn" Jan 05 20:25:06 crc kubenswrapper[4754]: I0105 20:25:06.410762 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-hmwzd" Jan 05 20:25:06 crc kubenswrapper[4754]: I0105 20:25:06.507088 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-s4n44" Jan 05 20:25:08 crc kubenswrapper[4754]: I0105 20:25:08.570794 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-hdh6j" event={"ID":"83cc207a-0725-4775-b2f7-93c71985ba1e","Type":"ContainerStarted","Data":"a065810edd7a850ba5d2c09ae4c596683936333d62fd02aaf78f68cdcb8aea68"} Jan 05 20:25:08 crc kubenswrapper[4754]: 
I0105 20:25:08.571460 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-hdh6j" Jan 05 20:25:08 crc kubenswrapper[4754]: I0105 20:25:08.572814 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-69j2s" event={"ID":"983e4f4a-fe90-4460-ad97-b6955a888933","Type":"ContainerStarted","Data":"8f097002c9f1cb4f5e7f91dd654d1b087fd4eaf0917a1610b2b121066aeb040d"} Jan 05 20:25:08 crc kubenswrapper[4754]: I0105 20:25:08.573120 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-69j2s" Jan 05 20:25:08 crc kubenswrapper[4754]: E0105 20:25:08.590137 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:19345236c6b6bd5ae772e336fa6065c6e94c8990d1bf05d30073ddb95ffffb4d\\\"\"" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-lrhk6" podUID="92928a21-7bbe-44b9-9d2b-2fcce8d0dd1f" Jan 05 20:25:08 crc kubenswrapper[4754]: I0105 20:25:08.599817 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-hdh6j" podStartSLOduration=4.161734667 podStartE2EDuration="53.59979425s" podCreationTimestamp="2026-01-05 20:24:15 +0000 UTC" firstStartedPulling="2026-01-05 20:24:18.85814622 +0000 UTC m=+1145.567330094" lastFinishedPulling="2026-01-05 20:25:08.296205803 +0000 UTC m=+1195.005389677" observedRunningTime="2026-01-05 20:25:08.591723499 +0000 UTC m=+1195.300907393" watchObservedRunningTime="2026-01-05 20:25:08.59979425 +0000 UTC m=+1195.308978124" Jan 05 20:25:08 crc kubenswrapper[4754]: I0105 20:25:08.625745 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-69j2s" podStartSLOduration=4.313824369 podStartE2EDuration="53.625720079s" podCreationTimestamp="2026-01-05 20:24:15 +0000 UTC" firstStartedPulling="2026-01-05 20:24:18.80658569 +0000 UTC m=+1145.515769564" lastFinishedPulling="2026-01-05 20:25:08.11848138 +0000 UTC m=+1194.827665274" observedRunningTime="2026-01-05 20:25:08.620060981 +0000 UTC m=+1195.329244865" watchObservedRunningTime="2026-01-05 20:25:08.625720079 +0000 UTC m=+1195.334903993" Jan 05 20:25:09 crc kubenswrapper[4754]: E0105 20:25:09.592533 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:4e3d234c1398039c2593611f7b0fd2a6b284cafb1563e6737876a265b9af42b6\\\"\"" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-944qj" podUID="4d09717a-7822-46ae-8192-62aa7305304b" Jan 05 20:25:11 crc kubenswrapper[4754]: I0105 20:25:11.665170 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-2g75d" Jan 05 20:25:12 crc kubenswrapper[4754]: I0105 20:25:12.160072 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-74f9c55c9c-f9rnv" Jan 05 20:25:12 crc kubenswrapper[4754]: I0105 20:25:12.226080 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd72b9pp" Jan 05 20:25:12 crc kubenswrapper[4754]: I0105 20:25:12.615021 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vfsjd" event={"ID":"29d3a96c-7dee-4a63-945c-3fef7cdcc7e7","Type":"ContainerStarted","Data":"63537bfe51d55a5b54eb18944d0ef8460a8a5940f54d6d4aa803ec1032e52ae7"} Jan 
05 20:25:12 crc kubenswrapper[4754]: I0105 20:25:12.641009 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vfsjd" podStartSLOduration=3.426534549 podStartE2EDuration="56.640981181s" podCreationTimestamp="2026-01-05 20:24:16 +0000 UTC" firstStartedPulling="2026-01-05 20:24:18.880923226 +0000 UTC m=+1145.590107100" lastFinishedPulling="2026-01-05 20:25:12.095369848 +0000 UTC m=+1198.804553732" observedRunningTime="2026-01-05 20:25:12.636789742 +0000 UTC m=+1199.345973656" watchObservedRunningTime="2026-01-05 20:25:12.640981181 +0000 UTC m=+1199.350165095" Jan 05 20:25:16 crc kubenswrapper[4754]: I0105 20:25:16.245207 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-69j2s" Jan 05 20:25:16 crc kubenswrapper[4754]: I0105 20:25:16.280469 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-hdh6j" Jan 05 20:25:16 crc kubenswrapper[4754]: I0105 20:25:16.372937 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-jcntk" Jan 05 20:25:16 crc kubenswrapper[4754]: I0105 20:25:16.690400 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-688488f44f-62gsk" event={"ID":"4b33baa5-64bb-4df7-ac22-925d718f9d60","Type":"ContainerStarted","Data":"6d0d72f1d3fbf867594c5e7178177695f6e11bd2b57c31f8205de7ce2d3e981e"} Jan 05 20:25:16 crc kubenswrapper[4754]: I0105 20:25:16.690593 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-688488f44f-62gsk" Jan 05 20:25:16 crc kubenswrapper[4754]: I0105 20:25:16.739075 4754 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack-operators/telemetry-operator-controller-manager-688488f44f-62gsk" podStartSLOduration=4.936534631 podStartE2EDuration="1m1.739055342s" podCreationTimestamp="2026-01-05 20:24:15 +0000 UTC" firstStartedPulling="2026-01-05 20:24:18.854996998 +0000 UTC m=+1145.564180872" lastFinishedPulling="2026-01-05 20:25:15.657517699 +0000 UTC m=+1202.366701583" observedRunningTime="2026-01-05 20:25:16.736142376 +0000 UTC m=+1203.445326250" watchObservedRunningTime="2026-01-05 20:25:16.739055342 +0000 UTC m=+1203.448239216" Jan 05 20:25:21 crc kubenswrapper[4754]: I0105 20:25:21.743750 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-lrhk6" event={"ID":"92928a21-7bbe-44b9-9d2b-2fcce8d0dd1f","Type":"ContainerStarted","Data":"d8011ec926523491c5e9b357df426940fc1e414b8809462800a37598c11515e4"} Jan 05 20:25:21 crc kubenswrapper[4754]: I0105 20:25:21.744914 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-lrhk6" Jan 05 20:25:21 crc kubenswrapper[4754]: I0105 20:25:21.770468 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-lrhk6" podStartSLOduration=4.579010322 podStartE2EDuration="1m6.770443515s" podCreationTimestamp="2026-01-05 20:24:15 +0000 UTC" firstStartedPulling="2026-01-05 20:24:18.940916777 +0000 UTC m=+1145.650100651" lastFinishedPulling="2026-01-05 20:25:21.13234994 +0000 UTC m=+1207.841533844" observedRunningTime="2026-01-05 20:25:21.768129834 +0000 UTC m=+1208.477313748" watchObservedRunningTime="2026-01-05 20:25:21.770443515 +0000 UTC m=+1208.479627429" Jan 05 20:25:23 crc kubenswrapper[4754]: I0105 20:25:23.770417 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-944qj" 
event={"ID":"4d09717a-7822-46ae-8192-62aa7305304b","Type":"ContainerStarted","Data":"aa3b9c76564db737b1a5def2e85c6632cf4433e4a91eb2dec392320b0bf2f098"} Jan 05 20:25:23 crc kubenswrapper[4754]: I0105 20:25:23.771278 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-944qj" Jan 05 20:25:23 crc kubenswrapper[4754]: I0105 20:25:23.801023 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-944qj" podStartSLOduration=4.371935431 podStartE2EDuration="1m8.801000302s" podCreationTimestamp="2026-01-05 20:24:15 +0000 UTC" firstStartedPulling="2026-01-05 20:24:18.955045757 +0000 UTC m=+1145.664229631" lastFinishedPulling="2026-01-05 20:25:23.384110628 +0000 UTC m=+1210.093294502" observedRunningTime="2026-01-05 20:25:23.792798007 +0000 UTC m=+1210.501981921" watchObservedRunningTime="2026-01-05 20:25:23.801000302 +0000 UTC m=+1210.510184196" Jan 05 20:25:26 crc kubenswrapper[4754]: I0105 20:25:26.128003 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-lrhk6" Jan 05 20:25:26 crc kubenswrapper[4754]: I0105 20:25:26.425450 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-688488f44f-62gsk" Jan 05 20:25:36 crc kubenswrapper[4754]: I0105 20:25:36.443369 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-944qj" Jan 05 20:25:52 crc kubenswrapper[4754]: I0105 20:25:52.750178 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-rw7pc"] Jan 05 20:25:52 crc kubenswrapper[4754]: I0105 20:25:52.755666 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-rw7pc" Jan 05 20:25:52 crc kubenswrapper[4754]: I0105 20:25:52.758924 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 05 20:25:52 crc kubenswrapper[4754]: I0105 20:25:52.759130 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-4wvln" Jan 05 20:25:52 crc kubenswrapper[4754]: I0105 20:25:52.760243 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 05 20:25:52 crc kubenswrapper[4754]: I0105 20:25:52.760870 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 05 20:25:52 crc kubenswrapper[4754]: I0105 20:25:52.773387 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-rw7pc"] Jan 05 20:25:52 crc kubenswrapper[4754]: I0105 20:25:52.807580 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4slht\" (UniqueName: \"kubernetes.io/projected/c0c7fcb5-1cdb-4515-b42f-96b7ea257fdb-kube-api-access-4slht\") pod \"dnsmasq-dns-675f4bcbfc-rw7pc\" (UID: \"c0c7fcb5-1cdb-4515-b42f-96b7ea257fdb\") " pod="openstack/dnsmasq-dns-675f4bcbfc-rw7pc" Jan 05 20:25:52 crc kubenswrapper[4754]: I0105 20:25:52.807631 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0c7fcb5-1cdb-4515-b42f-96b7ea257fdb-config\") pod \"dnsmasq-dns-675f4bcbfc-rw7pc\" (UID: \"c0c7fcb5-1cdb-4515-b42f-96b7ea257fdb\") " pod="openstack/dnsmasq-dns-675f4bcbfc-rw7pc" Jan 05 20:25:52 crc kubenswrapper[4754]: I0105 20:25:52.830912 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-bz48l"] Jan 05 20:25:52 crc kubenswrapper[4754]: I0105 20:25:52.832347 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-bz48l" Jan 05 20:25:52 crc kubenswrapper[4754]: I0105 20:25:52.835409 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 05 20:25:52 crc kubenswrapper[4754]: I0105 20:25:52.854559 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-bz48l"] Jan 05 20:25:52 crc kubenswrapper[4754]: I0105 20:25:52.909463 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4slht\" (UniqueName: \"kubernetes.io/projected/c0c7fcb5-1cdb-4515-b42f-96b7ea257fdb-kube-api-access-4slht\") pod \"dnsmasq-dns-675f4bcbfc-rw7pc\" (UID: \"c0c7fcb5-1cdb-4515-b42f-96b7ea257fdb\") " pod="openstack/dnsmasq-dns-675f4bcbfc-rw7pc" Jan 05 20:25:52 crc kubenswrapper[4754]: I0105 20:25:52.910412 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0c7fcb5-1cdb-4515-b42f-96b7ea257fdb-config\") pod \"dnsmasq-dns-675f4bcbfc-rw7pc\" (UID: \"c0c7fcb5-1cdb-4515-b42f-96b7ea257fdb\") " pod="openstack/dnsmasq-dns-675f4bcbfc-rw7pc" Jan 05 20:25:52 crc kubenswrapper[4754]: I0105 20:25:52.911438 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0c7fcb5-1cdb-4515-b42f-96b7ea257fdb-config\") pod \"dnsmasq-dns-675f4bcbfc-rw7pc\" (UID: \"c0c7fcb5-1cdb-4515-b42f-96b7ea257fdb\") " pod="openstack/dnsmasq-dns-675f4bcbfc-rw7pc" Jan 05 20:25:52 crc kubenswrapper[4754]: I0105 20:25:52.938002 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4slht\" (UniqueName: \"kubernetes.io/projected/c0c7fcb5-1cdb-4515-b42f-96b7ea257fdb-kube-api-access-4slht\") pod \"dnsmasq-dns-675f4bcbfc-rw7pc\" (UID: \"c0c7fcb5-1cdb-4515-b42f-96b7ea257fdb\") " pod="openstack/dnsmasq-dns-675f4bcbfc-rw7pc" Jan 05 20:25:53 crc kubenswrapper[4754]: I0105 20:25:53.018702 4754 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44ed7873-e455-4135-a0b8-5a66a97d957d-config\") pod \"dnsmasq-dns-78dd6ddcc-bz48l\" (UID: \"44ed7873-e455-4135-a0b8-5a66a97d957d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-bz48l" Jan 05 20:25:53 crc kubenswrapper[4754]: I0105 20:25:53.018750 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44ed7873-e455-4135-a0b8-5a66a97d957d-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-bz48l\" (UID: \"44ed7873-e455-4135-a0b8-5a66a97d957d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-bz48l" Jan 05 20:25:53 crc kubenswrapper[4754]: I0105 20:25:53.018819 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pxpj\" (UniqueName: \"kubernetes.io/projected/44ed7873-e455-4135-a0b8-5a66a97d957d-kube-api-access-8pxpj\") pod \"dnsmasq-dns-78dd6ddcc-bz48l\" (UID: \"44ed7873-e455-4135-a0b8-5a66a97d957d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-bz48l" Jan 05 20:25:53 crc kubenswrapper[4754]: I0105 20:25:53.079824 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-rw7pc" Jan 05 20:25:53 crc kubenswrapper[4754]: I0105 20:25:53.120390 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44ed7873-e455-4135-a0b8-5a66a97d957d-config\") pod \"dnsmasq-dns-78dd6ddcc-bz48l\" (UID: \"44ed7873-e455-4135-a0b8-5a66a97d957d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-bz48l" Jan 05 20:25:53 crc kubenswrapper[4754]: I0105 20:25:53.120433 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44ed7873-e455-4135-a0b8-5a66a97d957d-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-bz48l\" (UID: \"44ed7873-e455-4135-a0b8-5a66a97d957d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-bz48l" Jan 05 20:25:53 crc kubenswrapper[4754]: I0105 20:25:53.120480 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pxpj\" (UniqueName: \"kubernetes.io/projected/44ed7873-e455-4135-a0b8-5a66a97d957d-kube-api-access-8pxpj\") pod \"dnsmasq-dns-78dd6ddcc-bz48l\" (UID: \"44ed7873-e455-4135-a0b8-5a66a97d957d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-bz48l" Jan 05 20:25:53 crc kubenswrapper[4754]: I0105 20:25:53.121179 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44ed7873-e455-4135-a0b8-5a66a97d957d-config\") pod \"dnsmasq-dns-78dd6ddcc-bz48l\" (UID: \"44ed7873-e455-4135-a0b8-5a66a97d957d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-bz48l" Jan 05 20:25:53 crc kubenswrapper[4754]: I0105 20:25:53.121274 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44ed7873-e455-4135-a0b8-5a66a97d957d-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-bz48l\" (UID: \"44ed7873-e455-4135-a0b8-5a66a97d957d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-bz48l" Jan 05 20:25:53 crc kubenswrapper[4754]: I0105 
20:25:53.139539 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pxpj\" (UniqueName: \"kubernetes.io/projected/44ed7873-e455-4135-a0b8-5a66a97d957d-kube-api-access-8pxpj\") pod \"dnsmasq-dns-78dd6ddcc-bz48l\" (UID: \"44ed7873-e455-4135-a0b8-5a66a97d957d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-bz48l" Jan 05 20:25:53 crc kubenswrapper[4754]: I0105 20:25:53.148829 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-bz48l" Jan 05 20:25:53 crc kubenswrapper[4754]: I0105 20:25:53.575987 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-rw7pc"] Jan 05 20:25:53 crc kubenswrapper[4754]: W0105 20:25:53.653481 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44ed7873_e455_4135_a0b8_5a66a97d957d.slice/crio-ba29f46b213707ec1251dd6587251e460efd4afb7e6e48d0628f14357abd0e55 WatchSource:0}: Error finding container ba29f46b213707ec1251dd6587251e460efd4afb7e6e48d0628f14357abd0e55: Status 404 returned error can't find the container with id ba29f46b213707ec1251dd6587251e460efd4afb7e6e48d0628f14357abd0e55 Jan 05 20:25:53 crc kubenswrapper[4754]: I0105 20:25:53.659735 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-bz48l"] Jan 05 20:25:54 crc kubenswrapper[4754]: I0105 20:25:54.128131 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-rw7pc" event={"ID":"c0c7fcb5-1cdb-4515-b42f-96b7ea257fdb","Type":"ContainerStarted","Data":"28f9ff51a56c669290588619f4bc22d1e1e59ac50a883969d84aea65901217ac"} Jan 05 20:25:54 crc kubenswrapper[4754]: I0105 20:25:54.129754 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-bz48l" 
event={"ID":"44ed7873-e455-4135-a0b8-5a66a97d957d","Type":"ContainerStarted","Data":"ba29f46b213707ec1251dd6587251e460efd4afb7e6e48d0628f14357abd0e55"} Jan 05 20:25:55 crc kubenswrapper[4754]: I0105 20:25:55.677887 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-rw7pc"] Jan 05 20:25:55 crc kubenswrapper[4754]: I0105 20:25:55.697278 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-w6b8n"] Jan 05 20:25:55 crc kubenswrapper[4754]: I0105 20:25:55.704399 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-w6b8n" Jan 05 20:25:55 crc kubenswrapper[4754]: I0105 20:25:55.722775 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-w6b8n"] Jan 05 20:25:55 crc kubenswrapper[4754]: I0105 20:25:55.878900 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3723fc2-f882-421a-acce-3cc120ccdf2b-config\") pod \"dnsmasq-dns-666b6646f7-w6b8n\" (UID: \"d3723fc2-f882-421a-acce-3cc120ccdf2b\") " pod="openstack/dnsmasq-dns-666b6646f7-w6b8n" Jan 05 20:25:55 crc kubenswrapper[4754]: I0105 20:25:55.878970 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khpwm\" (UniqueName: \"kubernetes.io/projected/d3723fc2-f882-421a-acce-3cc120ccdf2b-kube-api-access-khpwm\") pod \"dnsmasq-dns-666b6646f7-w6b8n\" (UID: \"d3723fc2-f882-421a-acce-3cc120ccdf2b\") " pod="openstack/dnsmasq-dns-666b6646f7-w6b8n" Jan 05 20:25:55 crc kubenswrapper[4754]: I0105 20:25:55.879038 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3723fc2-f882-421a-acce-3cc120ccdf2b-dns-svc\") pod \"dnsmasq-dns-666b6646f7-w6b8n\" (UID: \"d3723fc2-f882-421a-acce-3cc120ccdf2b\") " 
pod="openstack/dnsmasq-dns-666b6646f7-w6b8n" Jan 05 20:25:55 crc kubenswrapper[4754]: I0105 20:25:55.980151 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3723fc2-f882-421a-acce-3cc120ccdf2b-config\") pod \"dnsmasq-dns-666b6646f7-w6b8n\" (UID: \"d3723fc2-f882-421a-acce-3cc120ccdf2b\") " pod="openstack/dnsmasq-dns-666b6646f7-w6b8n" Jan 05 20:25:55 crc kubenswrapper[4754]: I0105 20:25:55.980209 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khpwm\" (UniqueName: \"kubernetes.io/projected/d3723fc2-f882-421a-acce-3cc120ccdf2b-kube-api-access-khpwm\") pod \"dnsmasq-dns-666b6646f7-w6b8n\" (UID: \"d3723fc2-f882-421a-acce-3cc120ccdf2b\") " pod="openstack/dnsmasq-dns-666b6646f7-w6b8n" Jan 05 20:25:55 crc kubenswrapper[4754]: I0105 20:25:55.980265 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3723fc2-f882-421a-acce-3cc120ccdf2b-dns-svc\") pod \"dnsmasq-dns-666b6646f7-w6b8n\" (UID: \"d3723fc2-f882-421a-acce-3cc120ccdf2b\") " pod="openstack/dnsmasq-dns-666b6646f7-w6b8n" Jan 05 20:25:55 crc kubenswrapper[4754]: I0105 20:25:55.981104 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3723fc2-f882-421a-acce-3cc120ccdf2b-dns-svc\") pod \"dnsmasq-dns-666b6646f7-w6b8n\" (UID: \"d3723fc2-f882-421a-acce-3cc120ccdf2b\") " pod="openstack/dnsmasq-dns-666b6646f7-w6b8n" Jan 05 20:25:55 crc kubenswrapper[4754]: I0105 20:25:55.981685 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3723fc2-f882-421a-acce-3cc120ccdf2b-config\") pod \"dnsmasq-dns-666b6646f7-w6b8n\" (UID: \"d3723fc2-f882-421a-acce-3cc120ccdf2b\") " pod="openstack/dnsmasq-dns-666b6646f7-w6b8n" Jan 05 20:25:55 crc kubenswrapper[4754]: I0105 20:25:55.996342 4754 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-bz48l"] Jan 05 20:25:56 crc kubenswrapper[4754]: I0105 20:25:56.012282 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khpwm\" (UniqueName: \"kubernetes.io/projected/d3723fc2-f882-421a-acce-3cc120ccdf2b-kube-api-access-khpwm\") pod \"dnsmasq-dns-666b6646f7-w6b8n\" (UID: \"d3723fc2-f882-421a-acce-3cc120ccdf2b\") " pod="openstack/dnsmasq-dns-666b6646f7-w6b8n" Jan 05 20:25:56 crc kubenswrapper[4754]: I0105 20:25:56.043558 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-w6b8n" Jan 05 20:25:56 crc kubenswrapper[4754]: I0105 20:25:56.055433 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-76z9q"] Jan 05 20:25:56 crc kubenswrapper[4754]: I0105 20:25:56.072634 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-76z9q" Jan 05 20:25:56 crc kubenswrapper[4754]: I0105 20:25:56.081374 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-76z9q"] Jan 05 20:25:56 crc kubenswrapper[4754]: I0105 20:25:56.186924 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6xg2\" (UniqueName: \"kubernetes.io/projected/e5d523be-6b64-4c2b-843e-fdcd6310bff4-kube-api-access-m6xg2\") pod \"dnsmasq-dns-57d769cc4f-76z9q\" (UID: \"e5d523be-6b64-4c2b-843e-fdcd6310bff4\") " pod="openstack/dnsmasq-dns-57d769cc4f-76z9q" Jan 05 20:25:56 crc kubenswrapper[4754]: I0105 20:25:56.187040 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5d523be-6b64-4c2b-843e-fdcd6310bff4-config\") pod \"dnsmasq-dns-57d769cc4f-76z9q\" (UID: \"e5d523be-6b64-4c2b-843e-fdcd6310bff4\") " pod="openstack/dnsmasq-dns-57d769cc4f-76z9q" 
Jan 05 20:25:56 crc kubenswrapper[4754]: I0105 20:25:56.187065 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5d523be-6b64-4c2b-843e-fdcd6310bff4-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-76z9q\" (UID: \"e5d523be-6b64-4c2b-843e-fdcd6310bff4\") " pod="openstack/dnsmasq-dns-57d769cc4f-76z9q" Jan 05 20:25:56 crc kubenswrapper[4754]: I0105 20:25:56.288496 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5d523be-6b64-4c2b-843e-fdcd6310bff4-config\") pod \"dnsmasq-dns-57d769cc4f-76z9q\" (UID: \"e5d523be-6b64-4c2b-843e-fdcd6310bff4\") " pod="openstack/dnsmasq-dns-57d769cc4f-76z9q" Jan 05 20:25:56 crc kubenswrapper[4754]: I0105 20:25:56.288930 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5d523be-6b64-4c2b-843e-fdcd6310bff4-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-76z9q\" (UID: \"e5d523be-6b64-4c2b-843e-fdcd6310bff4\") " pod="openstack/dnsmasq-dns-57d769cc4f-76z9q" Jan 05 20:25:56 crc kubenswrapper[4754]: I0105 20:25:56.289017 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6xg2\" (UniqueName: \"kubernetes.io/projected/e5d523be-6b64-4c2b-843e-fdcd6310bff4-kube-api-access-m6xg2\") pod \"dnsmasq-dns-57d769cc4f-76z9q\" (UID: \"e5d523be-6b64-4c2b-843e-fdcd6310bff4\") " pod="openstack/dnsmasq-dns-57d769cc4f-76z9q" Jan 05 20:25:56 crc kubenswrapper[4754]: I0105 20:25:56.289595 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5d523be-6b64-4c2b-843e-fdcd6310bff4-config\") pod \"dnsmasq-dns-57d769cc4f-76z9q\" (UID: \"e5d523be-6b64-4c2b-843e-fdcd6310bff4\") " pod="openstack/dnsmasq-dns-57d769cc4f-76z9q" Jan 05 20:25:56 crc kubenswrapper[4754]: I0105 20:25:56.289939 4754 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5d523be-6b64-4c2b-843e-fdcd6310bff4-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-76z9q\" (UID: \"e5d523be-6b64-4c2b-843e-fdcd6310bff4\") " pod="openstack/dnsmasq-dns-57d769cc4f-76z9q" Jan 05 20:25:56 crc kubenswrapper[4754]: I0105 20:25:56.320868 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6xg2\" (UniqueName: \"kubernetes.io/projected/e5d523be-6b64-4c2b-843e-fdcd6310bff4-kube-api-access-m6xg2\") pod \"dnsmasq-dns-57d769cc4f-76z9q\" (UID: \"e5d523be-6b64-4c2b-843e-fdcd6310bff4\") " pod="openstack/dnsmasq-dns-57d769cc4f-76z9q" Jan 05 20:25:56 crc kubenswrapper[4754]: I0105 20:25:56.414271 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-76z9q" Jan 05 20:25:56 crc kubenswrapper[4754]: I0105 20:25:56.630229 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-w6b8n"] Jan 05 20:25:56 crc kubenswrapper[4754]: I0105 20:25:56.856390 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 05 20:25:56 crc kubenswrapper[4754]: I0105 20:25:56.863718 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 05 20:25:56 crc kubenswrapper[4754]: I0105 20:25:56.869766 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 05 20:25:56 crc kubenswrapper[4754]: I0105 20:25:56.870206 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 05 20:25:56 crc kubenswrapper[4754]: I0105 20:25:56.870535 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 05 20:25:56 crc kubenswrapper[4754]: I0105 20:25:56.870757 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 05 20:25:56 crc kubenswrapper[4754]: I0105 20:25:56.870932 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 05 20:25:56 crc kubenswrapper[4754]: I0105 20:25:56.871061 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-k8gd4" Jan 05 20:25:56 crc kubenswrapper[4754]: I0105 20:25:56.871191 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 05 20:25:56 crc kubenswrapper[4754]: I0105 20:25:56.871282 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 05 20:25:56 crc kubenswrapper[4754]: I0105 20:25:56.885390 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Jan 05 20:25:56 crc kubenswrapper[4754]: I0105 20:25:56.887414 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 05 20:25:56 crc kubenswrapper[4754]: I0105 20:25:56.898402 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Jan 05 20:25:56 crc kubenswrapper[4754]: I0105 20:25:56.907470 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 05 20:25:56 crc kubenswrapper[4754]: I0105 20:25:56.908156 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 05 20:25:56 crc kubenswrapper[4754]: I0105 20:25:56.921805 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 05 20:25:56 crc kubenswrapper[4754]: I0105 20:25:56.943358 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-76z9q"] Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.007273 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-pod-info\") pod \"rabbitmq-server-1\" (UID: \"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\") " pod="openstack/rabbitmq-server-1" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.007338 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\") " pod="openstack/rabbitmq-server-1" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.007360 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\") " pod="openstack/rabbitmq-server-1" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.007389 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/909351bf-3608-40e6-9f93-bffa1ed74945-rabbitmq-erlang-cookie\") pod 
\"rabbitmq-server-0\" (UID: \"909351bf-3608-40e6-9f93-bffa1ed74945\") " pod="openstack/rabbitmq-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.007415 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/909351bf-3608-40e6-9f93-bffa1ed74945-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"909351bf-3608-40e6-9f93-bffa1ed74945\") " pod="openstack/rabbitmq-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.007458 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/79eddb76-2d9c-40cc-97e7-6c186950168c-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"79eddb76-2d9c-40cc-97e7-6c186950168c\") " pod="openstack/rabbitmq-server-2" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.007491 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/909351bf-3608-40e6-9f93-bffa1ed74945-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"909351bf-3608-40e6-9f93-bffa1ed74945\") " pod="openstack/rabbitmq-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.007520 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a317a64c-a170-4999-a409-142244ef1018\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a317a64c-a170-4999-a409-142244ef1018\") pod \"rabbitmq-server-2\" (UID: \"79eddb76-2d9c-40cc-97e7-6c186950168c\") " pod="openstack/rabbitmq-server-2" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.007537 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/79eddb76-2d9c-40cc-97e7-6c186950168c-rabbitmq-plugins\") pod 
\"rabbitmq-server-2\" (UID: \"79eddb76-2d9c-40cc-97e7-6c186950168c\") " pod="openstack/rabbitmq-server-2" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.007555 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/909351bf-3608-40e6-9f93-bffa1ed74945-pod-info\") pod \"rabbitmq-server-0\" (UID: \"909351bf-3608-40e6-9f93-bffa1ed74945\") " pod="openstack/rabbitmq-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.007575 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/79eddb76-2d9c-40cc-97e7-6c186950168c-server-conf\") pod \"rabbitmq-server-2\" (UID: \"79eddb76-2d9c-40cc-97e7-6c186950168c\") " pod="openstack/rabbitmq-server-2" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.007602 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\") " pod="openstack/rabbitmq-server-1" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.007623 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-server-conf\") pod \"rabbitmq-server-1\" (UID: \"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\") " pod="openstack/rabbitmq-server-1" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.007645 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\") " 
pod="openstack/rabbitmq-server-1" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.007664 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/909351bf-3608-40e6-9f93-bffa1ed74945-config-data\") pod \"rabbitmq-server-0\" (UID: \"909351bf-3608-40e6-9f93-bffa1ed74945\") " pod="openstack/rabbitmq-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.007680 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/79eddb76-2d9c-40cc-97e7-6c186950168c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"79eddb76-2d9c-40cc-97e7-6c186950168c\") " pod="openstack/rabbitmq-server-2" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.007699 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/909351bf-3608-40e6-9f93-bffa1ed74945-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"909351bf-3608-40e6-9f93-bffa1ed74945\") " pod="openstack/rabbitmq-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.007729 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/79eddb76-2d9c-40cc-97e7-6c186950168c-config-data\") pod \"rabbitmq-server-2\" (UID: \"79eddb76-2d9c-40cc-97e7-6c186950168c\") " pod="openstack/rabbitmq-server-2" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.007756 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8af03a1a-c755-48d5-8d0d-6a6cae0c9606\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8af03a1a-c755-48d5-8d0d-6a6cae0c9606\") pod \"rabbitmq-server-0\" (UID: \"909351bf-3608-40e6-9f93-bffa1ed74945\") " 
pod="openstack/rabbitmq-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.007773 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8trsj\" (UniqueName: \"kubernetes.io/projected/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-kube-api-access-8trsj\") pod \"rabbitmq-server-1\" (UID: \"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\") " pod="openstack/rabbitmq-server-1" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.007797 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-88986084-b876-4fbb-b01c-4b8d50151b06\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-88986084-b876-4fbb-b01c-4b8d50151b06\") pod \"rabbitmq-server-1\" (UID: \"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\") " pod="openstack/rabbitmq-server-1" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.007817 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/79eddb76-2d9c-40cc-97e7-6c186950168c-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"79eddb76-2d9c-40cc-97e7-6c186950168c\") " pod="openstack/rabbitmq-server-2" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.007833 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\") " pod="openstack/rabbitmq-server-1" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.007852 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-config-data\") pod \"rabbitmq-server-1\" (UID: \"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\") " 
pod="openstack/rabbitmq-server-1" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.007867 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/79eddb76-2d9c-40cc-97e7-6c186950168c-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"79eddb76-2d9c-40cc-97e7-6c186950168c\") " pod="openstack/rabbitmq-server-2" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.007889 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/909351bf-3608-40e6-9f93-bffa1ed74945-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"909351bf-3608-40e6-9f93-bffa1ed74945\") " pod="openstack/rabbitmq-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.007908 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\") " pod="openstack/rabbitmq-server-1" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.007926 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx79f\" (UniqueName: \"kubernetes.io/projected/79eddb76-2d9c-40cc-97e7-6c186950168c-kube-api-access-bx79f\") pod \"rabbitmq-server-2\" (UID: \"79eddb76-2d9c-40cc-97e7-6c186950168c\") " pod="openstack/rabbitmq-server-2" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.007944 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/79eddb76-2d9c-40cc-97e7-6c186950168c-pod-info\") pod \"rabbitmq-server-2\" (UID: \"79eddb76-2d9c-40cc-97e7-6c186950168c\") " pod="openstack/rabbitmq-server-2" Jan 05 
20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.007969 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/79eddb76-2d9c-40cc-97e7-6c186950168c-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"79eddb76-2d9c-40cc-97e7-6c186950168c\") " pod="openstack/rabbitmq-server-2" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.007985 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt7t6\" (UniqueName: \"kubernetes.io/projected/909351bf-3608-40e6-9f93-bffa1ed74945-kube-api-access-gt7t6\") pod \"rabbitmq-server-0\" (UID: \"909351bf-3608-40e6-9f93-bffa1ed74945\") " pod="openstack/rabbitmq-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.008005 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/909351bf-3608-40e6-9f93-bffa1ed74945-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"909351bf-3608-40e6-9f93-bffa1ed74945\") " pod="openstack/rabbitmq-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.008021 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/909351bf-3608-40e6-9f93-bffa1ed74945-server-conf\") pod \"rabbitmq-server-0\" (UID: \"909351bf-3608-40e6-9f93-bffa1ed74945\") " pod="openstack/rabbitmq-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.111286 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-pod-info\") pod \"rabbitmq-server-1\" (UID: \"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\") " pod="openstack/rabbitmq-server-1" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.112449 4754 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\") " pod="openstack/rabbitmq-server-1" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.112484 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\") " pod="openstack/rabbitmq-server-1" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.112512 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/909351bf-3608-40e6-9f93-bffa1ed74945-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"909351bf-3608-40e6-9f93-bffa1ed74945\") " pod="openstack/rabbitmq-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.112553 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/909351bf-3608-40e6-9f93-bffa1ed74945-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"909351bf-3608-40e6-9f93-bffa1ed74945\") " pod="openstack/rabbitmq-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.112568 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/79eddb76-2d9c-40cc-97e7-6c186950168c-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"79eddb76-2d9c-40cc-97e7-6c186950168c\") " pod="openstack/rabbitmq-server-2" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.112618 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/909351bf-3608-40e6-9f93-bffa1ed74945-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"909351bf-3608-40e6-9f93-bffa1ed74945\") " pod="openstack/rabbitmq-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.112672 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a317a64c-a170-4999-a409-142244ef1018\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a317a64c-a170-4999-a409-142244ef1018\") pod \"rabbitmq-server-2\" (UID: \"79eddb76-2d9c-40cc-97e7-6c186950168c\") " pod="openstack/rabbitmq-server-2" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.112693 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/79eddb76-2d9c-40cc-97e7-6c186950168c-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"79eddb76-2d9c-40cc-97e7-6c186950168c\") " pod="openstack/rabbitmq-server-2" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.112713 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/909351bf-3608-40e6-9f93-bffa1ed74945-pod-info\") pod \"rabbitmq-server-0\" (UID: \"909351bf-3608-40e6-9f93-bffa1ed74945\") " pod="openstack/rabbitmq-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.112756 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/79eddb76-2d9c-40cc-97e7-6c186950168c-server-conf\") pod \"rabbitmq-server-2\" (UID: \"79eddb76-2d9c-40cc-97e7-6c186950168c\") " pod="openstack/rabbitmq-server-2" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.112813 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-plugins-conf\") pod \"rabbitmq-server-1\" (UID: 
\"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\") " pod="openstack/rabbitmq-server-1" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.112840 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-server-conf\") pod \"rabbitmq-server-1\" (UID: \"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\") " pod="openstack/rabbitmq-server-1" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.112874 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\") " pod="openstack/rabbitmq-server-1" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.112904 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/909351bf-3608-40e6-9f93-bffa1ed74945-config-data\") pod \"rabbitmq-server-0\" (UID: \"909351bf-3608-40e6-9f93-bffa1ed74945\") " pod="openstack/rabbitmq-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.112920 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/79eddb76-2d9c-40cc-97e7-6c186950168c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"79eddb76-2d9c-40cc-97e7-6c186950168c\") " pod="openstack/rabbitmq-server-2" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.112942 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/909351bf-3608-40e6-9f93-bffa1ed74945-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"909351bf-3608-40e6-9f93-bffa1ed74945\") " pod="openstack/rabbitmq-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.113014 4754 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/79eddb76-2d9c-40cc-97e7-6c186950168c-config-data\") pod \"rabbitmq-server-2\" (UID: \"79eddb76-2d9c-40cc-97e7-6c186950168c\") " pod="openstack/rabbitmq-server-2" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.113071 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8af03a1a-c755-48d5-8d0d-6a6cae0c9606\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8af03a1a-c755-48d5-8d0d-6a6cae0c9606\") pod \"rabbitmq-server-0\" (UID: \"909351bf-3608-40e6-9f93-bffa1ed74945\") " pod="openstack/rabbitmq-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.113096 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8trsj\" (UniqueName: \"kubernetes.io/projected/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-kube-api-access-8trsj\") pod \"rabbitmq-server-1\" (UID: \"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\") " pod="openstack/rabbitmq-server-1" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.113135 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-88986084-b876-4fbb-b01c-4b8d50151b06\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-88986084-b876-4fbb-b01c-4b8d50151b06\") pod \"rabbitmq-server-1\" (UID: \"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\") " pod="openstack/rabbitmq-server-1" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.113156 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\") " pod="openstack/rabbitmq-server-1" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.113173 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/79eddb76-2d9c-40cc-97e7-6c186950168c-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"79eddb76-2d9c-40cc-97e7-6c186950168c\") " pod="openstack/rabbitmq-server-2" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.113200 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-config-data\") pod \"rabbitmq-server-1\" (UID: \"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\") " pod="openstack/rabbitmq-server-1" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.113218 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/79eddb76-2d9c-40cc-97e7-6c186950168c-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"79eddb76-2d9c-40cc-97e7-6c186950168c\") " pod="openstack/rabbitmq-server-2" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.113247 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/909351bf-3608-40e6-9f93-bffa1ed74945-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"909351bf-3608-40e6-9f93-bffa1ed74945\") " pod="openstack/rabbitmq-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.113280 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\") " pod="openstack/rabbitmq-server-1" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.113325 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bx79f\" (UniqueName: \"kubernetes.io/projected/79eddb76-2d9c-40cc-97e7-6c186950168c-kube-api-access-bx79f\") pod 
\"rabbitmq-server-2\" (UID: \"79eddb76-2d9c-40cc-97e7-6c186950168c\") " pod="openstack/rabbitmq-server-2" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.113342 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/79eddb76-2d9c-40cc-97e7-6c186950168c-pod-info\") pod \"rabbitmq-server-2\" (UID: \"79eddb76-2d9c-40cc-97e7-6c186950168c\") " pod="openstack/rabbitmq-server-2" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.113413 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/79eddb76-2d9c-40cc-97e7-6c186950168c-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"79eddb76-2d9c-40cc-97e7-6c186950168c\") " pod="openstack/rabbitmq-server-2" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.113432 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gt7t6\" (UniqueName: \"kubernetes.io/projected/909351bf-3608-40e6-9f93-bffa1ed74945-kube-api-access-gt7t6\") pod \"rabbitmq-server-0\" (UID: \"909351bf-3608-40e6-9f93-bffa1ed74945\") " pod="openstack/rabbitmq-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.113470 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/909351bf-3608-40e6-9f93-bffa1ed74945-server-conf\") pod \"rabbitmq-server-0\" (UID: \"909351bf-3608-40e6-9f93-bffa1ed74945\") " pod="openstack/rabbitmq-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.113488 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/909351bf-3608-40e6-9f93-bffa1ed74945-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"909351bf-3608-40e6-9f93-bffa1ed74945\") " pod="openstack/rabbitmq-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.116170 4754 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/909351bf-3608-40e6-9f93-bffa1ed74945-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"909351bf-3608-40e6-9f93-bffa1ed74945\") " pod="openstack/rabbitmq-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.116586 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/79eddb76-2d9c-40cc-97e7-6c186950168c-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"79eddb76-2d9c-40cc-97e7-6c186950168c\") " pod="openstack/rabbitmq-server-2" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.117206 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-pod-info\") pod \"rabbitmq-server-1\" (UID: \"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\") " pod="openstack/rabbitmq-server-1" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.117933 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-server-conf\") pod \"rabbitmq-server-1\" (UID: \"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\") " pod="openstack/rabbitmq-server-1" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.118017 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\") " pod="openstack/rabbitmq-server-1" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.118267 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/909351bf-3608-40e6-9f93-bffa1ed74945-rabbitmq-plugins\") pod 
\"rabbitmq-server-0\" (UID: \"909351bf-3608-40e6-9f93-bffa1ed74945\") " pod="openstack/rabbitmq-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.118465 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/909351bf-3608-40e6-9f93-bffa1ed74945-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"909351bf-3608-40e6-9f93-bffa1ed74945\") " pod="openstack/rabbitmq-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.118540 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\") " pod="openstack/rabbitmq-server-1" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.118849 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/909351bf-3608-40e6-9f93-bffa1ed74945-config-data\") pod \"rabbitmq-server-0\" (UID: \"909351bf-3608-40e6-9f93-bffa1ed74945\") " pod="openstack/rabbitmq-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.119132 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/79eddb76-2d9c-40cc-97e7-6c186950168c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"79eddb76-2d9c-40cc-97e7-6c186950168c\") " pod="openstack/rabbitmq-server-2" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.119881 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/79eddb76-2d9c-40cc-97e7-6c186950168c-config-data\") pod \"rabbitmq-server-2\" (UID: \"79eddb76-2d9c-40cc-97e7-6c186950168c\") " pod="openstack/rabbitmq-server-2" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.120124 4754 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\") " pod="openstack/rabbitmq-server-1" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.120965 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-config-data\") pod \"rabbitmq-server-1\" (UID: \"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\") " pod="openstack/rabbitmq-server-1" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.120990 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/79eddb76-2d9c-40cc-97e7-6c186950168c-server-conf\") pod \"rabbitmq-server-2\" (UID: \"79eddb76-2d9c-40cc-97e7-6c186950168c\") " pod="openstack/rabbitmq-server-2" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.124085 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/79eddb76-2d9c-40cc-97e7-6c186950168c-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"79eddb76-2d9c-40cc-97e7-6c186950168c\") " pod="openstack/rabbitmq-server-2" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.125262 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/909351bf-3608-40e6-9f93-bffa1ed74945-server-conf\") pod \"rabbitmq-server-0\" (UID: \"909351bf-3608-40e6-9f93-bffa1ed74945\") " pod="openstack/rabbitmq-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.125438 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/909351bf-3608-40e6-9f93-bffa1ed74945-plugins-conf\") pod \"rabbitmq-server-0\" (UID: 
\"909351bf-3608-40e6-9f93-bffa1ed74945\") " pod="openstack/rabbitmq-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.135048 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\") " pod="openstack/rabbitmq-server-1" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.137855 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/909351bf-3608-40e6-9f93-bffa1ed74945-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"909351bf-3608-40e6-9f93-bffa1ed74945\") " pod="openstack/rabbitmq-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.150339 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/909351bf-3608-40e6-9f93-bffa1ed74945-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"909351bf-3608-40e6-9f93-bffa1ed74945\") " pod="openstack/rabbitmq-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.150914 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\") " pod="openstack/rabbitmq-server-1" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.151523 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\") " pod="openstack/rabbitmq-server-1" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.153875 4754 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/79eddb76-2d9c-40cc-97e7-6c186950168c-pod-info\") pod \"rabbitmq-server-2\" (UID: \"79eddb76-2d9c-40cc-97e7-6c186950168c\") " pod="openstack/rabbitmq-server-2" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.156900 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/79eddb76-2d9c-40cc-97e7-6c186950168c-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"79eddb76-2d9c-40cc-97e7-6c186950168c\") " pod="openstack/rabbitmq-server-2" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.157960 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/909351bf-3608-40e6-9f93-bffa1ed74945-pod-info\") pod \"rabbitmq-server-0\" (UID: \"909351bf-3608-40e6-9f93-bffa1ed74945\") " pod="openstack/rabbitmq-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.166945 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gt7t6\" (UniqueName: \"kubernetes.io/projected/909351bf-3608-40e6-9f93-bffa1ed74945-kube-api-access-gt7t6\") pod \"rabbitmq-server-0\" (UID: \"909351bf-3608-40e6-9f93-bffa1ed74945\") " pod="openstack/rabbitmq-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.171484 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/79eddb76-2d9c-40cc-97e7-6c186950168c-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"79eddb76-2d9c-40cc-97e7-6c186950168c\") " pod="openstack/rabbitmq-server-2" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.172232 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8trsj\" (UniqueName: \"kubernetes.io/projected/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-kube-api-access-8trsj\") pod \"rabbitmq-server-1\" (UID: 
\"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\") " pod="openstack/rabbitmq-server-1" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.173112 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/79eddb76-2d9c-40cc-97e7-6c186950168c-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"79eddb76-2d9c-40cc-97e7-6c186950168c\") " pod="openstack/rabbitmq-server-2" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.173755 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bx79f\" (UniqueName: \"kubernetes.io/projected/79eddb76-2d9c-40cc-97e7-6c186950168c-kube-api-access-bx79f\") pod \"rabbitmq-server-2\" (UID: \"79eddb76-2d9c-40cc-97e7-6c186950168c\") " pod="openstack/rabbitmq-server-2" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.248147 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.259719 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.261581 4754 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.261640 4754 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a317a64c-a170-4999-a409-142244ef1018\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a317a64c-a170-4999-a409-142244ef1018\") pod \"rabbitmq-server-2\" (UID: \"79eddb76-2d9c-40cc-97e7-6c186950168c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/bddf136c5148c00231b8778925e81fb3b19f5c884c1d09dc7062258446bf5b2f/globalmount\"" pod="openstack/rabbitmq-server-2" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.261698 4754 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.261739 4754 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8af03a1a-c755-48d5-8d0d-6a6cae0c9606\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8af03a1a-c755-48d5-8d0d-6a6cae0c9606\") pod \"rabbitmq-server-0\" (UID: \"909351bf-3608-40e6-9f93-bffa1ed74945\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/cc29d0fd818fb70d50c682ea0203e481751a2a06fff7877ad146f175aef014f4/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.262398 4754 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.262437 4754 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-88986084-b876-4fbb-b01c-4b8d50151b06\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-88986084-b876-4fbb-b01c-4b8d50151b06\") pod \"rabbitmq-server-1\" (UID: \"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d6460ae131bef567a721ac9ced0be549876bcabc22a363ca8bce7a527cb91439/globalmount\"" pod="openstack/rabbitmq-server-1" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.265574 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-w6b8n" event={"ID":"d3723fc2-f882-421a-acce-3cc120ccdf2b","Type":"ContainerStarted","Data":"71f6a9703969fe8b4148a1c9e02c5d8b5f4cd5c4e024930b2798454c62092b8b"} Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.270338 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.270562 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.270678 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.270776 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.270876 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-2d8nv" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.271047 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 
20:25:57.274421 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-76z9q" event={"ID":"e5d523be-6b64-4c2b-843e-fdcd6310bff4","Type":"ContainerStarted","Data":"2051461316af028744e30674c69649b070212ba72a268cc2958b072d84fa953f"} Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.281252 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.287750 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.329775 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-941303c6-1188-432c-b6ab-fa01413459d1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-941303c6-1188-432c-b6ab-fa01413459d1\") pod \"rabbitmq-cell1-server-0\" (UID: \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.330216 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.330247 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.330307 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.330369 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.330403 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.330474 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.330501 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.330548 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.330570 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.330641 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2njdp\" (UniqueName: \"kubernetes.io/projected/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-kube-api-access-2njdp\") pod \"rabbitmq-cell1-server-0\" (UID: \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.418326 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a317a64c-a170-4999-a409-142244ef1018\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a317a64c-a170-4999-a409-142244ef1018\") pod \"rabbitmq-server-2\" (UID: \"79eddb76-2d9c-40cc-97e7-6c186950168c\") " pod="openstack/rabbitmq-server-2" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.432506 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.432566 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.432609 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.432633 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.432701 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2njdp\" (UniqueName: \"kubernetes.io/projected/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-kube-api-access-2njdp\") pod \"rabbitmq-cell1-server-0\" (UID: \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.432726 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-941303c6-1188-432c-b6ab-fa01413459d1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-941303c6-1188-432c-b6ab-fa01413459d1\") pod \"rabbitmq-cell1-server-0\" (UID: \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.432751 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.432770 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.432796 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.432840 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.432864 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.433448 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-88986084-b876-4fbb-b01c-4b8d50151b06\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-88986084-b876-4fbb-b01c-4b8d50151b06\") pod \"rabbitmq-server-1\" (UID: \"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\") " pod="openstack/rabbitmq-server-1" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.435967 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.437114 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8af03a1a-c755-48d5-8d0d-6a6cae0c9606\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8af03a1a-c755-48d5-8d0d-6a6cae0c9606\") pod \"rabbitmq-server-0\" (UID: \"909351bf-3608-40e6-9f93-bffa1ed74945\") " pod="openstack/rabbitmq-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.437178 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.437944 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.438484 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-rabbitmq-erlang-cookie\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.438802 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.439187 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.441845 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.442404 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.442788 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:25:57 crc 
kubenswrapper[4754]: I0105 20:25:57.450487 4754 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.450518 4754 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-941303c6-1188-432c-b6ab-fa01413459d1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-941303c6-1188-432c-b6ab-fa01413459d1\") pod \"rabbitmq-cell1-server-0\" (UID: \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/beb702a1bac8a1e4b135b92121e9b2f89f2a81cf17aea314f0809e88e0e10ffd/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.453025 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2njdp\" (UniqueName: \"kubernetes.io/projected/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-kube-api-access-2njdp\") pod \"rabbitmq-cell1-server-0\" (UID: \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.496118 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-941303c6-1188-432c-b6ab-fa01413459d1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-941303c6-1188-432c-b6ab-fa01413459d1\") pod \"rabbitmq-cell1-server-0\" (UID: \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.514548 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.527390 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.538151 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 05 20:25:57 crc kubenswrapper[4754]: I0105 20:25:57.606108 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:25:58 crc kubenswrapper[4754]: I0105 20:25:58.179212 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 05 20:25:58 crc kubenswrapper[4754]: W0105 20:25:58.202524 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefbb8662_26f9_44ac_aba4_ecc1fc6a4d58.slice/crio-af88a6a54804b8611827b356f77e7d6807fbbb85701ec441949be10b8a835730 WatchSource:0}: Error finding container af88a6a54804b8611827b356f77e7d6807fbbb85701ec441949be10b8a835730: Status 404 returned error can't find the container with id af88a6a54804b8611827b356f77e7d6807fbbb85701ec441949be10b8a835730 Jan 05 20:25:58 crc kubenswrapper[4754]: I0105 20:25:58.320078 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58","Type":"ContainerStarted","Data":"af88a6a54804b8611827b356f77e7d6807fbbb85701ec441949be10b8a835730"} Jan 05 20:25:58 crc kubenswrapper[4754]: I0105 20:25:58.433156 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 05 20:25:58 crc kubenswrapper[4754]: I0105 20:25:58.436615 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 05 20:25:58 crc kubenswrapper[4754]: I0105 20:25:58.441107 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 05 20:25:58 crc kubenswrapper[4754]: I0105 20:25:58.441764 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-88dv9" Jan 05 20:25:58 crc kubenswrapper[4754]: I0105 20:25:58.441906 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 05 20:25:58 crc kubenswrapper[4754]: I0105 20:25:58.442029 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 05 20:25:58 crc kubenswrapper[4754]: I0105 20:25:58.444780 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 05 20:25:58 crc kubenswrapper[4754]: I0105 20:25:58.446675 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 05 20:25:58 crc kubenswrapper[4754]: I0105 20:25:58.562410 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9af784f4-79c9-4422-bc62-a2c49c9bb7cc-config-data-default\") pod \"openstack-galera-0\" (UID: \"9af784f4-79c9-4422-bc62-a2c49c9bb7cc\") " pod="openstack/openstack-galera-0" Jan 05 20:25:58 crc kubenswrapper[4754]: I0105 20:25:58.562468 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9af784f4-79c9-4422-bc62-a2c49c9bb7cc-kolla-config\") pod \"openstack-galera-0\" (UID: \"9af784f4-79c9-4422-bc62-a2c49c9bb7cc\") " pod="openstack/openstack-galera-0" Jan 05 20:25:58 crc kubenswrapper[4754]: I0105 20:25:58.562510 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9af784f4-79c9-4422-bc62-a2c49c9bb7cc-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9af784f4-79c9-4422-bc62-a2c49c9bb7cc\") " pod="openstack/openstack-galera-0" Jan 05 20:25:58 crc kubenswrapper[4754]: I0105 20:25:58.562657 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9af784f4-79c9-4422-bc62-a2c49c9bb7cc-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9af784f4-79c9-4422-bc62-a2c49c9bb7cc\") " pod="openstack/openstack-galera-0" Jan 05 20:25:58 crc kubenswrapper[4754]: I0105 20:25:58.562894 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-66ab456a-c2cc-4ae1-b8af-1b9de9931209\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-66ab456a-c2cc-4ae1-b8af-1b9de9931209\") pod \"openstack-galera-0\" (UID: \"9af784f4-79c9-4422-bc62-a2c49c9bb7cc\") " pod="openstack/openstack-galera-0" Jan 05 20:25:58 crc kubenswrapper[4754]: I0105 20:25:58.563233 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9af784f4-79c9-4422-bc62-a2c49c9bb7cc-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9af784f4-79c9-4422-bc62-a2c49c9bb7cc\") " pod="openstack/openstack-galera-0" Jan 05 20:25:58 crc kubenswrapper[4754]: I0105 20:25:58.563444 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmd24\" (UniqueName: \"kubernetes.io/projected/9af784f4-79c9-4422-bc62-a2c49c9bb7cc-kube-api-access-tmd24\") pod \"openstack-galera-0\" (UID: \"9af784f4-79c9-4422-bc62-a2c49c9bb7cc\") " pod="openstack/openstack-galera-0" Jan 05 20:25:58 crc kubenswrapper[4754]: I0105 20:25:58.563781 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9af784f4-79c9-4422-bc62-a2c49c9bb7cc-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9af784f4-79c9-4422-bc62-a2c49c9bb7cc\") " pod="openstack/openstack-galera-0" Jan 05 20:25:58 crc kubenswrapper[4754]: I0105 20:25:58.622591 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 05 20:25:58 crc kubenswrapper[4754]: I0105 20:25:58.665249 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9af784f4-79c9-4422-bc62-a2c49c9bb7cc-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9af784f4-79c9-4422-bc62-a2c49c9bb7cc\") " pod="openstack/openstack-galera-0" Jan 05 20:25:58 crc kubenswrapper[4754]: I0105 20:25:58.665333 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9af784f4-79c9-4422-bc62-a2c49c9bb7cc-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9af784f4-79c9-4422-bc62-a2c49c9bb7cc\") " pod="openstack/openstack-galera-0" Jan 05 20:25:58 crc kubenswrapper[4754]: I0105 20:25:58.665389 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-66ab456a-c2cc-4ae1-b8af-1b9de9931209\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-66ab456a-c2cc-4ae1-b8af-1b9de9931209\") pod \"openstack-galera-0\" (UID: \"9af784f4-79c9-4422-bc62-a2c49c9bb7cc\") " pod="openstack/openstack-galera-0" Jan 05 20:25:58 crc kubenswrapper[4754]: I0105 20:25:58.665471 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9af784f4-79c9-4422-bc62-a2c49c9bb7cc-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9af784f4-79c9-4422-bc62-a2c49c9bb7cc\") " pod="openstack/openstack-galera-0" Jan 05 
20:25:58 crc kubenswrapper[4754]: I0105 20:25:58.665488 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9af784f4-79c9-4422-bc62-a2c49c9bb7cc-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9af784f4-79c9-4422-bc62-a2c49c9bb7cc\") " pod="openstack/openstack-galera-0" Jan 05 20:25:58 crc kubenswrapper[4754]: I0105 20:25:58.665503 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmd24\" (UniqueName: \"kubernetes.io/projected/9af784f4-79c9-4422-bc62-a2c49c9bb7cc-kube-api-access-tmd24\") pod \"openstack-galera-0\" (UID: \"9af784f4-79c9-4422-bc62-a2c49c9bb7cc\") " pod="openstack/openstack-galera-0" Jan 05 20:25:58 crc kubenswrapper[4754]: I0105 20:25:58.665541 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9af784f4-79c9-4422-bc62-a2c49c9bb7cc-config-data-default\") pod \"openstack-galera-0\" (UID: \"9af784f4-79c9-4422-bc62-a2c49c9bb7cc\") " pod="openstack/openstack-galera-0" Jan 05 20:25:58 crc kubenswrapper[4754]: I0105 20:25:58.665556 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9af784f4-79c9-4422-bc62-a2c49c9bb7cc-kolla-config\") pod \"openstack-galera-0\" (UID: \"9af784f4-79c9-4422-bc62-a2c49c9bb7cc\") " pod="openstack/openstack-galera-0" Jan 05 20:25:58 crc kubenswrapper[4754]: I0105 20:25:58.666374 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9af784f4-79c9-4422-bc62-a2c49c9bb7cc-kolla-config\") pod \"openstack-galera-0\" (UID: \"9af784f4-79c9-4422-bc62-a2c49c9bb7cc\") " pod="openstack/openstack-galera-0" Jan 05 20:25:58 crc kubenswrapper[4754]: I0105 20:25:58.667785 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9af784f4-79c9-4422-bc62-a2c49c9bb7cc-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9af784f4-79c9-4422-bc62-a2c49c9bb7cc\") " pod="openstack/openstack-galera-0" Jan 05 20:25:58 crc kubenswrapper[4754]: I0105 20:25:58.673031 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9af784f4-79c9-4422-bc62-a2c49c9bb7cc-config-data-default\") pod \"openstack-galera-0\" (UID: \"9af784f4-79c9-4422-bc62-a2c49c9bb7cc\") " pod="openstack/openstack-galera-0" Jan 05 20:25:58 crc kubenswrapper[4754]: I0105 20:25:58.680426 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 05 20:25:58 crc kubenswrapper[4754]: I0105 20:25:58.681458 4754 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 05 20:25:58 crc kubenswrapper[4754]: I0105 20:25:58.681489 4754 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-66ab456a-c2cc-4ae1-b8af-1b9de9931209\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-66ab456a-c2cc-4ae1-b8af-1b9de9931209\") pod \"openstack-galera-0\" (UID: \"9af784f4-79c9-4422-bc62-a2c49c9bb7cc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4722e250a6045f3a27b48e30fa47caddea56150abf1b3f856a531d8b4ffc1f9b/globalmount\"" pod="openstack/openstack-galera-0" Jan 05 20:25:58 crc kubenswrapper[4754]: I0105 20:25:58.682089 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9af784f4-79c9-4422-bc62-a2c49c9bb7cc-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9af784f4-79c9-4422-bc62-a2c49c9bb7cc\") " pod="openstack/openstack-galera-0" Jan 05 20:25:58 crc kubenswrapper[4754]: I0105 20:25:58.684429 4754 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9af784f4-79c9-4422-bc62-a2c49c9bb7cc-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9af784f4-79c9-4422-bc62-a2c49c9bb7cc\") " pod="openstack/openstack-galera-0" Jan 05 20:25:58 crc kubenswrapper[4754]: I0105 20:25:58.691056 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9af784f4-79c9-4422-bc62-a2c49c9bb7cc-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9af784f4-79c9-4422-bc62-a2c49c9bb7cc\") " pod="openstack/openstack-galera-0" Jan 05 20:25:58 crc kubenswrapper[4754]: I0105 20:25:58.698853 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmd24\" (UniqueName: \"kubernetes.io/projected/9af784f4-79c9-4422-bc62-a2c49c9bb7cc-kube-api-access-tmd24\") pod \"openstack-galera-0\" (UID: \"9af784f4-79c9-4422-bc62-a2c49c9bb7cc\") " pod="openstack/openstack-galera-0" Jan 05 20:25:58 crc kubenswrapper[4754]: I0105 20:25:58.705125 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 05 20:25:58 crc kubenswrapper[4754]: W0105 20:25:58.768571 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddcd9c022_23d9_484d_bc2b_dddf74e5e3f9.slice/crio-c0f682775338f75427619b23aca1a7442be4cb48fc973b09076885d051d840a3 WatchSource:0}: Error finding container c0f682775338f75427619b23aca1a7442be4cb48fc973b09076885d051d840a3: Status 404 returned error can't find the container with id c0f682775338f75427619b23aca1a7442be4cb48fc973b09076885d051d840a3 Jan 05 20:25:58 crc kubenswrapper[4754]: I0105 20:25:58.771060 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-66ab456a-c2cc-4ae1-b8af-1b9de9931209\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-66ab456a-c2cc-4ae1-b8af-1b9de9931209\") pod \"openstack-galera-0\" (UID: \"9af784f4-79c9-4422-bc62-a2c49c9bb7cc\") " pod="openstack/openstack-galera-0" Jan 05 20:25:58 crc kubenswrapper[4754]: I0105 20:25:58.780852 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 05 20:25:58 crc kubenswrapper[4754]: W0105 20:25:58.830582 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod909351bf_3608_40e6_9f93_bffa1ed74945.slice/crio-6fb20690086c22ffc422e7cea87d901e41e2a6385a933e90ef1fbb724ba6a7ad WatchSource:0}: Error finding container 6fb20690086c22ffc422e7cea87d901e41e2a6385a933e90ef1fbb724ba6a7ad: Status 404 returned error can't find the container with id 6fb20690086c22ffc422e7cea87d901e41e2a6385a933e90ef1fbb724ba6a7ad Jan 05 20:25:59 crc kubenswrapper[4754]: I0105 20:25:59.334764 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"909351bf-3608-40e6-9f93-bffa1ed74945","Type":"ContainerStarted","Data":"6fb20690086c22ffc422e7cea87d901e41e2a6385a933e90ef1fbb724ba6a7ad"} Jan 05 20:25:59 crc kubenswrapper[4754]: I0105 20:25:59.336579 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"79eddb76-2d9c-40cc-97e7-6c186950168c","Type":"ContainerStarted","Data":"5a4be78a9f38f0affd5d6693b41466cd4534b932e6f4d08e41c14ef0c26916be"} Jan 05 20:25:59 crc kubenswrapper[4754]: I0105 20:25:59.338599 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9","Type":"ContainerStarted","Data":"c0f682775338f75427619b23aca1a7442be4cb48fc973b09076885d051d840a3"} Jan 05 20:25:59 crc kubenswrapper[4754]: I0105 20:25:59.359280 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 
05 20:25:59 crc kubenswrapper[4754]: I0105 20:25:59.733407 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 05 20:25:59 crc kubenswrapper[4754]: I0105 20:25:59.735821 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 05 20:25:59 crc kubenswrapper[4754]: I0105 20:25:59.749044 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 05 20:25:59 crc kubenswrapper[4754]: I0105 20:25:59.751758 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-lbd2f" Jan 05 20:25:59 crc kubenswrapper[4754]: I0105 20:25:59.752004 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 05 20:25:59 crc kubenswrapper[4754]: I0105 20:25:59.755762 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 05 20:25:59 crc kubenswrapper[4754]: I0105 20:25:59.755908 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 05 20:25:59 crc kubenswrapper[4754]: I0105 20:25:59.797041 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/81aaed05-ed65-4414-bf4f-7e5e4cf9966a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"81aaed05-ed65-4414-bf4f-7e5e4cf9966a\") " pod="openstack/openstack-cell1-galera-0" Jan 05 20:25:59 crc kubenswrapper[4754]: I0105 20:25:59.797122 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81aaed05-ed65-4414-bf4f-7e5e4cf9966a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"81aaed05-ed65-4414-bf4f-7e5e4cf9966a\") " 
pod="openstack/openstack-cell1-galera-0" Jan 05 20:25:59 crc kubenswrapper[4754]: I0105 20:25:59.797196 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81aaed05-ed65-4414-bf4f-7e5e4cf9966a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"81aaed05-ed65-4414-bf4f-7e5e4cf9966a\") " pod="openstack/openstack-cell1-galera-0" Jan 05 20:25:59 crc kubenswrapper[4754]: I0105 20:25:59.797255 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/81aaed05-ed65-4414-bf4f-7e5e4cf9966a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"81aaed05-ed65-4414-bf4f-7e5e4cf9966a\") " pod="openstack/openstack-cell1-galera-0" Jan 05 20:25:59 crc kubenswrapper[4754]: I0105 20:25:59.797308 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2c51a607-ed50-4e9e-be4d-54051065f6f9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2c51a607-ed50-4e9e-be4d-54051065f6f9\") pod \"openstack-cell1-galera-0\" (UID: \"81aaed05-ed65-4414-bf4f-7e5e4cf9966a\") " pod="openstack/openstack-cell1-galera-0" Jan 05 20:25:59 crc kubenswrapper[4754]: I0105 20:25:59.797347 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h48lg\" (UniqueName: \"kubernetes.io/projected/81aaed05-ed65-4414-bf4f-7e5e4cf9966a-kube-api-access-h48lg\") pod \"openstack-cell1-galera-0\" (UID: \"81aaed05-ed65-4414-bf4f-7e5e4cf9966a\") " pod="openstack/openstack-cell1-galera-0" Jan 05 20:25:59 crc kubenswrapper[4754]: I0105 20:25:59.797399 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/81aaed05-ed65-4414-bf4f-7e5e4cf9966a-kolla-config\") pod 
\"openstack-cell1-galera-0\" (UID: \"81aaed05-ed65-4414-bf4f-7e5e4cf9966a\") " pod="openstack/openstack-cell1-galera-0" Jan 05 20:25:59 crc kubenswrapper[4754]: I0105 20:25:59.797479 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/81aaed05-ed65-4414-bf4f-7e5e4cf9966a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"81aaed05-ed65-4414-bf4f-7e5e4cf9966a\") " pod="openstack/openstack-cell1-galera-0" Jan 05 20:25:59 crc kubenswrapper[4754]: I0105 20:25:59.885818 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 05 20:25:59 crc kubenswrapper[4754]: I0105 20:25:59.887367 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 05 20:25:59 crc kubenswrapper[4754]: I0105 20:25:59.892947 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-298mc" Jan 05 20:25:59 crc kubenswrapper[4754]: I0105 20:25:59.893320 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 05 20:25:59 crc kubenswrapper[4754]: I0105 20:25:59.901814 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81aaed05-ed65-4414-bf4f-7e5e4cf9966a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"81aaed05-ed65-4414-bf4f-7e5e4cf9966a\") " pod="openstack/openstack-cell1-galera-0" Jan 05 20:25:59 crc kubenswrapper[4754]: I0105 20:25:59.901998 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/81aaed05-ed65-4414-bf4f-7e5e4cf9966a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"81aaed05-ed65-4414-bf4f-7e5e4cf9966a\") " pod="openstack/openstack-cell1-galera-0" Jan 05 20:25:59 crc 
kubenswrapper[4754]: I0105 20:25:59.902084 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2c51a607-ed50-4e9e-be4d-54051065f6f9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2c51a607-ed50-4e9e-be4d-54051065f6f9\") pod \"openstack-cell1-galera-0\" (UID: \"81aaed05-ed65-4414-bf4f-7e5e4cf9966a\") " pod="openstack/openstack-cell1-galera-0" Jan 05 20:25:59 crc kubenswrapper[4754]: I0105 20:25:59.902170 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h48lg\" (UniqueName: \"kubernetes.io/projected/81aaed05-ed65-4414-bf4f-7e5e4cf9966a-kube-api-access-h48lg\") pod \"openstack-cell1-galera-0\" (UID: \"81aaed05-ed65-4414-bf4f-7e5e4cf9966a\") " pod="openstack/openstack-cell1-galera-0" Jan 05 20:25:59 crc kubenswrapper[4754]: I0105 20:25:59.902255 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/81aaed05-ed65-4414-bf4f-7e5e4cf9966a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"81aaed05-ed65-4414-bf4f-7e5e4cf9966a\") " pod="openstack/openstack-cell1-galera-0" Jan 05 20:25:59 crc kubenswrapper[4754]: I0105 20:25:59.902424 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/81aaed05-ed65-4414-bf4f-7e5e4cf9966a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"81aaed05-ed65-4414-bf4f-7e5e4cf9966a\") " pod="openstack/openstack-cell1-galera-0" Jan 05 20:25:59 crc kubenswrapper[4754]: I0105 20:25:59.902608 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/81aaed05-ed65-4414-bf4f-7e5e4cf9966a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"81aaed05-ed65-4414-bf4f-7e5e4cf9966a\") " pod="openstack/openstack-cell1-galera-0" Jan 05 20:25:59 crc kubenswrapper[4754]: I0105 
20:25:59.903033 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81aaed05-ed65-4414-bf4f-7e5e4cf9966a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"81aaed05-ed65-4414-bf4f-7e5e4cf9966a\") " pod="openstack/openstack-cell1-galera-0" Jan 05 20:25:59 crc kubenswrapper[4754]: I0105 20:25:59.907339 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81aaed05-ed65-4414-bf4f-7e5e4cf9966a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"81aaed05-ed65-4414-bf4f-7e5e4cf9966a\") " pod="openstack/openstack-cell1-galera-0" Jan 05 20:25:59 crc kubenswrapper[4754]: I0105 20:25:59.908324 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/81aaed05-ed65-4414-bf4f-7e5e4cf9966a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"81aaed05-ed65-4414-bf4f-7e5e4cf9966a\") " pod="openstack/openstack-cell1-galera-0" Jan 05 20:25:59 crc kubenswrapper[4754]: I0105 20:25:59.909780 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/81aaed05-ed65-4414-bf4f-7e5e4cf9966a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"81aaed05-ed65-4414-bf4f-7e5e4cf9966a\") " pod="openstack/openstack-cell1-galera-0" Jan 05 20:25:59 crc kubenswrapper[4754]: I0105 20:25:59.925847 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/81aaed05-ed65-4414-bf4f-7e5e4cf9966a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"81aaed05-ed65-4414-bf4f-7e5e4cf9966a\") " pod="openstack/openstack-cell1-galera-0" Jan 05 20:25:59 crc kubenswrapper[4754]: I0105 20:25:59.925933 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data-default\" (UniqueName: \"kubernetes.io/configmap/81aaed05-ed65-4414-bf4f-7e5e4cf9966a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"81aaed05-ed65-4414-bf4f-7e5e4cf9966a\") " pod="openstack/openstack-cell1-galera-0" Jan 05 20:25:59 crc kubenswrapper[4754]: I0105 20:25:59.931975 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 05 20:25:59 crc kubenswrapper[4754]: I0105 20:25:59.963505 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h48lg\" (UniqueName: \"kubernetes.io/projected/81aaed05-ed65-4414-bf4f-7e5e4cf9966a-kube-api-access-h48lg\") pod \"openstack-cell1-galera-0\" (UID: \"81aaed05-ed65-4414-bf4f-7e5e4cf9966a\") " pod="openstack/openstack-cell1-galera-0" Jan 05 20:25:59 crc kubenswrapper[4754]: I0105 20:25:59.973053 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81aaed05-ed65-4414-bf4f-7e5e4cf9966a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"81aaed05-ed65-4414-bf4f-7e5e4cf9966a\") " pod="openstack/openstack-cell1-galera-0" Jan 05 20:25:59 crc kubenswrapper[4754]: I0105 20:25:59.987946 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 05 20:26:00 crc kubenswrapper[4754]: I0105 20:26:00.004688 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0a4f96f-678d-42ec-8302-6a27a4477941-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d0a4f96f-678d-42ec-8302-6a27a4477941\") " pod="openstack/memcached-0" Jan 05 20:26:00 crc kubenswrapper[4754]: I0105 20:26:00.005037 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0a4f96f-678d-42ec-8302-6a27a4477941-memcached-tls-certs\") pod 
\"memcached-0\" (UID: \"d0a4f96f-678d-42ec-8302-6a27a4477941\") " pod="openstack/memcached-0" Jan 05 20:26:00 crc kubenswrapper[4754]: I0105 20:26:00.005139 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d0a4f96f-678d-42ec-8302-6a27a4477941-config-data\") pod \"memcached-0\" (UID: \"d0a4f96f-678d-42ec-8302-6a27a4477941\") " pod="openstack/memcached-0" Jan 05 20:26:00 crc kubenswrapper[4754]: I0105 20:26:00.005344 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvmm7\" (UniqueName: \"kubernetes.io/projected/d0a4f96f-678d-42ec-8302-6a27a4477941-kube-api-access-jvmm7\") pod \"memcached-0\" (UID: \"d0a4f96f-678d-42ec-8302-6a27a4477941\") " pod="openstack/memcached-0" Jan 05 20:26:00 crc kubenswrapper[4754]: I0105 20:26:00.005468 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d0a4f96f-678d-42ec-8302-6a27a4477941-kolla-config\") pod \"memcached-0\" (UID: \"d0a4f96f-678d-42ec-8302-6a27a4477941\") " pod="openstack/memcached-0" Jan 05 20:26:00 crc kubenswrapper[4754]: I0105 20:26:00.021462 4754 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 05 20:26:00 crc kubenswrapper[4754]: I0105 20:26:00.021710 4754 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2c51a607-ed50-4e9e-be4d-54051065f6f9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2c51a607-ed50-4e9e-be4d-54051065f6f9\") pod \"openstack-cell1-galera-0\" (UID: \"81aaed05-ed65-4414-bf4f-7e5e4cf9966a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/10b7cfa66ad3370c2e225e5ec555998a14fa1c7afcd0b195970178e454978f53/globalmount\"" pod="openstack/openstack-cell1-galera-0" Jan 05 20:26:00 crc kubenswrapper[4754]: I0105 20:26:00.114200 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0a4f96f-678d-42ec-8302-6a27a4477941-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d0a4f96f-678d-42ec-8302-6a27a4477941\") " pod="openstack/memcached-0" Jan 05 20:26:00 crc kubenswrapper[4754]: I0105 20:26:00.114279 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0a4f96f-678d-42ec-8302-6a27a4477941-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d0a4f96f-678d-42ec-8302-6a27a4477941\") " pod="openstack/memcached-0" Jan 05 20:26:00 crc kubenswrapper[4754]: I0105 20:26:00.114329 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d0a4f96f-678d-42ec-8302-6a27a4477941-config-data\") pod \"memcached-0\" (UID: \"d0a4f96f-678d-42ec-8302-6a27a4477941\") " pod="openstack/memcached-0" Jan 05 20:26:00 crc kubenswrapper[4754]: I0105 20:26:00.114439 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvmm7\" (UniqueName: \"kubernetes.io/projected/d0a4f96f-678d-42ec-8302-6a27a4477941-kube-api-access-jvmm7\") pod \"memcached-0\" (UID: 
\"d0a4f96f-678d-42ec-8302-6a27a4477941\") " pod="openstack/memcached-0" Jan 05 20:26:00 crc kubenswrapper[4754]: I0105 20:26:00.114467 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d0a4f96f-678d-42ec-8302-6a27a4477941-kolla-config\") pod \"memcached-0\" (UID: \"d0a4f96f-678d-42ec-8302-6a27a4477941\") " pod="openstack/memcached-0" Jan 05 20:26:00 crc kubenswrapper[4754]: I0105 20:26:00.115218 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d0a4f96f-678d-42ec-8302-6a27a4477941-kolla-config\") pod \"memcached-0\" (UID: \"d0a4f96f-678d-42ec-8302-6a27a4477941\") " pod="openstack/memcached-0" Jan 05 20:26:00 crc kubenswrapper[4754]: I0105 20:26:00.115347 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d0a4f96f-678d-42ec-8302-6a27a4477941-config-data\") pod \"memcached-0\" (UID: \"d0a4f96f-678d-42ec-8302-6a27a4477941\") " pod="openstack/memcached-0" Jan 05 20:26:00 crc kubenswrapper[4754]: I0105 20:26:00.118160 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0a4f96f-678d-42ec-8302-6a27a4477941-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d0a4f96f-678d-42ec-8302-6a27a4477941\") " pod="openstack/memcached-0" Jan 05 20:26:00 crc kubenswrapper[4754]: I0105 20:26:00.118907 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0a4f96f-678d-42ec-8302-6a27a4477941-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d0a4f96f-678d-42ec-8302-6a27a4477941\") " pod="openstack/memcached-0" Jan 05 20:26:00 crc kubenswrapper[4754]: I0105 20:26:00.147920 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvmm7\" (UniqueName: 
\"kubernetes.io/projected/d0a4f96f-678d-42ec-8302-6a27a4477941-kube-api-access-jvmm7\") pod \"memcached-0\" (UID: \"d0a4f96f-678d-42ec-8302-6a27a4477941\") " pod="openstack/memcached-0" Jan 05 20:26:00 crc kubenswrapper[4754]: I0105 20:26:00.168374 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2c51a607-ed50-4e9e-be4d-54051065f6f9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2c51a607-ed50-4e9e-be4d-54051065f6f9\") pod \"openstack-cell1-galera-0\" (UID: \"81aaed05-ed65-4414-bf4f-7e5e4cf9966a\") " pod="openstack/openstack-cell1-galera-0" Jan 05 20:26:00 crc kubenswrapper[4754]: I0105 20:26:00.340436 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 05 20:26:00 crc kubenswrapper[4754]: I0105 20:26:00.386539 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 05 20:26:01 crc kubenswrapper[4754]: I0105 20:26:01.792867 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 05 20:26:01 crc kubenswrapper[4754]: I0105 20:26:01.794555 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 05 20:26:01 crc kubenswrapper[4754]: I0105 20:26:01.812984 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 05 20:26:01 crc kubenswrapper[4754]: I0105 20:26:01.819157 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-xz59b" Jan 05 20:26:01 crc kubenswrapper[4754]: I0105 20:26:01.879470 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwwqz\" (UniqueName: \"kubernetes.io/projected/b658733a-9c58-4287-82a3-b49d21e53e53-kube-api-access-lwwqz\") pod \"kube-state-metrics-0\" (UID: \"b658733a-9c58-4287-82a3-b49d21e53e53\") " pod="openstack/kube-state-metrics-0" Jan 05 20:26:01 crc kubenswrapper[4754]: I0105 20:26:01.983842 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwwqz\" (UniqueName: \"kubernetes.io/projected/b658733a-9c58-4287-82a3-b49d21e53e53-kube-api-access-lwwqz\") pod \"kube-state-metrics-0\" (UID: \"b658733a-9c58-4287-82a3-b49d21e53e53\") " pod="openstack/kube-state-metrics-0" Jan 05 20:26:02 crc kubenswrapper[4754]: I0105 20:26:02.041270 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwwqz\" (UniqueName: \"kubernetes.io/projected/b658733a-9c58-4287-82a3-b49d21e53e53-kube-api-access-lwwqz\") pod \"kube-state-metrics-0\" (UID: \"b658733a-9c58-4287-82a3-b49d21e53e53\") " pod="openstack/kube-state-metrics-0" Jan 05 20:26:02 crc kubenswrapper[4754]: I0105 20:26:02.181367 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 05 20:26:02 crc kubenswrapper[4754]: I0105 20:26:02.771720 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-gf6gw"] Jan 05 20:26:02 crc kubenswrapper[4754]: I0105 20:26:02.775099 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-gf6gw" Jan 05 20:26:02 crc kubenswrapper[4754]: I0105 20:26:02.784969 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-jf667" Jan 05 20:26:02 crc kubenswrapper[4754]: I0105 20:26:02.786155 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-gf6gw"] Jan 05 20:26:02 crc kubenswrapper[4754]: I0105 20:26:02.797743 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Jan 05 20:26:02 crc kubenswrapper[4754]: I0105 20:26:02.849199 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gljzz\" (UniqueName: \"kubernetes.io/projected/845fcf94-25fb-4f7e-b4a1-d3afa86f2f6d-kube-api-access-gljzz\") pod \"observability-ui-dashboards-66cbf594b5-gf6gw\" (UID: \"845fcf94-25fb-4f7e-b4a1-d3afa86f2f6d\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-gf6gw" Jan 05 20:26:02 crc kubenswrapper[4754]: I0105 20:26:02.849267 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/845fcf94-25fb-4f7e-b4a1-d3afa86f2f6d-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-gf6gw\" (UID: \"845fcf94-25fb-4f7e-b4a1-d3afa86f2f6d\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-gf6gw" Jan 05 20:26:02 crc kubenswrapper[4754]: I0105 
20:26:02.958600 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gljzz\" (UniqueName: \"kubernetes.io/projected/845fcf94-25fb-4f7e-b4a1-d3afa86f2f6d-kube-api-access-gljzz\") pod \"observability-ui-dashboards-66cbf594b5-gf6gw\" (UID: \"845fcf94-25fb-4f7e-b4a1-d3afa86f2f6d\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-gf6gw" Jan 05 20:26:02 crc kubenswrapper[4754]: I0105 20:26:02.958666 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/845fcf94-25fb-4f7e-b4a1-d3afa86f2f6d-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-gf6gw\" (UID: \"845fcf94-25fb-4f7e-b4a1-d3afa86f2f6d\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-gf6gw" Jan 05 20:26:02 crc kubenswrapper[4754]: I0105 20:26:02.985149 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gljzz\" (UniqueName: \"kubernetes.io/projected/845fcf94-25fb-4f7e-b4a1-d3afa86f2f6d-kube-api-access-gljzz\") pod \"observability-ui-dashboards-66cbf594b5-gf6gw\" (UID: \"845fcf94-25fb-4f7e-b4a1-d3afa86f2f6d\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-gf6gw" Jan 05 20:26:02 crc kubenswrapper[4754]: I0105 20:26:02.986250 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/845fcf94-25fb-4f7e-b4a1-d3afa86f2f6d-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-gf6gw\" (UID: \"845fcf94-25fb-4f7e-b4a1-d3afa86f2f6d\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-gf6gw" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.168183 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6f8c5cd5c9-2vcvq"] Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.178118 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6f8c5cd5c9-2vcvq" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.184092 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6f8c5cd5c9-2vcvq"] Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.186171 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-gf6gw" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.269809 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2dbt\" (UniqueName: \"kubernetes.io/projected/b9e545f1-ae96-4643-9412-10ada69b1b72-kube-api-access-r2dbt\") pod \"console-6f8c5cd5c9-2vcvq\" (UID: \"b9e545f1-ae96-4643-9412-10ada69b1b72\") " pod="openshift-console/console-6f8c5cd5c9-2vcvq" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.269876 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b9e545f1-ae96-4643-9412-10ada69b1b72-console-oauth-config\") pod \"console-6f8c5cd5c9-2vcvq\" (UID: \"b9e545f1-ae96-4643-9412-10ada69b1b72\") " pod="openshift-console/console-6f8c5cd5c9-2vcvq" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.269934 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9e545f1-ae96-4643-9412-10ada69b1b72-trusted-ca-bundle\") pod \"console-6f8c5cd5c9-2vcvq\" (UID: \"b9e545f1-ae96-4643-9412-10ada69b1b72\") " pod="openshift-console/console-6f8c5cd5c9-2vcvq" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.269956 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b9e545f1-ae96-4643-9412-10ada69b1b72-console-config\") pod 
\"console-6f8c5cd5c9-2vcvq\" (UID: \"b9e545f1-ae96-4643-9412-10ada69b1b72\") " pod="openshift-console/console-6f8c5cd5c9-2vcvq" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.269993 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b9e545f1-ae96-4643-9412-10ada69b1b72-service-ca\") pod \"console-6f8c5cd5c9-2vcvq\" (UID: \"b9e545f1-ae96-4643-9412-10ada69b1b72\") " pod="openshift-console/console-6f8c5cd5c9-2vcvq" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.270036 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b9e545f1-ae96-4643-9412-10ada69b1b72-oauth-serving-cert\") pod \"console-6f8c5cd5c9-2vcvq\" (UID: \"b9e545f1-ae96-4643-9412-10ada69b1b72\") " pod="openshift-console/console-6f8c5cd5c9-2vcvq" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.270060 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b9e545f1-ae96-4643-9412-10ada69b1b72-console-serving-cert\") pod \"console-6f8c5cd5c9-2vcvq\" (UID: \"b9e545f1-ae96-4643-9412-10ada69b1b72\") " pod="openshift-console/console-6f8c5cd5c9-2vcvq" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.373957 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b9e545f1-ae96-4643-9412-10ada69b1b72-console-config\") pod \"console-6f8c5cd5c9-2vcvq\" (UID: \"b9e545f1-ae96-4643-9412-10ada69b1b72\") " pod="openshift-console/console-6f8c5cd5c9-2vcvq" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.374058 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b9e545f1-ae96-4643-9412-10ada69b1b72-service-ca\") 
pod \"console-6f8c5cd5c9-2vcvq\" (UID: \"b9e545f1-ae96-4643-9412-10ada69b1b72\") " pod="openshift-console/console-6f8c5cd5c9-2vcvq" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.374754 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b9e545f1-ae96-4643-9412-10ada69b1b72-oauth-serving-cert\") pod \"console-6f8c5cd5c9-2vcvq\" (UID: \"b9e545f1-ae96-4643-9412-10ada69b1b72\") " pod="openshift-console/console-6f8c5cd5c9-2vcvq" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.374800 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b9e545f1-ae96-4643-9412-10ada69b1b72-console-serving-cert\") pod \"console-6f8c5cd5c9-2vcvq\" (UID: \"b9e545f1-ae96-4643-9412-10ada69b1b72\") " pod="openshift-console/console-6f8c5cd5c9-2vcvq" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.374924 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2dbt\" (UniqueName: \"kubernetes.io/projected/b9e545f1-ae96-4643-9412-10ada69b1b72-kube-api-access-r2dbt\") pod \"console-6f8c5cd5c9-2vcvq\" (UID: \"b9e545f1-ae96-4643-9412-10ada69b1b72\") " pod="openshift-console/console-6f8c5cd5c9-2vcvq" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.374965 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b9e545f1-ae96-4643-9412-10ada69b1b72-console-oauth-config\") pod \"console-6f8c5cd5c9-2vcvq\" (UID: \"b9e545f1-ae96-4643-9412-10ada69b1b72\") " pod="openshift-console/console-6f8c5cd5c9-2vcvq" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.375025 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9e545f1-ae96-4643-9412-10ada69b1b72-trusted-ca-bundle\") pod 
\"console-6f8c5cd5c9-2vcvq\" (UID: \"b9e545f1-ae96-4643-9412-10ada69b1b72\") " pod="openshift-console/console-6f8c5cd5c9-2vcvq" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.375327 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b9e545f1-ae96-4643-9412-10ada69b1b72-console-config\") pod \"console-6f8c5cd5c9-2vcvq\" (UID: \"b9e545f1-ae96-4643-9412-10ada69b1b72\") " pod="openshift-console/console-6f8c5cd5c9-2vcvq" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.375883 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b9e545f1-ae96-4643-9412-10ada69b1b72-service-ca\") pod \"console-6f8c5cd5c9-2vcvq\" (UID: \"b9e545f1-ae96-4643-9412-10ada69b1b72\") " pod="openshift-console/console-6f8c5cd5c9-2vcvq" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.376313 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b9e545f1-ae96-4643-9412-10ada69b1b72-oauth-serving-cert\") pod \"console-6f8c5cd5c9-2vcvq\" (UID: \"b9e545f1-ae96-4643-9412-10ada69b1b72\") " pod="openshift-console/console-6f8c5cd5c9-2vcvq" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.380526 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9e545f1-ae96-4643-9412-10ada69b1b72-trusted-ca-bundle\") pod \"console-6f8c5cd5c9-2vcvq\" (UID: \"b9e545f1-ae96-4643-9412-10ada69b1b72\") " pod="openshift-console/console-6f8c5cd5c9-2vcvq" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.389259 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b9e545f1-ae96-4643-9412-10ada69b1b72-console-oauth-config\") pod \"console-6f8c5cd5c9-2vcvq\" (UID: \"b9e545f1-ae96-4643-9412-10ada69b1b72\") " 
pod="openshift-console/console-6f8c5cd5c9-2vcvq" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.395367 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2dbt\" (UniqueName: \"kubernetes.io/projected/b9e545f1-ae96-4643-9412-10ada69b1b72-kube-api-access-r2dbt\") pod \"console-6f8c5cd5c9-2vcvq\" (UID: \"b9e545f1-ae96-4643-9412-10ada69b1b72\") " pod="openshift-console/console-6f8c5cd5c9-2vcvq" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.403767 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b9e545f1-ae96-4643-9412-10ada69b1b72-console-serving-cert\") pod \"console-6f8c5cd5c9-2vcvq\" (UID: \"b9e545f1-ae96-4643-9412-10ada69b1b72\") " pod="openshift-console/console-6f8c5cd5c9-2vcvq" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.412392 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.417328 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.420586 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-w785j" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.420969 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.421087 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.422351 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.421123 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.421141 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.421275 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.422741 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.433977 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.580935 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f7bfbab4-49c1-450e-a0e7-0f90a5fd342c-config-out\") pod 
\"prometheus-metric-storage-0\" (UID: \"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.581005 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/f7bfbab4-49c1-450e-a0e7-0f90a5fd342c-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.581052 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f7bfbab4-49c1-450e-a0e7-0f90a5fd342c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.581080 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tv69\" (UniqueName: \"kubernetes.io/projected/f7bfbab4-49c1-450e-a0e7-0f90a5fd342c-kube-api-access-7tv69\") pod \"prometheus-metric-storage-0\" (UID: \"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.581121 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/f7bfbab4-49c1-450e-a0e7-0f90a5fd342c-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.581151 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"pvc-d7044f3d-5c83-4c10-a579-3b80348451fd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d7044f3d-5c83-4c10-a579-3b80348451fd\") pod \"prometheus-metric-storage-0\" (UID: \"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.581174 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f7bfbab4-49c1-450e-a0e7-0f90a5fd342c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.581225 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f7bfbab4-49c1-450e-a0e7-0f90a5fd342c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.581307 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f7bfbab4-49c1-450e-a0e7-0f90a5fd342c-config\") pod \"prometheus-metric-storage-0\" (UID: \"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.581330 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f7bfbab4-49c1-450e-a0e7-0f90a5fd342c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:26:03 crc 
kubenswrapper[4754]: I0105 20:26:03.606189 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6f8c5cd5c9-2vcvq" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.685157 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f7bfbab4-49c1-450e-a0e7-0f90a5fd342c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.685522 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f7bfbab4-49c1-450e-a0e7-0f90a5fd342c-config\") pod \"prometheus-metric-storage-0\" (UID: \"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.685609 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f7bfbab4-49c1-450e-a0e7-0f90a5fd342c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.685740 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f7bfbab4-49c1-450e-a0e7-0f90a5fd342c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.685836 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: 
\"kubernetes.io/configmap/f7bfbab4-49c1-450e-a0e7-0f90a5fd342c-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.685906 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f7bfbab4-49c1-450e-a0e7-0f90a5fd342c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.685982 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tv69\" (UniqueName: \"kubernetes.io/projected/f7bfbab4-49c1-450e-a0e7-0f90a5fd342c-kube-api-access-7tv69\") pod \"prometheus-metric-storage-0\" (UID: \"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.686101 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/f7bfbab4-49c1-450e-a0e7-0f90a5fd342c-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.686187 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d7044f3d-5c83-4c10-a579-3b80348451fd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d7044f3d-5c83-4c10-a579-3b80348451fd\") pod \"prometheus-metric-storage-0\" (UID: \"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.686260 4754 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f7bfbab4-49c1-450e-a0e7-0f90a5fd342c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.687319 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f7bfbab4-49c1-450e-a0e7-0f90a5fd342c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.687333 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/f7bfbab4-49c1-450e-a0e7-0f90a5fd342c-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.693958 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f7bfbab4-49c1-450e-a0e7-0f90a5fd342c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.695527 4754 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.696083 4754 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d7044f3d-5c83-4c10-a579-3b80348451fd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d7044f3d-5c83-4c10-a579-3b80348451fd\") pod \"prometheus-metric-storage-0\" (UID: \"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/22e5596610b592b729e0b507ea995333fbeb2dec5ee98d2efedb79ca7d9a4cc2/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.700415 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f7bfbab4-49c1-450e-a0e7-0f90a5fd342c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.702447 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f7bfbab4-49c1-450e-a0e7-0f90a5fd342c-config\") pod \"prometheus-metric-storage-0\" (UID: \"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.702724 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f7bfbab4-49c1-450e-a0e7-0f90a5fd342c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.708911 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/f7bfbab4-49c1-450e-a0e7-0f90a5fd342c-prometheus-metric-storage-rulefiles-2\") 
pod \"prometheus-metric-storage-0\" (UID: \"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.720237 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f7bfbab4-49c1-450e-a0e7-0f90a5fd342c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.735083 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tv69\" (UniqueName: \"kubernetes.io/projected/f7bfbab4-49c1-450e-a0e7-0f90a5fd342c-kube-api-access-7tv69\") pod \"prometheus-metric-storage-0\" (UID: \"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.797886 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d7044f3d-5c83-4c10-a579-3b80348451fd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d7044f3d-5c83-4c10-a579-3b80348451fd\") pod \"prometheus-metric-storage-0\" (UID: \"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:26:03 crc kubenswrapper[4754]: I0105 20:26:03.842781 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 05 20:26:05 crc kubenswrapper[4754]: I0105 20:26:05.966447 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 05 20:26:05 crc kubenswrapper[4754]: I0105 20:26:05.969457 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 05 20:26:05 crc kubenswrapper[4754]: I0105 20:26:05.978087 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 05 20:26:05 crc kubenswrapper[4754]: I0105 20:26:05.978607 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-7x28n" Jan 05 20:26:05 crc kubenswrapper[4754]: I0105 20:26:05.978839 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 05 20:26:05 crc kubenswrapper[4754]: I0105 20:26:05.978952 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 05 20:26:05 crc kubenswrapper[4754]: I0105 20:26:05.979210 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 05 20:26:05 crc kubenswrapper[4754]: I0105 20:26:05.980886 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.060675 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-pbz4n"] Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.062720 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-pbz4n" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.065491 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-qcwmq" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.065828 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.068467 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.079633 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-psnm7"] Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.085430 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-psnm7" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.095281 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-pbz4n"] Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.114960 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-psnm7"] Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.171041 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/43b2550d-6f62-42ff-8b14-20d95a9a4652-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"43b2550d-6f62-42ff-8b14-20d95a9a4652\") " pod="openstack/ovsdbserver-nb-0" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.171086 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43b2550d-6f62-42ff-8b14-20d95a9a4652-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"43b2550d-6f62-42ff-8b14-20d95a9a4652\") " 
pod="openstack/ovsdbserver-nb-0" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.171118 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5212dab3-1e5c-48b6-a710-f3551ab2ceaf-var-run-ovn\") pod \"ovn-controller-pbz4n\" (UID: \"5212dab3-1e5c-48b6-a710-f3551ab2ceaf\") " pod="openstack/ovn-controller-pbz4n" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.171140 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5212dab3-1e5c-48b6-a710-f3551ab2ceaf-scripts\") pod \"ovn-controller-pbz4n\" (UID: \"5212dab3-1e5c-48b6-a710-f3551ab2ceaf\") " pod="openstack/ovn-controller-pbz4n" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.171159 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88kk4\" (UniqueName: \"kubernetes.io/projected/5212dab3-1e5c-48b6-a710-f3551ab2ceaf-kube-api-access-88kk4\") pod \"ovn-controller-pbz4n\" (UID: \"5212dab3-1e5c-48b6-a710-f3551ab2ceaf\") " pod="openstack/ovn-controller-pbz4n" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.171183 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/43b2550d-6f62-42ff-8b14-20d95a9a4652-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"43b2550d-6f62-42ff-8b14-20d95a9a4652\") " pod="openstack/ovsdbserver-nb-0" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.171216 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5212dab3-1e5c-48b6-a710-f3551ab2ceaf-var-run\") pod \"ovn-controller-pbz4n\" (UID: \"5212dab3-1e5c-48b6-a710-f3551ab2ceaf\") " pod="openstack/ovn-controller-pbz4n" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 
20:26:06.171286 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86f9t\" (UniqueName: \"kubernetes.io/projected/43b2550d-6f62-42ff-8b14-20d95a9a4652-kube-api-access-86f9t\") pod \"ovsdbserver-nb-0\" (UID: \"43b2550d-6f62-42ff-8b14-20d95a9a4652\") " pod="openstack/ovsdbserver-nb-0" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.171306 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5212dab3-1e5c-48b6-a710-f3551ab2ceaf-var-log-ovn\") pod \"ovn-controller-pbz4n\" (UID: \"5212dab3-1e5c-48b6-a710-f3551ab2ceaf\") " pod="openstack/ovn-controller-pbz4n" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.171325 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/43b2550d-6f62-42ff-8b14-20d95a9a4652-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"43b2550d-6f62-42ff-8b14-20d95a9a4652\") " pod="openstack/ovsdbserver-nb-0" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.171372 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43b2550d-6f62-42ff-8b14-20d95a9a4652-config\") pod \"ovsdbserver-nb-0\" (UID: \"43b2550d-6f62-42ff-8b14-20d95a9a4652\") " pod="openstack/ovsdbserver-nb-0" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.171395 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-49998392-b504-402d-b62f-0ab60162d1ef\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49998392-b504-402d-b62f-0ab60162d1ef\") pod \"ovsdbserver-nb-0\" (UID: \"43b2550d-6f62-42ff-8b14-20d95a9a4652\") " pod="openstack/ovsdbserver-nb-0" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.171423 4754 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5212dab3-1e5c-48b6-a710-f3551ab2ceaf-combined-ca-bundle\") pod \"ovn-controller-pbz4n\" (UID: \"5212dab3-1e5c-48b6-a710-f3551ab2ceaf\") " pod="openstack/ovn-controller-pbz4n" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.171462 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/5212dab3-1e5c-48b6-a710-f3551ab2ceaf-ovn-controller-tls-certs\") pod \"ovn-controller-pbz4n\" (UID: \"5212dab3-1e5c-48b6-a710-f3551ab2ceaf\") " pod="openstack/ovn-controller-pbz4n" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.171485 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/43b2550d-6f62-42ff-8b14-20d95a9a4652-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"43b2550d-6f62-42ff-8b14-20d95a9a4652\") " pod="openstack/ovsdbserver-nb-0" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.273675 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/5212dab3-1e5c-48b6-a710-f3551ab2ceaf-ovn-controller-tls-certs\") pod \"ovn-controller-pbz4n\" (UID: \"5212dab3-1e5c-48b6-a710-f3551ab2ceaf\") " pod="openstack/ovn-controller-pbz4n" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.273747 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/43b2550d-6f62-42ff-8b14-20d95a9a4652-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"43b2550d-6f62-42ff-8b14-20d95a9a4652\") " pod="openstack/ovsdbserver-nb-0" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.273791 4754 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c33325d3-6a5d-4d13-b2c6-ae62a01904df-var-run\") pod \"ovn-controller-ovs-psnm7\" (UID: \"c33325d3-6a5d-4d13-b2c6-ae62a01904df\") " pod="openstack/ovn-controller-ovs-psnm7" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.273826 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/43b2550d-6f62-42ff-8b14-20d95a9a4652-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"43b2550d-6f62-42ff-8b14-20d95a9a4652\") " pod="openstack/ovsdbserver-nb-0" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.273854 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43b2550d-6f62-42ff-8b14-20d95a9a4652-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"43b2550d-6f62-42ff-8b14-20d95a9a4652\") " pod="openstack/ovsdbserver-nb-0" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.273890 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5212dab3-1e5c-48b6-a710-f3551ab2ceaf-var-run-ovn\") pod \"ovn-controller-pbz4n\" (UID: \"5212dab3-1e5c-48b6-a710-f3551ab2ceaf\") " pod="openstack/ovn-controller-pbz4n" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.273911 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5212dab3-1e5c-48b6-a710-f3551ab2ceaf-scripts\") pod \"ovn-controller-pbz4n\" (UID: \"5212dab3-1e5c-48b6-a710-f3551ab2ceaf\") " pod="openstack/ovn-controller-pbz4n" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.273933 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88kk4\" (UniqueName: 
\"kubernetes.io/projected/5212dab3-1e5c-48b6-a710-f3551ab2ceaf-kube-api-access-88kk4\") pod \"ovn-controller-pbz4n\" (UID: \"5212dab3-1e5c-48b6-a710-f3551ab2ceaf\") " pod="openstack/ovn-controller-pbz4n" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.273959 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/43b2550d-6f62-42ff-8b14-20d95a9a4652-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"43b2550d-6f62-42ff-8b14-20d95a9a4652\") " pod="openstack/ovsdbserver-nb-0" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.273984 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/c33325d3-6a5d-4d13-b2c6-ae62a01904df-var-lib\") pod \"ovn-controller-ovs-psnm7\" (UID: \"c33325d3-6a5d-4d13-b2c6-ae62a01904df\") " pod="openstack/ovn-controller-ovs-psnm7" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.274017 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c33325d3-6a5d-4d13-b2c6-ae62a01904df-var-log\") pod \"ovn-controller-ovs-psnm7\" (UID: \"c33325d3-6a5d-4d13-b2c6-ae62a01904df\") " pod="openstack/ovn-controller-ovs-psnm7" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.274041 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5212dab3-1e5c-48b6-a710-f3551ab2ceaf-var-run\") pod \"ovn-controller-pbz4n\" (UID: \"5212dab3-1e5c-48b6-a710-f3551ab2ceaf\") " pod="openstack/ovn-controller-pbz4n" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.274081 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c33325d3-6a5d-4d13-b2c6-ae62a01904df-scripts\") pod \"ovn-controller-ovs-psnm7\" (UID: 
\"c33325d3-6a5d-4d13-b2c6-ae62a01904df\") " pod="openstack/ovn-controller-ovs-psnm7" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.274120 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w79s9\" (UniqueName: \"kubernetes.io/projected/c33325d3-6a5d-4d13-b2c6-ae62a01904df-kube-api-access-w79s9\") pod \"ovn-controller-ovs-psnm7\" (UID: \"c33325d3-6a5d-4d13-b2c6-ae62a01904df\") " pod="openstack/ovn-controller-ovs-psnm7" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.274176 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86f9t\" (UniqueName: \"kubernetes.io/projected/43b2550d-6f62-42ff-8b14-20d95a9a4652-kube-api-access-86f9t\") pod \"ovsdbserver-nb-0\" (UID: \"43b2550d-6f62-42ff-8b14-20d95a9a4652\") " pod="openstack/ovsdbserver-nb-0" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.274199 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5212dab3-1e5c-48b6-a710-f3551ab2ceaf-var-log-ovn\") pod \"ovn-controller-pbz4n\" (UID: \"5212dab3-1e5c-48b6-a710-f3551ab2ceaf\") " pod="openstack/ovn-controller-pbz4n" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.274221 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/43b2550d-6f62-42ff-8b14-20d95a9a4652-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"43b2550d-6f62-42ff-8b14-20d95a9a4652\") " pod="openstack/ovsdbserver-nb-0" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.274255 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/c33325d3-6a5d-4d13-b2c6-ae62a01904df-etc-ovs\") pod \"ovn-controller-ovs-psnm7\" (UID: \"c33325d3-6a5d-4d13-b2c6-ae62a01904df\") " 
pod="openstack/ovn-controller-ovs-psnm7" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.274284 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43b2550d-6f62-42ff-8b14-20d95a9a4652-config\") pod \"ovsdbserver-nb-0\" (UID: \"43b2550d-6f62-42ff-8b14-20d95a9a4652\") " pod="openstack/ovsdbserver-nb-0" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.274313 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-49998392-b504-402d-b62f-0ab60162d1ef\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49998392-b504-402d-b62f-0ab60162d1ef\") pod \"ovsdbserver-nb-0\" (UID: \"43b2550d-6f62-42ff-8b14-20d95a9a4652\") " pod="openstack/ovsdbserver-nb-0" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.274362 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5212dab3-1e5c-48b6-a710-f3551ab2ceaf-combined-ca-bundle\") pod \"ovn-controller-pbz4n\" (UID: \"5212dab3-1e5c-48b6-a710-f3551ab2ceaf\") " pod="openstack/ovn-controller-pbz4n" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.275789 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/43b2550d-6f62-42ff-8b14-20d95a9a4652-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"43b2550d-6f62-42ff-8b14-20d95a9a4652\") " pod="openstack/ovsdbserver-nb-0" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.276930 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5212dab3-1e5c-48b6-a710-f3551ab2ceaf-var-run-ovn\") pod \"ovn-controller-pbz4n\" (UID: \"5212dab3-1e5c-48b6-a710-f3551ab2ceaf\") " pod="openstack/ovn-controller-pbz4n" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.278652 4754 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5212dab3-1e5c-48b6-a710-f3551ab2ceaf-var-log-ovn\") pod \"ovn-controller-pbz4n\" (UID: \"5212dab3-1e5c-48b6-a710-f3551ab2ceaf\") " pod="openstack/ovn-controller-pbz4n" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.278804 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5212dab3-1e5c-48b6-a710-f3551ab2ceaf-var-run\") pod \"ovn-controller-pbz4n\" (UID: \"5212dab3-1e5c-48b6-a710-f3551ab2ceaf\") " pod="openstack/ovn-controller-pbz4n" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.280995 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43b2550d-6f62-42ff-8b14-20d95a9a4652-config\") pod \"ovsdbserver-nb-0\" (UID: \"43b2550d-6f62-42ff-8b14-20d95a9a4652\") " pod="openstack/ovsdbserver-nb-0" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.284358 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43b2550d-6f62-42ff-8b14-20d95a9a4652-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"43b2550d-6f62-42ff-8b14-20d95a9a4652\") " pod="openstack/ovsdbserver-nb-0" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.285950 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5212dab3-1e5c-48b6-a710-f3551ab2ceaf-combined-ca-bundle\") pod \"ovn-controller-pbz4n\" (UID: \"5212dab3-1e5c-48b6-a710-f3551ab2ceaf\") " pod="openstack/ovn-controller-pbz4n" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.286001 4754 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.286036 4754 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-49998392-b504-402d-b62f-0ab60162d1ef\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49998392-b504-402d-b62f-0ab60162d1ef\") pod \"ovsdbserver-nb-0\" (UID: \"43b2550d-6f62-42ff-8b14-20d95a9a4652\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/89106f29e5e4c82e7a67f34aa29efbad0ec88cf5ff28319ebb2ec9847d314b0f/globalmount\"" pod="openstack/ovsdbserver-nb-0" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.286193 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/43b2550d-6f62-42ff-8b14-20d95a9a4652-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"43b2550d-6f62-42ff-8b14-20d95a9a4652\") " pod="openstack/ovsdbserver-nb-0" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.286568 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/43b2550d-6f62-42ff-8b14-20d95a9a4652-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"43b2550d-6f62-42ff-8b14-20d95a9a4652\") " pod="openstack/ovsdbserver-nb-0" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.290974 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5212dab3-1e5c-48b6-a710-f3551ab2ceaf-scripts\") pod \"ovn-controller-pbz4n\" (UID: \"5212dab3-1e5c-48b6-a710-f3551ab2ceaf\") " pod="openstack/ovn-controller-pbz4n" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.294971 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/5212dab3-1e5c-48b6-a710-f3551ab2ceaf-ovn-controller-tls-certs\") pod \"ovn-controller-pbz4n\" (UID: \"5212dab3-1e5c-48b6-a710-f3551ab2ceaf\") " 
pod="openstack/ovn-controller-pbz4n" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.296788 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88kk4\" (UniqueName: \"kubernetes.io/projected/5212dab3-1e5c-48b6-a710-f3551ab2ceaf-kube-api-access-88kk4\") pod \"ovn-controller-pbz4n\" (UID: \"5212dab3-1e5c-48b6-a710-f3551ab2ceaf\") " pod="openstack/ovn-controller-pbz4n" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.300141 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/43b2550d-6f62-42ff-8b14-20d95a9a4652-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"43b2550d-6f62-42ff-8b14-20d95a9a4652\") " pod="openstack/ovsdbserver-nb-0" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.305612 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86f9t\" (UniqueName: \"kubernetes.io/projected/43b2550d-6f62-42ff-8b14-20d95a9a4652-kube-api-access-86f9t\") pod \"ovsdbserver-nb-0\" (UID: \"43b2550d-6f62-42ff-8b14-20d95a9a4652\") " pod="openstack/ovsdbserver-nb-0" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.339709 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-49998392-b504-402d-b62f-0ab60162d1ef\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49998392-b504-402d-b62f-0ab60162d1ef\") pod \"ovsdbserver-nb-0\" (UID: \"43b2550d-6f62-42ff-8b14-20d95a9a4652\") " pod="openstack/ovsdbserver-nb-0" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.376634 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c33325d3-6a5d-4d13-b2c6-ae62a01904df-var-run\") pod \"ovn-controller-ovs-psnm7\" (UID: \"c33325d3-6a5d-4d13-b2c6-ae62a01904df\") " pod="openstack/ovn-controller-ovs-psnm7" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.376704 4754 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/c33325d3-6a5d-4d13-b2c6-ae62a01904df-var-lib\") pod \"ovn-controller-ovs-psnm7\" (UID: \"c33325d3-6a5d-4d13-b2c6-ae62a01904df\") " pod="openstack/ovn-controller-ovs-psnm7" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.376729 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c33325d3-6a5d-4d13-b2c6-ae62a01904df-var-log\") pod \"ovn-controller-ovs-psnm7\" (UID: \"c33325d3-6a5d-4d13-b2c6-ae62a01904df\") " pod="openstack/ovn-controller-ovs-psnm7" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.376757 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c33325d3-6a5d-4d13-b2c6-ae62a01904df-scripts\") pod \"ovn-controller-ovs-psnm7\" (UID: \"c33325d3-6a5d-4d13-b2c6-ae62a01904df\") " pod="openstack/ovn-controller-ovs-psnm7" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.376787 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w79s9\" (UniqueName: \"kubernetes.io/projected/c33325d3-6a5d-4d13-b2c6-ae62a01904df-kube-api-access-w79s9\") pod \"ovn-controller-ovs-psnm7\" (UID: \"c33325d3-6a5d-4d13-b2c6-ae62a01904df\") " pod="openstack/ovn-controller-ovs-psnm7" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.376850 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/c33325d3-6a5d-4d13-b2c6-ae62a01904df-etc-ovs\") pod \"ovn-controller-ovs-psnm7\" (UID: \"c33325d3-6a5d-4d13-b2c6-ae62a01904df\") " pod="openstack/ovn-controller-ovs-psnm7" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.376922 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/c33325d3-6a5d-4d13-b2c6-ae62a01904df-var-run\") pod \"ovn-controller-ovs-psnm7\" (UID: \"c33325d3-6a5d-4d13-b2c6-ae62a01904df\") " pod="openstack/ovn-controller-ovs-psnm7" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.377070 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c33325d3-6a5d-4d13-b2c6-ae62a01904df-var-log\") pod \"ovn-controller-ovs-psnm7\" (UID: \"c33325d3-6a5d-4d13-b2c6-ae62a01904df\") " pod="openstack/ovn-controller-ovs-psnm7" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.377103 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/c33325d3-6a5d-4d13-b2c6-ae62a01904df-etc-ovs\") pod \"ovn-controller-ovs-psnm7\" (UID: \"c33325d3-6a5d-4d13-b2c6-ae62a01904df\") " pod="openstack/ovn-controller-ovs-psnm7" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.377192 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/c33325d3-6a5d-4d13-b2c6-ae62a01904df-var-lib\") pod \"ovn-controller-ovs-psnm7\" (UID: \"c33325d3-6a5d-4d13-b2c6-ae62a01904df\") " pod="openstack/ovn-controller-ovs-psnm7" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.378750 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c33325d3-6a5d-4d13-b2c6-ae62a01904df-scripts\") pod \"ovn-controller-ovs-psnm7\" (UID: \"c33325d3-6a5d-4d13-b2c6-ae62a01904df\") " pod="openstack/ovn-controller-ovs-psnm7" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.387660 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-pbz4n" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.394933 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w79s9\" (UniqueName: \"kubernetes.io/projected/c33325d3-6a5d-4d13-b2c6-ae62a01904df-kube-api-access-w79s9\") pod \"ovn-controller-ovs-psnm7\" (UID: \"c33325d3-6a5d-4d13-b2c6-ae62a01904df\") " pod="openstack/ovn-controller-ovs-psnm7" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.413804 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-psnm7" Jan 05 20:26:06 crc kubenswrapper[4754]: I0105 20:26:06.613801 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 05 20:26:07 crc kubenswrapper[4754]: I0105 20:26:07.481691 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9af784f4-79c9-4422-bc62-a2c49c9bb7cc","Type":"ContainerStarted","Data":"946a6c6342e2d5b466d19110b50e61e1ac30b467ba215cdd0e561414e4eb8031"} Jan 05 20:26:08 crc kubenswrapper[4754]: I0105 20:26:08.975951 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 05 20:26:08 crc kubenswrapper[4754]: I0105 20:26:08.979061 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 05 20:26:08 crc kubenswrapper[4754]: I0105 20:26:08.982503 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 05 20:26:08 crc kubenswrapper[4754]: I0105 20:26:08.987938 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 05 20:26:08 crc kubenswrapper[4754]: I0105 20:26:08.988219 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 05 20:26:08 crc kubenswrapper[4754]: I0105 20:26:08.988956 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-9q2fz" Jan 05 20:26:09 crc kubenswrapper[4754]: I0105 20:26:09.012013 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 05 20:26:09 crc kubenswrapper[4754]: I0105 20:26:09.142633 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1fcb81f8-169f-4df2-8877-e4b3aa4d82a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1fcb81f8-169f-4df2-8877-e4b3aa4d82a8\") pod \"ovsdbserver-sb-0\" (UID: \"f4f6ded3-ca17-4343-a4ee-15df3c64d1c0\") " pod="openstack/ovsdbserver-sb-0" Jan 05 20:26:09 crc kubenswrapper[4754]: I0105 20:26:09.142723 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4f6ded3-ca17-4343-a4ee-15df3c64d1c0-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f4f6ded3-ca17-4343-a4ee-15df3c64d1c0\") " pod="openstack/ovsdbserver-sb-0" Jan 05 20:26:09 crc kubenswrapper[4754]: I0105 20:26:09.142760 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f4f6ded3-ca17-4343-a4ee-15df3c64d1c0-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f4f6ded3-ca17-4343-a4ee-15df3c64d1c0\") " pod="openstack/ovsdbserver-sb-0" Jan 05 20:26:09 crc kubenswrapper[4754]: I0105 20:26:09.142790 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rbx2\" (UniqueName: \"kubernetes.io/projected/f4f6ded3-ca17-4343-a4ee-15df3c64d1c0-kube-api-access-2rbx2\") pod \"ovsdbserver-sb-0\" (UID: \"f4f6ded3-ca17-4343-a4ee-15df3c64d1c0\") " pod="openstack/ovsdbserver-sb-0" Jan 05 20:26:09 crc kubenswrapper[4754]: I0105 20:26:09.142819 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4f6ded3-ca17-4343-a4ee-15df3c64d1c0-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f4f6ded3-ca17-4343-a4ee-15df3c64d1c0\") " pod="openstack/ovsdbserver-sb-0" Jan 05 20:26:09 crc kubenswrapper[4754]: I0105 20:26:09.142894 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4f6ded3-ca17-4343-a4ee-15df3c64d1c0-config\") pod \"ovsdbserver-sb-0\" (UID: \"f4f6ded3-ca17-4343-a4ee-15df3c64d1c0\") " pod="openstack/ovsdbserver-sb-0" Jan 05 20:26:09 crc kubenswrapper[4754]: I0105 20:26:09.142920 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4f6ded3-ca17-4343-a4ee-15df3c64d1c0-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f4f6ded3-ca17-4343-a4ee-15df3c64d1c0\") " pod="openstack/ovsdbserver-sb-0" Jan 05 20:26:09 crc kubenswrapper[4754]: I0105 20:26:09.142941 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f4f6ded3-ca17-4343-a4ee-15df3c64d1c0-ovsdb-rundir\") pod 
\"ovsdbserver-sb-0\" (UID: \"f4f6ded3-ca17-4343-a4ee-15df3c64d1c0\") " pod="openstack/ovsdbserver-sb-0" Jan 05 20:26:09 crc kubenswrapper[4754]: I0105 20:26:09.244981 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4f6ded3-ca17-4343-a4ee-15df3c64d1c0-config\") pod \"ovsdbserver-sb-0\" (UID: \"f4f6ded3-ca17-4343-a4ee-15df3c64d1c0\") " pod="openstack/ovsdbserver-sb-0" Jan 05 20:26:09 crc kubenswrapper[4754]: I0105 20:26:09.245034 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4f6ded3-ca17-4343-a4ee-15df3c64d1c0-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f4f6ded3-ca17-4343-a4ee-15df3c64d1c0\") " pod="openstack/ovsdbserver-sb-0" Jan 05 20:26:09 crc kubenswrapper[4754]: I0105 20:26:09.245055 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f4f6ded3-ca17-4343-a4ee-15df3c64d1c0-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f4f6ded3-ca17-4343-a4ee-15df3c64d1c0\") " pod="openstack/ovsdbserver-sb-0" Jan 05 20:26:09 crc kubenswrapper[4754]: I0105 20:26:09.245149 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1fcb81f8-169f-4df2-8877-e4b3aa4d82a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1fcb81f8-169f-4df2-8877-e4b3aa4d82a8\") pod \"ovsdbserver-sb-0\" (UID: \"f4f6ded3-ca17-4343-a4ee-15df3c64d1c0\") " pod="openstack/ovsdbserver-sb-0" Jan 05 20:26:09 crc kubenswrapper[4754]: I0105 20:26:09.245173 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4f6ded3-ca17-4343-a4ee-15df3c64d1c0-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f4f6ded3-ca17-4343-a4ee-15df3c64d1c0\") " pod="openstack/ovsdbserver-sb-0" Jan 05 20:26:09 crc kubenswrapper[4754]: 
I0105 20:26:09.245196 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4f6ded3-ca17-4343-a4ee-15df3c64d1c0-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f4f6ded3-ca17-4343-a4ee-15df3c64d1c0\") " pod="openstack/ovsdbserver-sb-0" Jan 05 20:26:09 crc kubenswrapper[4754]: I0105 20:26:09.245221 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rbx2\" (UniqueName: \"kubernetes.io/projected/f4f6ded3-ca17-4343-a4ee-15df3c64d1c0-kube-api-access-2rbx2\") pod \"ovsdbserver-sb-0\" (UID: \"f4f6ded3-ca17-4343-a4ee-15df3c64d1c0\") " pod="openstack/ovsdbserver-sb-0" Jan 05 20:26:09 crc kubenswrapper[4754]: I0105 20:26:09.245244 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4f6ded3-ca17-4343-a4ee-15df3c64d1c0-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f4f6ded3-ca17-4343-a4ee-15df3c64d1c0\") " pod="openstack/ovsdbserver-sb-0" Jan 05 20:26:09 crc kubenswrapper[4754]: I0105 20:26:09.247048 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f4f6ded3-ca17-4343-a4ee-15df3c64d1c0-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f4f6ded3-ca17-4343-a4ee-15df3c64d1c0\") " pod="openstack/ovsdbserver-sb-0" Jan 05 20:26:09 crc kubenswrapper[4754]: I0105 20:26:09.247177 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4f6ded3-ca17-4343-a4ee-15df3c64d1c0-config\") pod \"ovsdbserver-sb-0\" (UID: \"f4f6ded3-ca17-4343-a4ee-15df3c64d1c0\") " pod="openstack/ovsdbserver-sb-0" Jan 05 20:26:09 crc kubenswrapper[4754]: I0105 20:26:09.247914 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/f4f6ded3-ca17-4343-a4ee-15df3c64d1c0-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f4f6ded3-ca17-4343-a4ee-15df3c64d1c0\") " pod="openstack/ovsdbserver-sb-0" Jan 05 20:26:09 crc kubenswrapper[4754]: I0105 20:26:09.253627 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4f6ded3-ca17-4343-a4ee-15df3c64d1c0-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f4f6ded3-ca17-4343-a4ee-15df3c64d1c0\") " pod="openstack/ovsdbserver-sb-0" Jan 05 20:26:09 crc kubenswrapper[4754]: I0105 20:26:09.253830 4754 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 05 20:26:09 crc kubenswrapper[4754]: I0105 20:26:09.253877 4754 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1fcb81f8-169f-4df2-8877-e4b3aa4d82a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1fcb81f8-169f-4df2-8877-e4b3aa4d82a8\") pod \"ovsdbserver-sb-0\" (UID: \"f4f6ded3-ca17-4343-a4ee-15df3c64d1c0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9d9489c117c0c0b18a8917942e598be18c1dc202fb90438f1c7bfa81978f6cd1/globalmount\"" pod="openstack/ovsdbserver-sb-0" Jan 05 20:26:09 crc kubenswrapper[4754]: I0105 20:26:09.259651 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4f6ded3-ca17-4343-a4ee-15df3c64d1c0-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f4f6ded3-ca17-4343-a4ee-15df3c64d1c0\") " pod="openstack/ovsdbserver-sb-0" Jan 05 20:26:09 crc kubenswrapper[4754]: I0105 20:26:09.267369 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4f6ded3-ca17-4343-a4ee-15df3c64d1c0-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: 
\"f4f6ded3-ca17-4343-a4ee-15df3c64d1c0\") " pod="openstack/ovsdbserver-sb-0" Jan 05 20:26:09 crc kubenswrapper[4754]: I0105 20:26:09.270335 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rbx2\" (UniqueName: \"kubernetes.io/projected/f4f6ded3-ca17-4343-a4ee-15df3c64d1c0-kube-api-access-2rbx2\") pod \"ovsdbserver-sb-0\" (UID: \"f4f6ded3-ca17-4343-a4ee-15df3c64d1c0\") " pod="openstack/ovsdbserver-sb-0" Jan 05 20:26:09 crc kubenswrapper[4754]: I0105 20:26:09.305602 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1fcb81f8-169f-4df2-8877-e4b3aa4d82a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1fcb81f8-169f-4df2-8877-e4b3aa4d82a8\") pod \"ovsdbserver-sb-0\" (UID: \"f4f6ded3-ca17-4343-a4ee-15df3c64d1c0\") " pod="openstack/ovsdbserver-sb-0" Jan 05 20:26:09 crc kubenswrapper[4754]: I0105 20:26:09.605896 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 05 20:26:22 crc kubenswrapper[4754]: E0105 20:26:22.468085 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 05 20:26:22 crc kubenswrapper[4754]: E0105 20:26:22.469256 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4slht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-rw7pc_openstack(c0c7fcb5-1cdb-4515-b42f-96b7ea257fdb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 05 20:26:22 crc kubenswrapper[4754]: E0105 20:26:22.470892 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-rw7pc" podUID="c0c7fcb5-1cdb-4515-b42f-96b7ea257fdb" Jan 05 20:26:22 crc kubenswrapper[4754]: E0105 20:26:22.524840 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 05 20:26:22 crc kubenswrapper[4754]: E0105 20:26:22.525011 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-khpwm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullP
olicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-w6b8n_openstack(d3723fc2-f882-421a-acce-3cc120ccdf2b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 05 20:26:22 crc kubenswrapper[4754]: E0105 20:26:22.526223 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-w6b8n" podUID="d3723fc2-f882-421a-acce-3cc120ccdf2b" Jan 05 20:26:22 crc kubenswrapper[4754]: E0105 20:26:22.587557 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 05 20:26:22 crc kubenswrapper[4754]: E0105 20:26:22.587724 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m6xg2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-76z9q_openstack(e5d523be-6b64-4c2b-843e-fdcd6310bff4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 05 20:26:22 crc kubenswrapper[4754]: E0105 20:26:22.590310 4754 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-76z9q" podUID="e5d523be-6b64-4c2b-843e-fdcd6310bff4" Jan 05 20:26:22 crc kubenswrapper[4754]: E0105 20:26:22.590366 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 05 20:26:22 crc kubenswrapper[4754]: E0105 20:26:22.590447 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8pxpj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-bz48l_openstack(44ed7873-e455-4135-a0b8-5a66a97d957d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 05 20:26:22 crc kubenswrapper[4754]: E0105 20:26:22.593647 4754 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-bz48l" podUID="44ed7873-e455-4135-a0b8-5a66a97d957d" Jan 05 20:26:22 crc kubenswrapper[4754]: E0105 20:26:22.674767 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-w6b8n" podUID="d3723fc2-f882-421a-acce-3cc120ccdf2b" Jan 05 20:26:22 crc kubenswrapper[4754]: E0105 20:26:22.703108 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-76z9q" podUID="e5d523be-6b64-4c2b-843e-fdcd6310bff4" Jan 05 20:26:23 crc kubenswrapper[4754]: I0105 20:26:23.370690 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-bz48l" Jan 05 20:26:23 crc kubenswrapper[4754]: I0105 20:26:23.434787 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44ed7873-e455-4135-a0b8-5a66a97d957d-config\") pod \"44ed7873-e455-4135-a0b8-5a66a97d957d\" (UID: \"44ed7873-e455-4135-a0b8-5a66a97d957d\") " Jan 05 20:26:23 crc kubenswrapper[4754]: I0105 20:26:23.435185 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44ed7873-e455-4135-a0b8-5a66a97d957d-dns-svc\") pod \"44ed7873-e455-4135-a0b8-5a66a97d957d\" (UID: \"44ed7873-e455-4135-a0b8-5a66a97d957d\") " Jan 05 20:26:23 crc kubenswrapper[4754]: I0105 20:26:23.435229 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pxpj\" (UniqueName: \"kubernetes.io/projected/44ed7873-e455-4135-a0b8-5a66a97d957d-kube-api-access-8pxpj\") pod \"44ed7873-e455-4135-a0b8-5a66a97d957d\" (UID: \"44ed7873-e455-4135-a0b8-5a66a97d957d\") " Jan 05 20:26:23 crc kubenswrapper[4754]: I0105 20:26:23.436130 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44ed7873-e455-4135-a0b8-5a66a97d957d-config" (OuterVolumeSpecName: "config") pod "44ed7873-e455-4135-a0b8-5a66a97d957d" (UID: "44ed7873-e455-4135-a0b8-5a66a97d957d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:26:23 crc kubenswrapper[4754]: I0105 20:26:23.436410 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44ed7873-e455-4135-a0b8-5a66a97d957d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "44ed7873-e455-4135-a0b8-5a66a97d957d" (UID: "44ed7873-e455-4135-a0b8-5a66a97d957d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:26:23 crc kubenswrapper[4754]: I0105 20:26:23.449948 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44ed7873-e455-4135-a0b8-5a66a97d957d-kube-api-access-8pxpj" (OuterVolumeSpecName: "kube-api-access-8pxpj") pod "44ed7873-e455-4135-a0b8-5a66a97d957d" (UID: "44ed7873-e455-4135-a0b8-5a66a97d957d"). InnerVolumeSpecName "kube-api-access-8pxpj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:26:23 crc kubenswrapper[4754]: I0105 20:26:23.460383 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-gf6gw"] Jan 05 20:26:23 crc kubenswrapper[4754]: I0105 20:26:23.494964 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 05 20:26:23 crc kubenswrapper[4754]: I0105 20:26:23.537043 4754 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44ed7873-e455-4135-a0b8-5a66a97d957d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 20:26:23 crc kubenswrapper[4754]: I0105 20:26:23.537073 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pxpj\" (UniqueName: \"kubernetes.io/projected/44ed7873-e455-4135-a0b8-5a66a97d957d-kube-api-access-8pxpj\") on node \"crc\" DevicePath \"\"" Jan 05 20:26:23 crc kubenswrapper[4754]: I0105 20:26:23.537082 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44ed7873-e455-4135-a0b8-5a66a97d957d-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:26:23 crc kubenswrapper[4754]: I0105 20:26:23.675739 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9af784f4-79c9-4422-bc62-a2c49c9bb7cc","Type":"ContainerStarted","Data":"24a9ff4d123d941af34afc265ca373d308bea42491a26a306bc85e54401436ae"} Jan 05 20:26:23 crc kubenswrapper[4754]: 
I0105 20:26:23.679764 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-bz48l" event={"ID":"44ed7873-e455-4135-a0b8-5a66a97d957d","Type":"ContainerDied","Data":"ba29f46b213707ec1251dd6587251e460efd4afb7e6e48d0628f14357abd0e55"} Jan 05 20:26:23 crc kubenswrapper[4754]: I0105 20:26:23.679825 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-bz48l" Jan 05 20:26:24 crc kubenswrapper[4754]: I0105 20:26:24.034837 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-rw7pc" Jan 05 20:26:24 crc kubenswrapper[4754]: I0105 20:26:24.062110 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-bz48l"] Jan 05 20:26:24 crc kubenswrapper[4754]: I0105 20:26:24.068370 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-bz48l"] Jan 05 20:26:24 crc kubenswrapper[4754]: I0105 20:26:24.149281 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0c7fcb5-1cdb-4515-b42f-96b7ea257fdb-config\") pod \"c0c7fcb5-1cdb-4515-b42f-96b7ea257fdb\" (UID: \"c0c7fcb5-1cdb-4515-b42f-96b7ea257fdb\") " Jan 05 20:26:24 crc kubenswrapper[4754]: I0105 20:26:24.149753 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4slht\" (UniqueName: \"kubernetes.io/projected/c0c7fcb5-1cdb-4515-b42f-96b7ea257fdb-kube-api-access-4slht\") pod \"c0c7fcb5-1cdb-4515-b42f-96b7ea257fdb\" (UID: \"c0c7fcb5-1cdb-4515-b42f-96b7ea257fdb\") " Jan 05 20:26:24 crc kubenswrapper[4754]: I0105 20:26:24.153256 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0c7fcb5-1cdb-4515-b42f-96b7ea257fdb-config" (OuterVolumeSpecName: "config") pod "c0c7fcb5-1cdb-4515-b42f-96b7ea257fdb" (UID: 
"c0c7fcb5-1cdb-4515-b42f-96b7ea257fdb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:26:24 crc kubenswrapper[4754]: I0105 20:26:24.163478 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 05 20:26:24 crc kubenswrapper[4754]: I0105 20:26:24.180203 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 05 20:26:24 crc kubenswrapper[4754]: I0105 20:26:24.190148 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-pbz4n"] Jan 05 20:26:24 crc kubenswrapper[4754]: I0105 20:26:24.211685 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 05 20:26:24 crc kubenswrapper[4754]: I0105 20:26:24.263568 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0c7fcb5-1cdb-4515-b42f-96b7ea257fdb-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:26:24 crc kubenswrapper[4754]: I0105 20:26:24.307217 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0c7fcb5-1cdb-4515-b42f-96b7ea257fdb-kube-api-access-4slht" (OuterVolumeSpecName: "kube-api-access-4slht") pod "c0c7fcb5-1cdb-4515-b42f-96b7ea257fdb" (UID: "c0c7fcb5-1cdb-4515-b42f-96b7ea257fdb"). InnerVolumeSpecName "kube-api-access-4slht". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:26:24 crc kubenswrapper[4754]: W0105 20:26:24.316104 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81aaed05_ed65_4414_bf4f_7e5e4cf9966a.slice/crio-c18470cdd99c95417741da32e5640a469c8ecbc166b7dcaa131fd0b4e34b749d WatchSource:0}: Error finding container c18470cdd99c95417741da32e5640a469c8ecbc166b7dcaa131fd0b4e34b749d: Status 404 returned error can't find the container with id c18470cdd99c95417741da32e5640a469c8ecbc166b7dcaa131fd0b4e34b749d Jan 05 20:26:24 crc kubenswrapper[4754]: I0105 20:26:24.343783 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6f8c5cd5c9-2vcvq"] Jan 05 20:26:24 crc kubenswrapper[4754]: I0105 20:26:24.358282 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-psnm7"] Jan 05 20:26:24 crc kubenswrapper[4754]: I0105 20:26:24.365165 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4slht\" (UniqueName: \"kubernetes.io/projected/c0c7fcb5-1cdb-4515-b42f-96b7ea257fdb-kube-api-access-4slht\") on node \"crc\" DevicePath \"\"" Jan 05 20:26:24 crc kubenswrapper[4754]: I0105 20:26:24.547841 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 05 20:26:24 crc kubenswrapper[4754]: I0105 20:26:24.692342 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-gf6gw" event={"ID":"845fcf94-25fb-4f7e-b4a1-d3afa86f2f6d","Type":"ContainerStarted","Data":"39fefa130dfe0611058d018c4e4dabcd811cbcb655415eabf2bc7852cac7bdde"} Jan 05 20:26:24 crc kubenswrapper[4754]: I0105 20:26:24.693667 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pbz4n" 
event={"ID":"5212dab3-1e5c-48b6-a710-f3551ab2ceaf","Type":"ContainerStarted","Data":"71fe7d505a3e0aead4031b19b9cf6746bcbcb12c666118d079a28eb528e6008d"} Jan 05 20:26:24 crc kubenswrapper[4754]: I0105 20:26:24.694833 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b658733a-9c58-4287-82a3-b49d21e53e53","Type":"ContainerStarted","Data":"db47e05fda33d52656ff10879ce95d4215ab265d0395830c2ecae1bf270af35a"} Jan 05 20:26:24 crc kubenswrapper[4754]: I0105 20:26:24.697067 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"d0a4f96f-678d-42ec-8302-6a27a4477941","Type":"ContainerStarted","Data":"7302c18b95bb33f3b0dc5ca343936688f772375128df61c1ceead8fb5d856f94"} Jan 05 20:26:24 crc kubenswrapper[4754]: I0105 20:26:24.698143 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-rw7pc" event={"ID":"c0c7fcb5-1cdb-4515-b42f-96b7ea257fdb","Type":"ContainerDied","Data":"28f9ff51a56c669290588619f4bc22d1e1e59ac50a883969d84aea65901217ac"} Jan 05 20:26:24 crc kubenswrapper[4754]: I0105 20:26:24.698164 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-rw7pc" Jan 05 20:26:24 crc kubenswrapper[4754]: I0105 20:26:24.699724 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"81aaed05-ed65-4414-bf4f-7e5e4cf9966a","Type":"ContainerStarted","Data":"c18470cdd99c95417741da32e5640a469c8ecbc166b7dcaa131fd0b4e34b749d"} Jan 05 20:26:24 crc kubenswrapper[4754]: I0105 20:26:24.700610 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c","Type":"ContainerStarted","Data":"6d29cbfa9e69b8d3bd2ffcddf884ba437fa29214d4d5a522b5fb902771930e8b"} Jan 05 20:26:24 crc kubenswrapper[4754]: W0105 20:26:24.759445 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9e545f1_ae96_4643_9412_10ada69b1b72.slice/crio-e0c9560b89f833a275f36b59ffa6e45c21606aaf0bdaa41601dfa144effc321f WatchSource:0}: Error finding container e0c9560b89f833a275f36b59ffa6e45c21606aaf0bdaa41601dfa144effc321f: Status 404 returned error can't find the container with id e0c9560b89f833a275f36b59ffa6e45c21606aaf0bdaa41601dfa144effc321f Jan 05 20:26:24 crc kubenswrapper[4754]: W0105 20:26:24.763207 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc33325d3_6a5d_4d13_b2c6_ae62a01904df.slice/crio-7ba0ca0d0a1ab71caa6a6404effe802d2be0fb254dd7ee3bf086bc13ab574bed WatchSource:0}: Error finding container 7ba0ca0d0a1ab71caa6a6404effe802d2be0fb254dd7ee3bf086bc13ab574bed: Status 404 returned error can't find the container with id 7ba0ca0d0a1ab71caa6a6404effe802d2be0fb254dd7ee3bf086bc13ab574bed Jan 05 20:26:25 crc kubenswrapper[4754]: I0105 20:26:25.065680 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 05 20:26:25 crc kubenswrapper[4754]: I0105 20:26:25.091333 4754 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-rw7pc"] Jan 05 20:26:25 crc kubenswrapper[4754]: I0105 20:26:25.100324 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-rw7pc"] Jan 05 20:26:25 crc kubenswrapper[4754]: I0105 20:26:25.598854 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44ed7873-e455-4135-a0b8-5a66a97d957d" path="/var/lib/kubelet/pods/44ed7873-e455-4135-a0b8-5a66a97d957d/volumes" Jan 05 20:26:25 crc kubenswrapper[4754]: I0105 20:26:25.599245 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0c7fcb5-1cdb-4515-b42f-96b7ea257fdb" path="/var/lib/kubelet/pods/c0c7fcb5-1cdb-4515-b42f-96b7ea257fdb/volumes" Jan 05 20:26:25 crc kubenswrapper[4754]: I0105 20:26:25.719279 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-psnm7" event={"ID":"c33325d3-6a5d-4d13-b2c6-ae62a01904df","Type":"ContainerStarted","Data":"7ba0ca0d0a1ab71caa6a6404effe802d2be0fb254dd7ee3bf086bc13ab574bed"} Jan 05 20:26:25 crc kubenswrapper[4754]: I0105 20:26:25.733465 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"909351bf-3608-40e6-9f93-bffa1ed74945","Type":"ContainerStarted","Data":"3aef396182a6b55f6003bca4aaf377227a2c801a262a24daa8913c3e1966b602"} Jan 05 20:26:25 crc kubenswrapper[4754]: I0105 20:26:25.735304 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f4f6ded3-ca17-4343-a4ee-15df3c64d1c0","Type":"ContainerStarted","Data":"5cc6e6cd9379690fc2ddbce0e40e6c04b0c5dea1595f3fbd2d8869092f2850f8"} Jan 05 20:26:25 crc kubenswrapper[4754]: I0105 20:26:25.738364 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"43b2550d-6f62-42ff-8b14-20d95a9a4652","Type":"ContainerStarted","Data":"33a5db43b54136b1ed1b50d959d1e2b9d0832f2f824e0ed9a539208bdd5e918f"} Jan 05 20:26:25 
crc kubenswrapper[4754]: I0105 20:26:25.740362 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9","Type":"ContainerStarted","Data":"233156993cda4d7cb82013b273c7957d5d9b756b648ed616dabcd1e40d5d32e1"} Jan 05 20:26:25 crc kubenswrapper[4754]: I0105 20:26:25.746546 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"81aaed05-ed65-4414-bf4f-7e5e4cf9966a","Type":"ContainerStarted","Data":"3a4648f73be8ecd2cef39a686564368b5d9fe3d7bc80d01ed6623e6a1c8b2167"} Jan 05 20:26:25 crc kubenswrapper[4754]: I0105 20:26:25.749348 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6f8c5cd5c9-2vcvq" event={"ID":"b9e545f1-ae96-4643-9412-10ada69b1b72","Type":"ContainerStarted","Data":"dc4a0d7b72db3a8c0e18903dbe8dc5b2edb950dd12411a70d1ca8b0c33575475"} Jan 05 20:26:25 crc kubenswrapper[4754]: I0105 20:26:25.749422 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6f8c5cd5c9-2vcvq" event={"ID":"b9e545f1-ae96-4643-9412-10ada69b1b72","Type":"ContainerStarted","Data":"e0c9560b89f833a275f36b59ffa6e45c21606aaf0bdaa41601dfa144effc321f"} Jan 05 20:26:25 crc kubenswrapper[4754]: I0105 20:26:25.754276 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58","Type":"ContainerStarted","Data":"22582c6983c67f585a15f88e87e0eabd3d51825cd3e01611a2a4963a33af5825"} Jan 05 20:26:25 crc kubenswrapper[4754]: I0105 20:26:25.762876 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"79eddb76-2d9c-40cc-97e7-6c186950168c","Type":"ContainerStarted","Data":"36fc8028157ef4a2ce0b7b8c867c77cbcc311d587be7596e32628c0d80935d81"} Jan 05 20:26:25 crc kubenswrapper[4754]: I0105 20:26:25.888622 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-console/console-6f8c5cd5c9-2vcvq" podStartSLOduration=22.888606255 podStartE2EDuration="22.888606255s" podCreationTimestamp="2026-01-05 20:26:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:26:25.867767659 +0000 UTC m=+1272.576951533" watchObservedRunningTime="2026-01-05 20:26:25.888606255 +0000 UTC m=+1272.597790129" Jan 05 20:26:26 crc kubenswrapper[4754]: I0105 20:26:26.776466 4754 generic.go:334] "Generic (PLEG): container finished" podID="9af784f4-79c9-4422-bc62-a2c49c9bb7cc" containerID="24a9ff4d123d941af34afc265ca373d308bea42491a26a306bc85e54401436ae" exitCode=0 Jan 05 20:26:26 crc kubenswrapper[4754]: I0105 20:26:26.776657 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9af784f4-79c9-4422-bc62-a2c49c9bb7cc","Type":"ContainerDied","Data":"24a9ff4d123d941af34afc265ca373d308bea42491a26a306bc85e54401436ae"} Jan 05 20:26:28 crc kubenswrapper[4754]: I0105 20:26:28.823436 4754 generic.go:334] "Generic (PLEG): container finished" podID="81aaed05-ed65-4414-bf4f-7e5e4cf9966a" containerID="3a4648f73be8ecd2cef39a686564368b5d9fe3d7bc80d01ed6623e6a1c8b2167" exitCode=0 Jan 05 20:26:28 crc kubenswrapper[4754]: I0105 20:26:28.823505 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"81aaed05-ed65-4414-bf4f-7e5e4cf9966a","Type":"ContainerDied","Data":"3a4648f73be8ecd2cef39a686564368b5d9fe3d7bc80d01ed6623e6a1c8b2167"} Jan 05 20:26:29 crc kubenswrapper[4754]: I0105 20:26:29.705083 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-dmwqp"] Jan 05 20:26:29 crc kubenswrapper[4754]: I0105 20:26:29.711016 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-dmwqp" Jan 05 20:26:29 crc kubenswrapper[4754]: I0105 20:26:29.716521 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-dmwqp"] Jan 05 20:26:29 crc kubenswrapper[4754]: I0105 20:26:29.719203 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 05 20:26:29 crc kubenswrapper[4754]: I0105 20:26:29.824959 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1de9457-1e7b-4c70-99f7-1214589d91d9-config\") pod \"ovn-controller-metrics-dmwqp\" (UID: \"d1de9457-1e7b-4c70-99f7-1214589d91d9\") " pod="openstack/ovn-controller-metrics-dmwqp" Jan 05 20:26:29 crc kubenswrapper[4754]: I0105 20:26:29.825016 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d1de9457-1e7b-4c70-99f7-1214589d91d9-ovs-rundir\") pod \"ovn-controller-metrics-dmwqp\" (UID: \"d1de9457-1e7b-4c70-99f7-1214589d91d9\") " pod="openstack/ovn-controller-metrics-dmwqp" Jan 05 20:26:29 crc kubenswrapper[4754]: I0105 20:26:29.825210 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rn44\" (UniqueName: \"kubernetes.io/projected/d1de9457-1e7b-4c70-99f7-1214589d91d9-kube-api-access-4rn44\") pod \"ovn-controller-metrics-dmwqp\" (UID: \"d1de9457-1e7b-4c70-99f7-1214589d91d9\") " pod="openstack/ovn-controller-metrics-dmwqp" Jan 05 20:26:29 crc kubenswrapper[4754]: I0105 20:26:29.825567 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1de9457-1e7b-4c70-99f7-1214589d91d9-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-dmwqp\" (UID: \"d1de9457-1e7b-4c70-99f7-1214589d91d9\") 
" pod="openstack/ovn-controller-metrics-dmwqp" Jan 05 20:26:29 crc kubenswrapper[4754]: I0105 20:26:29.825681 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d1de9457-1e7b-4c70-99f7-1214589d91d9-ovn-rundir\") pod \"ovn-controller-metrics-dmwqp\" (UID: \"d1de9457-1e7b-4c70-99f7-1214589d91d9\") " pod="openstack/ovn-controller-metrics-dmwqp" Jan 05 20:26:29 crc kubenswrapper[4754]: I0105 20:26:29.825726 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1de9457-1e7b-4c70-99f7-1214589d91d9-combined-ca-bundle\") pod \"ovn-controller-metrics-dmwqp\" (UID: \"d1de9457-1e7b-4c70-99f7-1214589d91d9\") " pod="openstack/ovn-controller-metrics-dmwqp" Jan 05 20:26:29 crc kubenswrapper[4754]: I0105 20:26:29.927202 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rn44\" (UniqueName: \"kubernetes.io/projected/d1de9457-1e7b-4c70-99f7-1214589d91d9-kube-api-access-4rn44\") pod \"ovn-controller-metrics-dmwqp\" (UID: \"d1de9457-1e7b-4c70-99f7-1214589d91d9\") " pod="openstack/ovn-controller-metrics-dmwqp" Jan 05 20:26:29 crc kubenswrapper[4754]: I0105 20:26:29.927736 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1de9457-1e7b-4c70-99f7-1214589d91d9-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-dmwqp\" (UID: \"d1de9457-1e7b-4c70-99f7-1214589d91d9\") " pod="openstack/ovn-controller-metrics-dmwqp" Jan 05 20:26:29 crc kubenswrapper[4754]: I0105 20:26:29.927800 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d1de9457-1e7b-4c70-99f7-1214589d91d9-ovn-rundir\") pod \"ovn-controller-metrics-dmwqp\" (UID: 
\"d1de9457-1e7b-4c70-99f7-1214589d91d9\") " pod="openstack/ovn-controller-metrics-dmwqp" Jan 05 20:26:29 crc kubenswrapper[4754]: I0105 20:26:29.927823 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1de9457-1e7b-4c70-99f7-1214589d91d9-combined-ca-bundle\") pod \"ovn-controller-metrics-dmwqp\" (UID: \"d1de9457-1e7b-4c70-99f7-1214589d91d9\") " pod="openstack/ovn-controller-metrics-dmwqp" Jan 05 20:26:29 crc kubenswrapper[4754]: I0105 20:26:29.927882 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1de9457-1e7b-4c70-99f7-1214589d91d9-config\") pod \"ovn-controller-metrics-dmwqp\" (UID: \"d1de9457-1e7b-4c70-99f7-1214589d91d9\") " pod="openstack/ovn-controller-metrics-dmwqp" Jan 05 20:26:29 crc kubenswrapper[4754]: I0105 20:26:29.927920 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d1de9457-1e7b-4c70-99f7-1214589d91d9-ovs-rundir\") pod \"ovn-controller-metrics-dmwqp\" (UID: \"d1de9457-1e7b-4c70-99f7-1214589d91d9\") " pod="openstack/ovn-controller-metrics-dmwqp" Jan 05 20:26:29 crc kubenswrapper[4754]: I0105 20:26:29.928105 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d1de9457-1e7b-4c70-99f7-1214589d91d9-ovn-rundir\") pod \"ovn-controller-metrics-dmwqp\" (UID: \"d1de9457-1e7b-4c70-99f7-1214589d91d9\") " pod="openstack/ovn-controller-metrics-dmwqp" Jan 05 20:26:29 crc kubenswrapper[4754]: I0105 20:26:29.928176 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d1de9457-1e7b-4c70-99f7-1214589d91d9-ovs-rundir\") pod \"ovn-controller-metrics-dmwqp\" (UID: \"d1de9457-1e7b-4c70-99f7-1214589d91d9\") " pod="openstack/ovn-controller-metrics-dmwqp" Jan 05 20:26:29 
crc kubenswrapper[4754]: I0105 20:26:29.928643 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1de9457-1e7b-4c70-99f7-1214589d91d9-config\") pod \"ovn-controller-metrics-dmwqp\" (UID: \"d1de9457-1e7b-4c70-99f7-1214589d91d9\") " pod="openstack/ovn-controller-metrics-dmwqp" Jan 05 20:26:29 crc kubenswrapper[4754]: I0105 20:26:29.933638 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1de9457-1e7b-4c70-99f7-1214589d91d9-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-dmwqp\" (UID: \"d1de9457-1e7b-4c70-99f7-1214589d91d9\") " pod="openstack/ovn-controller-metrics-dmwqp" Jan 05 20:26:29 crc kubenswrapper[4754]: I0105 20:26:29.933820 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1de9457-1e7b-4c70-99f7-1214589d91d9-combined-ca-bundle\") pod \"ovn-controller-metrics-dmwqp\" (UID: \"d1de9457-1e7b-4c70-99f7-1214589d91d9\") " pod="openstack/ovn-controller-metrics-dmwqp" Jan 05 20:26:29 crc kubenswrapper[4754]: I0105 20:26:29.946537 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rn44\" (UniqueName: \"kubernetes.io/projected/d1de9457-1e7b-4c70-99f7-1214589d91d9-kube-api-access-4rn44\") pod \"ovn-controller-metrics-dmwqp\" (UID: \"d1de9457-1e7b-4c70-99f7-1214589d91d9\") " pod="openstack/ovn-controller-metrics-dmwqp" Jan 05 20:26:30 crc kubenswrapper[4754]: I0105 20:26:30.045415 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-dmwqp" Jan 05 20:26:30 crc kubenswrapper[4754]: I0105 20:26:30.255230 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-76z9q"] Jan 05 20:26:30 crc kubenswrapper[4754]: I0105 20:26:30.301113 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-txwml"] Jan 05 20:26:30 crc kubenswrapper[4754]: I0105 20:26:30.302621 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-txwml" Jan 05 20:26:30 crc kubenswrapper[4754]: I0105 20:26:30.335684 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 05 20:26:30 crc kubenswrapper[4754]: I0105 20:26:30.370391 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm6p2\" (UniqueName: \"kubernetes.io/projected/d61b376e-842b-418a-ad71-202da01407ed-kube-api-access-sm6p2\") pod \"dnsmasq-dns-5bf47b49b7-txwml\" (UID: \"d61b376e-842b-418a-ad71-202da01407ed\") " pod="openstack/dnsmasq-dns-5bf47b49b7-txwml" Jan 05 20:26:30 crc kubenswrapper[4754]: I0105 20:26:30.370495 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d61b376e-842b-418a-ad71-202da01407ed-config\") pod \"dnsmasq-dns-5bf47b49b7-txwml\" (UID: \"d61b376e-842b-418a-ad71-202da01407ed\") " pod="openstack/dnsmasq-dns-5bf47b49b7-txwml" Jan 05 20:26:30 crc kubenswrapper[4754]: I0105 20:26:30.370576 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d61b376e-842b-418a-ad71-202da01407ed-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-txwml\" (UID: \"d61b376e-842b-418a-ad71-202da01407ed\") " pod="openstack/dnsmasq-dns-5bf47b49b7-txwml" Jan 05 20:26:30 crc 
kubenswrapper[4754]: I0105 20:26:30.370679 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d61b376e-842b-418a-ad71-202da01407ed-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-txwml\" (UID: \"d61b376e-842b-418a-ad71-202da01407ed\") " pod="openstack/dnsmasq-dns-5bf47b49b7-txwml" Jan 05 20:26:30 crc kubenswrapper[4754]: I0105 20:26:30.383543 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-txwml"] Jan 05 20:26:30 crc kubenswrapper[4754]: I0105 20:26:30.472475 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm6p2\" (UniqueName: \"kubernetes.io/projected/d61b376e-842b-418a-ad71-202da01407ed-kube-api-access-sm6p2\") pod \"dnsmasq-dns-5bf47b49b7-txwml\" (UID: \"d61b376e-842b-418a-ad71-202da01407ed\") " pod="openstack/dnsmasq-dns-5bf47b49b7-txwml" Jan 05 20:26:30 crc kubenswrapper[4754]: I0105 20:26:30.472527 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d61b376e-842b-418a-ad71-202da01407ed-config\") pod \"dnsmasq-dns-5bf47b49b7-txwml\" (UID: \"d61b376e-842b-418a-ad71-202da01407ed\") " pod="openstack/dnsmasq-dns-5bf47b49b7-txwml" Jan 05 20:26:30 crc kubenswrapper[4754]: I0105 20:26:30.472558 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d61b376e-842b-418a-ad71-202da01407ed-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-txwml\" (UID: \"d61b376e-842b-418a-ad71-202da01407ed\") " pod="openstack/dnsmasq-dns-5bf47b49b7-txwml" Jan 05 20:26:30 crc kubenswrapper[4754]: I0105 20:26:30.472598 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d61b376e-842b-418a-ad71-202da01407ed-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-txwml\" (UID: 
\"d61b376e-842b-418a-ad71-202da01407ed\") " pod="openstack/dnsmasq-dns-5bf47b49b7-txwml" Jan 05 20:26:30 crc kubenswrapper[4754]: I0105 20:26:30.473491 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d61b376e-842b-418a-ad71-202da01407ed-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-txwml\" (UID: \"d61b376e-842b-418a-ad71-202da01407ed\") " pod="openstack/dnsmasq-dns-5bf47b49b7-txwml" Jan 05 20:26:30 crc kubenswrapper[4754]: I0105 20:26:30.474435 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d61b376e-842b-418a-ad71-202da01407ed-config\") pod \"dnsmasq-dns-5bf47b49b7-txwml\" (UID: \"d61b376e-842b-418a-ad71-202da01407ed\") " pod="openstack/dnsmasq-dns-5bf47b49b7-txwml" Jan 05 20:26:30 crc kubenswrapper[4754]: I0105 20:26:30.474924 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d61b376e-842b-418a-ad71-202da01407ed-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-txwml\" (UID: \"d61b376e-842b-418a-ad71-202da01407ed\") " pod="openstack/dnsmasq-dns-5bf47b49b7-txwml" Jan 05 20:26:30 crc kubenswrapper[4754]: I0105 20:26:30.487887 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-w6b8n"] Jan 05 20:26:30 crc kubenswrapper[4754]: I0105 20:26:30.520212 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm6p2\" (UniqueName: \"kubernetes.io/projected/d61b376e-842b-418a-ad71-202da01407ed-kube-api-access-sm6p2\") pod \"dnsmasq-dns-5bf47b49b7-txwml\" (UID: \"d61b376e-842b-418a-ad71-202da01407ed\") " pod="openstack/dnsmasq-dns-5bf47b49b7-txwml" Jan 05 20:26:30 crc kubenswrapper[4754]: I0105 20:26:30.590700 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-vrv5l"] Jan 05 20:26:30 crc kubenswrapper[4754]: I0105 20:26:30.611444 4754 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-vrv5l" Jan 05 20:26:30 crc kubenswrapper[4754]: I0105 20:26:30.626895 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 05 20:26:30 crc kubenswrapper[4754]: I0105 20:26:30.654330 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-txwml" Jan 05 20:26:30 crc kubenswrapper[4754]: I0105 20:26:30.654998 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-vrv5l"] Jan 05 20:26:30 crc kubenswrapper[4754]: I0105 20:26:30.723792 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aed02e94-493f-4d8b-b4a1-7c01e4a0c782-dns-svc\") pod \"dnsmasq-dns-8554648995-vrv5l\" (UID: \"aed02e94-493f-4d8b-b4a1-7c01e4a0c782\") " pod="openstack/dnsmasq-dns-8554648995-vrv5l" Jan 05 20:26:30 crc kubenswrapper[4754]: I0105 20:26:30.724137 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aed02e94-493f-4d8b-b4a1-7c01e4a0c782-config\") pod \"dnsmasq-dns-8554648995-vrv5l\" (UID: \"aed02e94-493f-4d8b-b4a1-7c01e4a0c782\") " pod="openstack/dnsmasq-dns-8554648995-vrv5l" Jan 05 20:26:30 crc kubenswrapper[4754]: I0105 20:26:30.724231 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aed02e94-493f-4d8b-b4a1-7c01e4a0c782-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-vrv5l\" (UID: \"aed02e94-493f-4d8b-b4a1-7c01e4a0c782\") " pod="openstack/dnsmasq-dns-8554648995-vrv5l" Jan 05 20:26:30 crc kubenswrapper[4754]: I0105 20:26:30.724315 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aed02e94-493f-4d8b-b4a1-7c01e4a0c782-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-vrv5l\" (UID: \"aed02e94-493f-4d8b-b4a1-7c01e4a0c782\") " pod="openstack/dnsmasq-dns-8554648995-vrv5l" Jan 05 20:26:30 crc kubenswrapper[4754]: I0105 20:26:30.724419 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vzh9\" (UniqueName: \"kubernetes.io/projected/aed02e94-493f-4d8b-b4a1-7c01e4a0c782-kube-api-access-5vzh9\") pod \"dnsmasq-dns-8554648995-vrv5l\" (UID: \"aed02e94-493f-4d8b-b4a1-7c01e4a0c782\") " pod="openstack/dnsmasq-dns-8554648995-vrv5l" Jan 05 20:26:30 crc kubenswrapper[4754]: I0105 20:26:30.826726 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aed02e94-493f-4d8b-b4a1-7c01e4a0c782-dns-svc\") pod \"dnsmasq-dns-8554648995-vrv5l\" (UID: \"aed02e94-493f-4d8b-b4a1-7c01e4a0c782\") " pod="openstack/dnsmasq-dns-8554648995-vrv5l" Jan 05 20:26:30 crc kubenswrapper[4754]: I0105 20:26:30.826780 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aed02e94-493f-4d8b-b4a1-7c01e4a0c782-config\") pod \"dnsmasq-dns-8554648995-vrv5l\" (UID: \"aed02e94-493f-4d8b-b4a1-7c01e4a0c782\") " pod="openstack/dnsmasq-dns-8554648995-vrv5l" Jan 05 20:26:30 crc kubenswrapper[4754]: I0105 20:26:30.826835 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aed02e94-493f-4d8b-b4a1-7c01e4a0c782-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-vrv5l\" (UID: \"aed02e94-493f-4d8b-b4a1-7c01e4a0c782\") " pod="openstack/dnsmasq-dns-8554648995-vrv5l" Jan 05 20:26:30 crc kubenswrapper[4754]: I0105 20:26:30.826882 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/aed02e94-493f-4d8b-b4a1-7c01e4a0c782-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-vrv5l\" (UID: \"aed02e94-493f-4d8b-b4a1-7c01e4a0c782\") " pod="openstack/dnsmasq-dns-8554648995-vrv5l" Jan 05 20:26:30 crc kubenswrapper[4754]: I0105 20:26:30.826950 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vzh9\" (UniqueName: \"kubernetes.io/projected/aed02e94-493f-4d8b-b4a1-7c01e4a0c782-kube-api-access-5vzh9\") pod \"dnsmasq-dns-8554648995-vrv5l\" (UID: \"aed02e94-493f-4d8b-b4a1-7c01e4a0c782\") " pod="openstack/dnsmasq-dns-8554648995-vrv5l" Jan 05 20:26:30 crc kubenswrapper[4754]: I0105 20:26:30.827763 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aed02e94-493f-4d8b-b4a1-7c01e4a0c782-dns-svc\") pod \"dnsmasq-dns-8554648995-vrv5l\" (UID: \"aed02e94-493f-4d8b-b4a1-7c01e4a0c782\") " pod="openstack/dnsmasq-dns-8554648995-vrv5l" Jan 05 20:26:30 crc kubenswrapper[4754]: I0105 20:26:30.828083 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aed02e94-493f-4d8b-b4a1-7c01e4a0c782-config\") pod \"dnsmasq-dns-8554648995-vrv5l\" (UID: \"aed02e94-493f-4d8b-b4a1-7c01e4a0c782\") " pod="openstack/dnsmasq-dns-8554648995-vrv5l" Jan 05 20:26:30 crc kubenswrapper[4754]: I0105 20:26:30.828484 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aed02e94-493f-4d8b-b4a1-7c01e4a0c782-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-vrv5l\" (UID: \"aed02e94-493f-4d8b-b4a1-7c01e4a0c782\") " pod="openstack/dnsmasq-dns-8554648995-vrv5l" Jan 05 20:26:30 crc kubenswrapper[4754]: I0105 20:26:30.828716 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aed02e94-493f-4d8b-b4a1-7c01e4a0c782-ovsdbserver-sb\") pod 
\"dnsmasq-dns-8554648995-vrv5l\" (UID: \"aed02e94-493f-4d8b-b4a1-7c01e4a0c782\") " pod="openstack/dnsmasq-dns-8554648995-vrv5l" Jan 05 20:26:30 crc kubenswrapper[4754]: I0105 20:26:30.883149 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vzh9\" (UniqueName: \"kubernetes.io/projected/aed02e94-493f-4d8b-b4a1-7c01e4a0c782-kube-api-access-5vzh9\") pod \"dnsmasq-dns-8554648995-vrv5l\" (UID: \"aed02e94-493f-4d8b-b4a1-7c01e4a0c782\") " pod="openstack/dnsmasq-dns-8554648995-vrv5l" Jan 05 20:26:31 crc kubenswrapper[4754]: I0105 20:26:31.011867 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-vrv5l" Jan 05 20:26:32 crc kubenswrapper[4754]: I0105 20:26:32.734118 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-76z9q" Jan 05 20:26:32 crc kubenswrapper[4754]: I0105 20:26:32.748525 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-w6b8n" Jan 05 20:26:32 crc kubenswrapper[4754]: I0105 20:26:32.870586 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-76z9q" Jan 05 20:26:32 crc kubenswrapper[4754]: I0105 20:26:32.870567 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-76z9q" event={"ID":"e5d523be-6b64-4c2b-843e-fdcd6310bff4","Type":"ContainerDied","Data":"2051461316af028744e30674c69649b070212ba72a268cc2958b072d84fa953f"} Jan 05 20:26:32 crc kubenswrapper[4754]: I0105 20:26:32.872385 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-w6b8n" event={"ID":"d3723fc2-f882-421a-acce-3cc120ccdf2b","Type":"ContainerDied","Data":"71f6a9703969fe8b4148a1c9e02c5d8b5f4cd5c4e024930b2798454c62092b8b"} Jan 05 20:26:32 crc kubenswrapper[4754]: I0105 20:26:32.872535 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-w6b8n" Jan 05 20:26:32 crc kubenswrapper[4754]: I0105 20:26:32.875453 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3723fc2-f882-421a-acce-3cc120ccdf2b-config\") pod \"d3723fc2-f882-421a-acce-3cc120ccdf2b\" (UID: \"d3723fc2-f882-421a-acce-3cc120ccdf2b\") " Jan 05 20:26:32 crc kubenswrapper[4754]: I0105 20:26:32.875574 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5d523be-6b64-4c2b-843e-fdcd6310bff4-config\") pod \"e5d523be-6b64-4c2b-843e-fdcd6310bff4\" (UID: \"e5d523be-6b64-4c2b-843e-fdcd6310bff4\") " Jan 05 20:26:32 crc kubenswrapper[4754]: I0105 20:26:32.875675 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3723fc2-f882-421a-acce-3cc120ccdf2b-dns-svc\") pod \"d3723fc2-f882-421a-acce-3cc120ccdf2b\" (UID: \"d3723fc2-f882-421a-acce-3cc120ccdf2b\") " Jan 05 20:26:32 crc kubenswrapper[4754]: I0105 20:26:32.875786 4754 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-khpwm\" (UniqueName: \"kubernetes.io/projected/d3723fc2-f882-421a-acce-3cc120ccdf2b-kube-api-access-khpwm\") pod \"d3723fc2-f882-421a-acce-3cc120ccdf2b\" (UID: \"d3723fc2-f882-421a-acce-3cc120ccdf2b\") " Jan 05 20:26:32 crc kubenswrapper[4754]: I0105 20:26:32.876010 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6xg2\" (UniqueName: \"kubernetes.io/projected/e5d523be-6b64-4c2b-843e-fdcd6310bff4-kube-api-access-m6xg2\") pod \"e5d523be-6b64-4c2b-843e-fdcd6310bff4\" (UID: \"e5d523be-6b64-4c2b-843e-fdcd6310bff4\") " Jan 05 20:26:32 crc kubenswrapper[4754]: I0105 20:26:32.876091 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5d523be-6b64-4c2b-843e-fdcd6310bff4-dns-svc\") pod \"e5d523be-6b64-4c2b-843e-fdcd6310bff4\" (UID: \"e5d523be-6b64-4c2b-843e-fdcd6310bff4\") " Jan 05 20:26:32 crc kubenswrapper[4754]: I0105 20:26:32.876036 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3723fc2-f882-421a-acce-3cc120ccdf2b-config" (OuterVolumeSpecName: "config") pod "d3723fc2-f882-421a-acce-3cc120ccdf2b" (UID: "d3723fc2-f882-421a-acce-3cc120ccdf2b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:26:32 crc kubenswrapper[4754]: I0105 20:26:32.876082 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5d523be-6b64-4c2b-843e-fdcd6310bff4-config" (OuterVolumeSpecName: "config") pod "e5d523be-6b64-4c2b-843e-fdcd6310bff4" (UID: "e5d523be-6b64-4c2b-843e-fdcd6310bff4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:26:32 crc kubenswrapper[4754]: I0105 20:26:32.876609 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3723fc2-f882-421a-acce-3cc120ccdf2b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d3723fc2-f882-421a-acce-3cc120ccdf2b" (UID: "d3723fc2-f882-421a-acce-3cc120ccdf2b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:26:32 crc kubenswrapper[4754]: I0105 20:26:32.876672 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5d523be-6b64-4c2b-843e-fdcd6310bff4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e5d523be-6b64-4c2b-843e-fdcd6310bff4" (UID: "e5d523be-6b64-4c2b-843e-fdcd6310bff4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:26:32 crc kubenswrapper[4754]: I0105 20:26:32.876822 4754 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5d523be-6b64-4c2b-843e-fdcd6310bff4-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 20:26:32 crc kubenswrapper[4754]: I0105 20:26:32.876843 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3723fc2-f882-421a-acce-3cc120ccdf2b-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:26:32 crc kubenswrapper[4754]: I0105 20:26:32.876853 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5d523be-6b64-4c2b-843e-fdcd6310bff4-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:26:32 crc kubenswrapper[4754]: I0105 20:26:32.876862 4754 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3723fc2-f882-421a-acce-3cc120ccdf2b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 20:26:32 crc kubenswrapper[4754]: I0105 20:26:32.882245 4754 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5d523be-6b64-4c2b-843e-fdcd6310bff4-kube-api-access-m6xg2" (OuterVolumeSpecName: "kube-api-access-m6xg2") pod "e5d523be-6b64-4c2b-843e-fdcd6310bff4" (UID: "e5d523be-6b64-4c2b-843e-fdcd6310bff4"). InnerVolumeSpecName "kube-api-access-m6xg2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:26:32 crc kubenswrapper[4754]: I0105 20:26:32.882400 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3723fc2-f882-421a-acce-3cc120ccdf2b-kube-api-access-khpwm" (OuterVolumeSpecName: "kube-api-access-khpwm") pod "d3723fc2-f882-421a-acce-3cc120ccdf2b" (UID: "d3723fc2-f882-421a-acce-3cc120ccdf2b"). InnerVolumeSpecName "kube-api-access-khpwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:26:32 crc kubenswrapper[4754]: I0105 20:26:32.979309 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khpwm\" (UniqueName: \"kubernetes.io/projected/d3723fc2-f882-421a-acce-3cc120ccdf2b-kube-api-access-khpwm\") on node \"crc\" DevicePath \"\"" Jan 05 20:26:32 crc kubenswrapper[4754]: I0105 20:26:32.979354 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6xg2\" (UniqueName: \"kubernetes.io/projected/e5d523be-6b64-4c2b-843e-fdcd6310bff4-kube-api-access-m6xg2\") on node \"crc\" DevicePath \"\"" Jan 05 20:26:33 crc kubenswrapper[4754]: I0105 20:26:33.264345 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-76z9q"] Jan 05 20:26:33 crc kubenswrapper[4754]: I0105 20:26:33.277747 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-76z9q"] Jan 05 20:26:33 crc kubenswrapper[4754]: I0105 20:26:33.499403 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-w6b8n"] Jan 05 20:26:33 crc kubenswrapper[4754]: I0105 20:26:33.506321 4754 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-w6b8n"] Jan 05 20:26:33 crc kubenswrapper[4754]: I0105 20:26:33.599056 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3723fc2-f882-421a-acce-3cc120ccdf2b" path="/var/lib/kubelet/pods/d3723fc2-f882-421a-acce-3cc120ccdf2b/volumes" Jan 05 20:26:33 crc kubenswrapper[4754]: I0105 20:26:33.599483 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5d523be-6b64-4c2b-843e-fdcd6310bff4" path="/var/lib/kubelet/pods/e5d523be-6b64-4c2b-843e-fdcd6310bff4/volumes" Jan 05 20:26:33 crc kubenswrapper[4754]: I0105 20:26:33.606908 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6f8c5cd5c9-2vcvq" Jan 05 20:26:33 crc kubenswrapper[4754]: I0105 20:26:33.607076 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6f8c5cd5c9-2vcvq" Jan 05 20:26:33 crc kubenswrapper[4754]: I0105 20:26:33.612094 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6f8c5cd5c9-2vcvq" Jan 05 20:26:33 crc kubenswrapper[4754]: I0105 20:26:33.890464 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6f8c5cd5c9-2vcvq" Jan 05 20:26:33 crc kubenswrapper[4754]: I0105 20:26:33.969051 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6c886d9bd8-wmpb8"] Jan 05 20:26:34 crc kubenswrapper[4754]: I0105 20:26:34.156652 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-vrv5l"] Jan 05 20:26:34 crc kubenswrapper[4754]: I0105 20:26:34.170649 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-dmwqp"] Jan 05 20:26:34 crc kubenswrapper[4754]: I0105 20:26:34.179577 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-txwml"] Jan 05 20:26:34 crc 
kubenswrapper[4754]: W0105 20:26:34.505596 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaed02e94_493f_4d8b_b4a1_7c01e4a0c782.slice/crio-ff96c993808f343338eac29bccf95df8c088a411fada48fe6e9dfb74095d5113 WatchSource:0}: Error finding container ff96c993808f343338eac29bccf95df8c088a411fada48fe6e9dfb74095d5113: Status 404 returned error can't find the container with id ff96c993808f343338eac29bccf95df8c088a411fada48fe6e9dfb74095d5113 Jan 05 20:26:34 crc kubenswrapper[4754]: W0105 20:26:34.526188 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd61b376e_842b_418a_ad71_202da01407ed.slice/crio-e259582c4158e601d312ceed8f0b4d8f273e5b6b328aca26c30ced090d0324a7 WatchSource:0}: Error finding container e259582c4158e601d312ceed8f0b4d8f273e5b6b328aca26c30ced090d0324a7: Status 404 returned error can't find the container with id e259582c4158e601d312ceed8f0b4d8f273e5b6b328aca26c30ced090d0324a7 Jan 05 20:26:34 crc kubenswrapper[4754]: W0105 20:26:34.535475 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1de9457_1e7b_4c70_99f7_1214589d91d9.slice/crio-183ab3953c7097a187b82e6b024b423420abf9b2d61a5c65f47728463f42e9e6 WatchSource:0}: Error finding container 183ab3953c7097a187b82e6b024b423420abf9b2d61a5c65f47728463f42e9e6: Status 404 returned error can't find the container with id 183ab3953c7097a187b82e6b024b423420abf9b2d61a5c65f47728463f42e9e6 Jan 05 20:26:34 crc kubenswrapper[4754]: I0105 20:26:34.903527 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"d0a4f96f-678d-42ec-8302-6a27a4477941","Type":"ContainerStarted","Data":"dc8d5fc1a9596c771377681547e306bf210b84730363c7c69a9b16a29825dc59"} Jan 05 20:26:34 crc kubenswrapper[4754]: I0105 20:26:34.903946 4754 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 05 20:26:34 crc kubenswrapper[4754]: I0105 20:26:34.906678 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-dmwqp" event={"ID":"d1de9457-1e7b-4c70-99f7-1214589d91d9","Type":"ContainerStarted","Data":"183ab3953c7097a187b82e6b024b423420abf9b2d61a5c65f47728463f42e9e6"} Jan 05 20:26:34 crc kubenswrapper[4754]: I0105 20:26:34.911039 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-txwml" event={"ID":"d61b376e-842b-418a-ad71-202da01407ed","Type":"ContainerStarted","Data":"e259582c4158e601d312ceed8f0b4d8f273e5b6b328aca26c30ced090d0324a7"} Jan 05 20:26:34 crc kubenswrapper[4754]: I0105 20:26:34.919451 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"81aaed05-ed65-4414-bf4f-7e5e4cf9966a","Type":"ContainerStarted","Data":"345954f2094fb20a27a1f7d9f7b8c9f4e6fa5c852358fded16bcd8fc04337632"} Jan 05 20:26:34 crc kubenswrapper[4754]: I0105 20:26:34.928731 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=27.079163845 podStartE2EDuration="35.92871364s" podCreationTimestamp="2026-01-05 20:25:59 +0000 UTC" firstStartedPulling="2026-01-05 20:26:24.47118891 +0000 UTC m=+1271.180372784" lastFinishedPulling="2026-01-05 20:26:33.320738705 +0000 UTC m=+1280.029922579" observedRunningTime="2026-01-05 20:26:34.924999893 +0000 UTC m=+1281.634183767" watchObservedRunningTime="2026-01-05 20:26:34.92871364 +0000 UTC m=+1281.637897514" Jan 05 20:26:34 crc kubenswrapper[4754]: I0105 20:26:34.933995 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-gf6gw" event={"ID":"845fcf94-25fb-4f7e-b4a1-d3afa86f2f6d","Type":"ContainerStarted","Data":"10655aa96bf8b8bcf49c6e749f8e58448efdebde0726880dcd1ef8541d329509"} Jan 05 20:26:34 crc kubenswrapper[4754]: 
I0105 20:26:34.952431 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-vrv5l" event={"ID":"aed02e94-493f-4d8b-b4a1-7c01e4a0c782","Type":"ContainerStarted","Data":"ff96c993808f343338eac29bccf95df8c088a411fada48fe6e9dfb74095d5113"} Jan 05 20:26:34 crc kubenswrapper[4754]: I0105 20:26:34.953283 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=36.953265203 podStartE2EDuration="36.953265203s" podCreationTimestamp="2026-01-05 20:25:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:26:34.948604481 +0000 UTC m=+1281.657788355" watchObservedRunningTime="2026-01-05 20:26:34.953265203 +0000 UTC m=+1281.662449067" Jan 05 20:26:34 crc kubenswrapper[4754]: I0105 20:26:34.968623 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-gf6gw" podStartSLOduration=23.382463205 podStartE2EDuration="32.968597464s" podCreationTimestamp="2026-01-05 20:26:02 +0000 UTC" firstStartedPulling="2026-01-05 20:26:23.976106319 +0000 UTC m=+1270.685290233" lastFinishedPulling="2026-01-05 20:26:33.562240618 +0000 UTC m=+1280.271424492" observedRunningTime="2026-01-05 20:26:34.963397638 +0000 UTC m=+1281.672581512" watchObservedRunningTime="2026-01-05 20:26:34.968597464 +0000 UTC m=+1281.677781338" Jan 05 20:26:35 crc kubenswrapper[4754]: I0105 20:26:35.015887 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9af784f4-79c9-4422-bc62-a2c49c9bb7cc","Type":"ContainerStarted","Data":"b083adb065d09c27807217ad4c0fa1c6865f4a1d18e6a5ab19df124734c22948"} Jan 05 20:26:35 crc kubenswrapper[4754]: I0105 20:26:35.066238 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" 
podStartSLOduration=22.736394612 podStartE2EDuration="38.06621805s" podCreationTimestamp="2026-01-05 20:25:57 +0000 UTC" firstStartedPulling="2026-01-05 20:26:07.348143048 +0000 UTC m=+1254.057326922" lastFinishedPulling="2026-01-05 20:26:22.677966486 +0000 UTC m=+1269.387150360" observedRunningTime="2026-01-05 20:26:35.057879071 +0000 UTC m=+1281.767062945" watchObservedRunningTime="2026-01-05 20:26:35.06621805 +0000 UTC m=+1281.775401924" Jan 05 20:26:37 crc kubenswrapper[4754]: I0105 20:26:37.042272 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"43b2550d-6f62-42ff-8b14-20d95a9a4652","Type":"ContainerStarted","Data":"054ba51e93880bcb8a05695e7692ad3b5e2cbc4da5435727a454ed9a9b6ab4c2"} Jan 05 20:26:38 crc kubenswrapper[4754]: I0105 20:26:38.782421 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 05 20:26:38 crc kubenswrapper[4754]: I0105 20:26:38.782722 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 05 20:26:38 crc kubenswrapper[4754]: I0105 20:26:38.821517 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ct9fs"] Jan 05 20:26:38 crc kubenswrapper[4754]: I0105 20:26:38.823970 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ct9fs" Jan 05 20:26:38 crc kubenswrapper[4754]: I0105 20:26:38.833875 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ct9fs"] Jan 05 20:26:38 crc kubenswrapper[4754]: I0105 20:26:38.943949 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9ad3a07-4d41-42c2-afcf-568e6c3b83e2-catalog-content\") pod \"redhat-operators-ct9fs\" (UID: \"a9ad3a07-4d41-42c2-afcf-568e6c3b83e2\") " pod="openshift-marketplace/redhat-operators-ct9fs" Jan 05 20:26:38 crc kubenswrapper[4754]: I0105 20:26:38.944179 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9ad3a07-4d41-42c2-afcf-568e6c3b83e2-utilities\") pod \"redhat-operators-ct9fs\" (UID: \"a9ad3a07-4d41-42c2-afcf-568e6c3b83e2\") " pod="openshift-marketplace/redhat-operators-ct9fs" Jan 05 20:26:38 crc kubenswrapper[4754]: I0105 20:26:38.944225 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxtm9\" (UniqueName: \"kubernetes.io/projected/a9ad3a07-4d41-42c2-afcf-568e6c3b83e2-kube-api-access-bxtm9\") pod \"redhat-operators-ct9fs\" (UID: \"a9ad3a07-4d41-42c2-afcf-568e6c3b83e2\") " pod="openshift-marketplace/redhat-operators-ct9fs" Jan 05 20:26:39 crc kubenswrapper[4754]: I0105 20:26:39.046723 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9ad3a07-4d41-42c2-afcf-568e6c3b83e2-utilities\") pod \"redhat-operators-ct9fs\" (UID: \"a9ad3a07-4d41-42c2-afcf-568e6c3b83e2\") " pod="openshift-marketplace/redhat-operators-ct9fs" Jan 05 20:26:39 crc kubenswrapper[4754]: I0105 20:26:39.046798 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-bxtm9\" (UniqueName: \"kubernetes.io/projected/a9ad3a07-4d41-42c2-afcf-568e6c3b83e2-kube-api-access-bxtm9\") pod \"redhat-operators-ct9fs\" (UID: \"a9ad3a07-4d41-42c2-afcf-568e6c3b83e2\") " pod="openshift-marketplace/redhat-operators-ct9fs" Jan 05 20:26:39 crc kubenswrapper[4754]: I0105 20:26:39.046880 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9ad3a07-4d41-42c2-afcf-568e6c3b83e2-catalog-content\") pod \"redhat-operators-ct9fs\" (UID: \"a9ad3a07-4d41-42c2-afcf-568e6c3b83e2\") " pod="openshift-marketplace/redhat-operators-ct9fs" Jan 05 20:26:39 crc kubenswrapper[4754]: I0105 20:26:39.047214 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9ad3a07-4d41-42c2-afcf-568e6c3b83e2-utilities\") pod \"redhat-operators-ct9fs\" (UID: \"a9ad3a07-4d41-42c2-afcf-568e6c3b83e2\") " pod="openshift-marketplace/redhat-operators-ct9fs" Jan 05 20:26:39 crc kubenswrapper[4754]: I0105 20:26:39.047446 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9ad3a07-4d41-42c2-afcf-568e6c3b83e2-catalog-content\") pod \"redhat-operators-ct9fs\" (UID: \"a9ad3a07-4d41-42c2-afcf-568e6c3b83e2\") " pod="openshift-marketplace/redhat-operators-ct9fs" Jan 05 20:26:39 crc kubenswrapper[4754]: I0105 20:26:39.073499 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxtm9\" (UniqueName: \"kubernetes.io/projected/a9ad3a07-4d41-42c2-afcf-568e6c3b83e2-kube-api-access-bxtm9\") pod \"redhat-operators-ct9fs\" (UID: \"a9ad3a07-4d41-42c2-afcf-568e6c3b83e2\") " pod="openshift-marketplace/redhat-operators-ct9fs" Jan 05 20:26:39 crc kubenswrapper[4754]: I0105 20:26:39.158740 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ct9fs" Jan 05 20:26:39 crc kubenswrapper[4754]: W0105 20:26:39.674886 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9ad3a07_4d41_42c2_afcf_568e6c3b83e2.slice/crio-cb5aea81555d806cc9828e9f6722119c86aab520c29bf358b53f1719b998d915 WatchSource:0}: Error finding container cb5aea81555d806cc9828e9f6722119c86aab520c29bf358b53f1719b998d915: Status 404 returned error can't find the container with id cb5aea81555d806cc9828e9f6722119c86aab520c29bf358b53f1719b998d915 Jan 05 20:26:39 crc kubenswrapper[4754]: I0105 20:26:39.680417 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ct9fs"] Jan 05 20:26:40 crc kubenswrapper[4754]: I0105 20:26:40.071573 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ct9fs" event={"ID":"a9ad3a07-4d41-42c2-afcf-568e6c3b83e2","Type":"ContainerStarted","Data":"cb5aea81555d806cc9828e9f6722119c86aab520c29bf358b53f1719b998d915"} Jan 05 20:26:40 crc kubenswrapper[4754]: I0105 20:26:40.342494 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 05 20:26:40 crc kubenswrapper[4754]: I0105 20:26:40.387619 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 05 20:26:40 crc kubenswrapper[4754]: I0105 20:26:40.388443 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 05 20:26:41 crc kubenswrapper[4754]: I0105 20:26:41.084679 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b658733a-9c58-4287-82a3-b49d21e53e53","Type":"ContainerStarted","Data":"e305d62840e474ad18fcd232bf421ab1fc0fb1ecb254e282bf36129331994891"} Jan 05 20:26:42 crc kubenswrapper[4754]: I0105 20:26:42.116978 4754 
generic.go:334] "Generic (PLEG): container finished" podID="a9ad3a07-4d41-42c2-afcf-568e6c3b83e2" containerID="34d6ce69e979eae21cd0a2feed9806ff1eb9a62782b72da609ee2e88a853df5e" exitCode=0 Jan 05 20:26:42 crc kubenswrapper[4754]: I0105 20:26:42.117381 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ct9fs" event={"ID":"a9ad3a07-4d41-42c2-afcf-568e6c3b83e2","Type":"ContainerDied","Data":"34d6ce69e979eae21cd0a2feed9806ff1eb9a62782b72da609ee2e88a853df5e"} Jan 05 20:26:42 crc kubenswrapper[4754]: I0105 20:26:42.158964 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f4f6ded3-ca17-4343-a4ee-15df3c64d1c0","Type":"ContainerStarted","Data":"88eee32e42dcf18d574bd75024e7697269936faffdfc10116e0974f647df0269"} Jan 05 20:26:42 crc kubenswrapper[4754]: I0105 20:26:42.179248 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-txwml"] Jan 05 20:26:42 crc kubenswrapper[4754]: I0105 20:26:42.210688 4754 generic.go:334] "Generic (PLEG): container finished" podID="d61b376e-842b-418a-ad71-202da01407ed" containerID="3798a5b7c351f60e8e5dc16c1fde8bc92e2c39bb434fce202da659b6704f16bc" exitCode=0 Jan 05 20:26:42 crc kubenswrapper[4754]: I0105 20:26:42.210765 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-txwml" event={"ID":"d61b376e-842b-418a-ad71-202da01407ed","Type":"ContainerDied","Data":"3798a5b7c351f60e8e5dc16c1fde8bc92e2c39bb434fce202da659b6704f16bc"} Jan 05 20:26:42 crc kubenswrapper[4754]: I0105 20:26:42.249106 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-8t74b"] Jan 05 20:26:42 crc kubenswrapper[4754]: I0105 20:26:42.251272 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-8t74b" Jan 05 20:26:42 crc kubenswrapper[4754]: I0105 20:26:42.298074 4754 generic.go:334] "Generic (PLEG): container finished" podID="c33325d3-6a5d-4d13-b2c6-ae62a01904df" containerID="b61047159131ff5929b0e63887a2e44043f22c50b3da88017905eed13952cd01" exitCode=0 Jan 05 20:26:42 crc kubenswrapper[4754]: I0105 20:26:42.299134 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-psnm7" event={"ID":"c33325d3-6a5d-4d13-b2c6-ae62a01904df","Type":"ContainerDied","Data":"b61047159131ff5929b0e63887a2e44043f22c50b3da88017905eed13952cd01"} Jan 05 20:26:42 crc kubenswrapper[4754]: I0105 20:26:42.307412 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-8t74b"] Jan 05 20:26:42 crc kubenswrapper[4754]: I0105 20:26:42.328420 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4983104d-604b-476c-94bf-b92cf0a887b4-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-8t74b\" (UID: \"4983104d-604b-476c-94bf-b92cf0a887b4\") " pod="openstack/dnsmasq-dns-b8fbc5445-8t74b" Jan 05 20:26:42 crc kubenswrapper[4754]: I0105 20:26:42.328460 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4983104d-604b-476c-94bf-b92cf0a887b4-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-8t74b\" (UID: \"4983104d-604b-476c-94bf-b92cf0a887b4\") " pod="openstack/dnsmasq-dns-b8fbc5445-8t74b" Jan 05 20:26:42 crc kubenswrapper[4754]: I0105 20:26:42.328482 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4983104d-604b-476c-94bf-b92cf0a887b4-config\") pod \"dnsmasq-dns-b8fbc5445-8t74b\" (UID: \"4983104d-604b-476c-94bf-b92cf0a887b4\") " pod="openstack/dnsmasq-dns-b8fbc5445-8t74b" Jan 05 20:26:42 
crc kubenswrapper[4754]: I0105 20:26:42.328507 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wcps\" (UniqueName: \"kubernetes.io/projected/4983104d-604b-476c-94bf-b92cf0a887b4-kube-api-access-6wcps\") pod \"dnsmasq-dns-b8fbc5445-8t74b\" (UID: \"4983104d-604b-476c-94bf-b92cf0a887b4\") " pod="openstack/dnsmasq-dns-b8fbc5445-8t74b" Jan 05 20:26:42 crc kubenswrapper[4754]: I0105 20:26:42.328524 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4983104d-604b-476c-94bf-b92cf0a887b4-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-8t74b\" (UID: \"4983104d-604b-476c-94bf-b92cf0a887b4\") " pod="openstack/dnsmasq-dns-b8fbc5445-8t74b" Jan 05 20:26:42 crc kubenswrapper[4754]: I0105 20:26:42.347191 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c","Type":"ContainerStarted","Data":"9290eed6896593eb91d3c212bbfc132ea8a9a99563c4a1ef07f8f1cfa9f58266"} Jan 05 20:26:42 crc kubenswrapper[4754]: I0105 20:26:42.387926 4754 generic.go:334] "Generic (PLEG): container finished" podID="aed02e94-493f-4d8b-b4a1-7c01e4a0c782" containerID="7bb0b705439a84e29d6d07c0a7afbff590533789929fe1928bca1e4341d4449b" exitCode=0 Jan 05 20:26:42 crc kubenswrapper[4754]: I0105 20:26:42.388355 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-vrv5l" event={"ID":"aed02e94-493f-4d8b-b4a1-7c01e4a0c782","Type":"ContainerDied","Data":"7bb0b705439a84e29d6d07c0a7afbff590533789929fe1928bca1e4341d4449b"} Jan 05 20:26:42 crc kubenswrapper[4754]: I0105 20:26:42.430792 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4983104d-604b-476c-94bf-b92cf0a887b4-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-8t74b\" (UID: 
\"4983104d-604b-476c-94bf-b92cf0a887b4\") " pod="openstack/dnsmasq-dns-b8fbc5445-8t74b" Jan 05 20:26:42 crc kubenswrapper[4754]: I0105 20:26:42.431040 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4983104d-604b-476c-94bf-b92cf0a887b4-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-8t74b\" (UID: \"4983104d-604b-476c-94bf-b92cf0a887b4\") " pod="openstack/dnsmasq-dns-b8fbc5445-8t74b" Jan 05 20:26:42 crc kubenswrapper[4754]: I0105 20:26:42.431063 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4983104d-604b-476c-94bf-b92cf0a887b4-config\") pod \"dnsmasq-dns-b8fbc5445-8t74b\" (UID: \"4983104d-604b-476c-94bf-b92cf0a887b4\") " pod="openstack/dnsmasq-dns-b8fbc5445-8t74b" Jan 05 20:26:42 crc kubenswrapper[4754]: I0105 20:26:42.431108 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wcps\" (UniqueName: \"kubernetes.io/projected/4983104d-604b-476c-94bf-b92cf0a887b4-kube-api-access-6wcps\") pod \"dnsmasq-dns-b8fbc5445-8t74b\" (UID: \"4983104d-604b-476c-94bf-b92cf0a887b4\") " pod="openstack/dnsmasq-dns-b8fbc5445-8t74b" Jan 05 20:26:42 crc kubenswrapper[4754]: I0105 20:26:42.431126 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4983104d-604b-476c-94bf-b92cf0a887b4-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-8t74b\" (UID: \"4983104d-604b-476c-94bf-b92cf0a887b4\") " pod="openstack/dnsmasq-dns-b8fbc5445-8t74b" Jan 05 20:26:42 crc kubenswrapper[4754]: I0105 20:26:42.487105 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4983104d-604b-476c-94bf-b92cf0a887b4-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-8t74b\" (UID: \"4983104d-604b-476c-94bf-b92cf0a887b4\") " 
pod="openstack/dnsmasq-dns-b8fbc5445-8t74b" Jan 05 20:26:42 crc kubenswrapper[4754]: I0105 20:26:42.487665 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4983104d-604b-476c-94bf-b92cf0a887b4-config\") pod \"dnsmasq-dns-b8fbc5445-8t74b\" (UID: \"4983104d-604b-476c-94bf-b92cf0a887b4\") " pod="openstack/dnsmasq-dns-b8fbc5445-8t74b" Jan 05 20:26:42 crc kubenswrapper[4754]: I0105 20:26:42.488796 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4983104d-604b-476c-94bf-b92cf0a887b4-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-8t74b\" (UID: \"4983104d-604b-476c-94bf-b92cf0a887b4\") " pod="openstack/dnsmasq-dns-b8fbc5445-8t74b" Jan 05 20:26:42 crc kubenswrapper[4754]: I0105 20:26:42.494567 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pbz4n" event={"ID":"5212dab3-1e5c-48b6-a710-f3551ab2ceaf","Type":"ContainerStarted","Data":"33005db6fefd18e74d743e2dc838ab7777f9d0170124bdf9012918d7626d2b6f"} Jan 05 20:26:42 crc kubenswrapper[4754]: I0105 20:26:42.494649 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 05 20:26:42 crc kubenswrapper[4754]: I0105 20:26:42.498615 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-pbz4n" Jan 05 20:26:42 crc kubenswrapper[4754]: I0105 20:26:42.501615 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4983104d-604b-476c-94bf-b92cf0a887b4-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-8t74b\" (UID: \"4983104d-604b-476c-94bf-b92cf0a887b4\") " pod="openstack/dnsmasq-dns-b8fbc5445-8t74b" Jan 05 20:26:42 crc kubenswrapper[4754]: I0105 20:26:42.528695 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wcps\" (UniqueName: 
\"kubernetes.io/projected/4983104d-604b-476c-94bf-b92cf0a887b4-kube-api-access-6wcps\") pod \"dnsmasq-dns-b8fbc5445-8t74b\" (UID: \"4983104d-604b-476c-94bf-b92cf0a887b4\") " pod="openstack/dnsmasq-dns-b8fbc5445-8t74b" Jan 05 20:26:42 crc kubenswrapper[4754]: I0105 20:26:42.565540 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=30.915468338 podStartE2EDuration="41.565496978s" podCreationTimestamp="2026-01-05 20:26:01 +0000 UTC" firstStartedPulling="2026-01-05 20:26:23.977441594 +0000 UTC m=+1270.686625478" lastFinishedPulling="2026-01-05 20:26:34.627470234 +0000 UTC m=+1281.336654118" observedRunningTime="2026-01-05 20:26:42.543989385 +0000 UTC m=+1289.253173259" watchObservedRunningTime="2026-01-05 20:26:42.565496978 +0000 UTC m=+1289.274680852" Jan 05 20:26:42 crc kubenswrapper[4754]: I0105 20:26:42.576934 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-pbz4n" podStartSLOduration=27.874569775 podStartE2EDuration="36.576917927s" podCreationTimestamp="2026-01-05 20:26:06 +0000 UTC" firstStartedPulling="2026-01-05 20:26:24.617632093 +0000 UTC m=+1271.326815967" lastFinishedPulling="2026-01-05 20:26:33.319980245 +0000 UTC m=+1280.029164119" observedRunningTime="2026-01-05 20:26:42.574132394 +0000 UTC m=+1289.283316268" watchObservedRunningTime="2026-01-05 20:26:42.576917927 +0000 UTC m=+1289.286101801" Jan 05 20:26:42 crc kubenswrapper[4754]: I0105 20:26:42.634413 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-8t74b" Jan 05 20:26:42 crc kubenswrapper[4754]: E0105 20:26:42.879906 4754 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Jan 05 20:26:42 crc kubenswrapper[4754]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/d61b376e-842b-418a-ad71-202da01407ed/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 05 20:26:42 crc kubenswrapper[4754]: > podSandboxID="e259582c4158e601d312ceed8f0b4d8f273e5b6b328aca26c30ced090d0324a7" Jan 05 20:26:42 crc kubenswrapper[4754]: E0105 20:26:42.880497 4754 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 05 20:26:42 crc kubenswrapper[4754]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n58bh65dh95hf6h595hf6hf5h59dh6h57dh558h55ch5dbh5f5h565h5f7h9fh76h58ch54dh84h59bh7fh6bh5b9h59h67fh566h56h5f4h554h58fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sm6p2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5bf47b49b7-txwml_openstack(d61b376e-842b-418a-ad71-202da01407ed): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/d61b376e-842b-418a-ad71-202da01407ed/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 05 20:26:42 crc kubenswrapper[4754]: > logger="UnhandledError" Jan 05 20:26:42 crc kubenswrapper[4754]: E0105 20:26:42.881601 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/d61b376e-842b-418a-ad71-202da01407ed/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-5bf47b49b7-txwml" podUID="d61b376e-842b-418a-ad71-202da01407ed" Jan 05 20:26:42 crc kubenswrapper[4754]: E0105 20:26:42.888623 4754 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Jan 05 20:26:42 crc kubenswrapper[4754]: rpc error: code = Unknown desc = container create failed: mount 
`/var/lib/kubelet/pods/aed02e94-493f-4d8b-b4a1-7c01e4a0c782/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 05 20:26:42 crc kubenswrapper[4754]: > podSandboxID="ff96c993808f343338eac29bccf95df8c088a411fada48fe6e9dfb74095d5113" Jan 05 20:26:42 crc kubenswrapper[4754]: E0105 20:26:42.888798 4754 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 05 20:26:42 crc kubenswrapper[4754]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n654h99h64ch5dbh6dh555h587h64bh5cfh647h5fdh57ch679h9h597h5f5hbch59bh54fh575h566h667h586h5f5h65ch5bch57h68h65ch58bh694h5cfq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,Rec
ursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5vzh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-8554648995-vrv5l_openstack(aed02e94-493f-4d8b-b4a1-7c01e4a0c782): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/aed02e94-493f-4d8b-b4a1-7c01e4a0c782/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 05 20:26:42 crc kubenswrapper[4754]: > logger="UnhandledError" Jan 05 20:26:42 crc kubenswrapper[4754]: E0105 20:26:42.890447 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount 
`/var/lib/kubelet/pods/aed02e94-493f-4d8b-b4a1-7c01e4a0c782/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-8554648995-vrv5l" podUID="aed02e94-493f-4d8b-b4a1-7c01e4a0c782" Jan 05 20:26:43 crc kubenswrapper[4754]: E0105 20:26:43.206259 4754 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.201:55642->38.102.83.201:33767: write tcp 38.102.83.201:55642->38.102.83.201:33767: write: broken pipe Jan 05 20:26:43 crc kubenswrapper[4754]: I0105 20:26:43.382544 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-8t74b"] Jan 05 20:26:43 crc kubenswrapper[4754]: I0105 20:26:43.398467 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 05 20:26:43 crc kubenswrapper[4754]: I0105 20:26:43.409785 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 05 20:26:43 crc kubenswrapper[4754]: I0105 20:26:43.411664 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 05 20:26:43 crc kubenswrapper[4754]: I0105 20:26:43.413962 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-bgs5p" Jan 05 20:26:43 crc kubenswrapper[4754]: I0105 20:26:43.414156 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 05 20:26:43 crc kubenswrapper[4754]: I0105 20:26:43.414267 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 05 20:26:43 crc kubenswrapper[4754]: I0105 20:26:43.427308 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 05 20:26:43 crc kubenswrapper[4754]: I0105 20:26:43.480551 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-psnm7" 
event={"ID":"c33325d3-6a5d-4d13-b2c6-ae62a01904df","Type":"ContainerStarted","Data":"97e739c70ec11e6332d21c5a878bb078f28cde6445ca4f679d8a403da9034445"} Jan 05 20:26:43 crc kubenswrapper[4754]: I0105 20:26:43.482259 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-8t74b" event={"ID":"4983104d-604b-476c-94bf-b92cf0a887b4","Type":"ContainerStarted","Data":"ffa741472627265fc8771cc25a1de2a8bf1f9a3d23702c6c8bd819facf13a112"} Jan 05 20:26:43 crc kubenswrapper[4754]: I0105 20:26:43.485870 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ct9fs" event={"ID":"a9ad3a07-4d41-42c2-afcf-568e6c3b83e2","Type":"ContainerStarted","Data":"40b35701b5376d94d268dc1ff540e21d1a3fdd5ec8b25e94c79478918c128630"} Jan 05 20:26:43 crc kubenswrapper[4754]: I0105 20:26:43.501084 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/64fe8f1d-0c69-4fc8-aac8-c17660c2fed5-cache\") pod \"swift-storage-0\" (UID: \"64fe8f1d-0c69-4fc8-aac8-c17660c2fed5\") " pod="openstack/swift-storage-0" Jan 05 20:26:43 crc kubenswrapper[4754]: I0105 20:26:43.501197 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/64fe8f1d-0c69-4fc8-aac8-c17660c2fed5-etc-swift\") pod \"swift-storage-0\" (UID: \"64fe8f1d-0c69-4fc8-aac8-c17660c2fed5\") " pod="openstack/swift-storage-0" Jan 05 20:26:43 crc kubenswrapper[4754]: I0105 20:26:43.501226 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/64fe8f1d-0c69-4fc8-aac8-c17660c2fed5-lock\") pod \"swift-storage-0\" (UID: \"64fe8f1d-0c69-4fc8-aac8-c17660c2fed5\") " pod="openstack/swift-storage-0" Jan 05 20:26:43 crc kubenswrapper[4754]: I0105 20:26:43.501466 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5807fd55-1c8e-4b97-ab27-9180b6aa20cf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5807fd55-1c8e-4b97-ab27-9180b6aa20cf\") pod \"swift-storage-0\" (UID: \"64fe8f1d-0c69-4fc8-aac8-c17660c2fed5\") " pod="openstack/swift-storage-0" Jan 05 20:26:43 crc kubenswrapper[4754]: I0105 20:26:43.501495 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw6mq\" (UniqueName: \"kubernetes.io/projected/64fe8f1d-0c69-4fc8-aac8-c17660c2fed5-kube-api-access-pw6mq\") pod \"swift-storage-0\" (UID: \"64fe8f1d-0c69-4fc8-aac8-c17660c2fed5\") " pod="openstack/swift-storage-0" Jan 05 20:26:43 crc kubenswrapper[4754]: I0105 20:26:43.614624 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/64fe8f1d-0c69-4fc8-aac8-c17660c2fed5-etc-swift\") pod \"swift-storage-0\" (UID: \"64fe8f1d-0c69-4fc8-aac8-c17660c2fed5\") " pod="openstack/swift-storage-0" Jan 05 20:26:43 crc kubenswrapper[4754]: I0105 20:26:43.614660 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/64fe8f1d-0c69-4fc8-aac8-c17660c2fed5-lock\") pod \"swift-storage-0\" (UID: \"64fe8f1d-0c69-4fc8-aac8-c17660c2fed5\") " pod="openstack/swift-storage-0" Jan 05 20:26:43 crc kubenswrapper[4754]: I0105 20:26:43.615000 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5807fd55-1c8e-4b97-ab27-9180b6aa20cf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5807fd55-1c8e-4b97-ab27-9180b6aa20cf\") pod \"swift-storage-0\" (UID: \"64fe8f1d-0c69-4fc8-aac8-c17660c2fed5\") " pod="openstack/swift-storage-0" Jan 05 20:26:43 crc kubenswrapper[4754]: I0105 20:26:43.615046 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pw6mq\" 
(UniqueName: \"kubernetes.io/projected/64fe8f1d-0c69-4fc8-aac8-c17660c2fed5-kube-api-access-pw6mq\") pod \"swift-storage-0\" (UID: \"64fe8f1d-0c69-4fc8-aac8-c17660c2fed5\") " pod="openstack/swift-storage-0" Jan 05 20:26:43 crc kubenswrapper[4754]: I0105 20:26:43.615209 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/64fe8f1d-0c69-4fc8-aac8-c17660c2fed5-cache\") pod \"swift-storage-0\" (UID: \"64fe8f1d-0c69-4fc8-aac8-c17660c2fed5\") " pod="openstack/swift-storage-0" Jan 05 20:26:43 crc kubenswrapper[4754]: E0105 20:26:43.616191 4754 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 05 20:26:43 crc kubenswrapper[4754]: E0105 20:26:43.616211 4754 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 05 20:26:43 crc kubenswrapper[4754]: E0105 20:26:43.616250 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/64fe8f1d-0c69-4fc8-aac8-c17660c2fed5-etc-swift podName:64fe8f1d-0c69-4fc8-aac8-c17660c2fed5 nodeName:}" failed. No retries permitted until 2026-01-05 20:26:44.116233022 +0000 UTC m=+1290.825416896 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/64fe8f1d-0c69-4fc8-aac8-c17660c2fed5-etc-swift") pod "swift-storage-0" (UID: "64fe8f1d-0c69-4fc8-aac8-c17660c2fed5") : configmap "swift-ring-files" not found Jan 05 20:26:43 crc kubenswrapper[4754]: I0105 20:26:43.616806 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/64fe8f1d-0c69-4fc8-aac8-c17660c2fed5-lock\") pod \"swift-storage-0\" (UID: \"64fe8f1d-0c69-4fc8-aac8-c17660c2fed5\") " pod="openstack/swift-storage-0" Jan 05 20:26:43 crc kubenswrapper[4754]: I0105 20:26:43.622631 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/64fe8f1d-0c69-4fc8-aac8-c17660c2fed5-cache\") pod \"swift-storage-0\" (UID: \"64fe8f1d-0c69-4fc8-aac8-c17660c2fed5\") " pod="openstack/swift-storage-0" Jan 05 20:26:43 crc kubenswrapper[4754]: I0105 20:26:43.630782 4754 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 05 20:26:43 crc kubenswrapper[4754]: I0105 20:26:43.630825 4754 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5807fd55-1c8e-4b97-ab27-9180b6aa20cf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5807fd55-1c8e-4b97-ab27-9180b6aa20cf\") pod \"swift-storage-0\" (UID: \"64fe8f1d-0c69-4fc8-aac8-c17660c2fed5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c1906edc1caf17bdc1812f67b85d00f757ef476e0834edbda6d663d27522edc1/globalmount\"" pod="openstack/swift-storage-0" Jan 05 20:26:43 crc kubenswrapper[4754]: I0105 20:26:43.647152 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pw6mq\" (UniqueName: \"kubernetes.io/projected/64fe8f1d-0c69-4fc8-aac8-c17660c2fed5-kube-api-access-pw6mq\") pod \"swift-storage-0\" (UID: \"64fe8f1d-0c69-4fc8-aac8-c17660c2fed5\") " pod="openstack/swift-storage-0" Jan 05 20:26:43 crc kubenswrapper[4754]: I0105 20:26:43.690698 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5807fd55-1c8e-4b97-ab27-9180b6aa20cf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5807fd55-1c8e-4b97-ab27-9180b6aa20cf\") pod \"swift-storage-0\" (UID: \"64fe8f1d-0c69-4fc8-aac8-c17660c2fed5\") " pod="openstack/swift-storage-0" Jan 05 20:26:44 crc kubenswrapper[4754]: I0105 20:26:44.016993 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-txwml" Jan 05 20:26:44 crc kubenswrapper[4754]: I0105 20:26:44.145442 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d61b376e-842b-418a-ad71-202da01407ed-dns-svc\") pod \"d61b376e-842b-418a-ad71-202da01407ed\" (UID: \"d61b376e-842b-418a-ad71-202da01407ed\") " Jan 05 20:26:44 crc kubenswrapper[4754]: I0105 20:26:44.145514 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d61b376e-842b-418a-ad71-202da01407ed-config\") pod \"d61b376e-842b-418a-ad71-202da01407ed\" (UID: \"d61b376e-842b-418a-ad71-202da01407ed\") " Jan 05 20:26:44 crc kubenswrapper[4754]: I0105 20:26:44.145538 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d61b376e-842b-418a-ad71-202da01407ed-ovsdbserver-nb\") pod \"d61b376e-842b-418a-ad71-202da01407ed\" (UID: \"d61b376e-842b-418a-ad71-202da01407ed\") " Jan 05 20:26:44 crc kubenswrapper[4754]: I0105 20:26:44.145739 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sm6p2\" (UniqueName: \"kubernetes.io/projected/d61b376e-842b-418a-ad71-202da01407ed-kube-api-access-sm6p2\") pod \"d61b376e-842b-418a-ad71-202da01407ed\" (UID: \"d61b376e-842b-418a-ad71-202da01407ed\") " Jan 05 20:26:44 crc kubenswrapper[4754]: I0105 20:26:44.146213 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/64fe8f1d-0c69-4fc8-aac8-c17660c2fed5-etc-swift\") pod \"swift-storage-0\" (UID: \"64fe8f1d-0c69-4fc8-aac8-c17660c2fed5\") " pod="openstack/swift-storage-0" Jan 05 20:26:44 crc kubenswrapper[4754]: E0105 20:26:44.146577 4754 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found 
Jan 05 20:26:44 crc kubenswrapper[4754]: E0105 20:26:44.146592 4754 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 05 20:26:44 crc kubenswrapper[4754]: E0105 20:26:44.146637 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/64fe8f1d-0c69-4fc8-aac8-c17660c2fed5-etc-swift podName:64fe8f1d-0c69-4fc8-aac8-c17660c2fed5 nodeName:}" failed. No retries permitted until 2026-01-05 20:26:45.146621157 +0000 UTC m=+1291.855805031 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/64fe8f1d-0c69-4fc8-aac8-c17660c2fed5-etc-swift") pod "swift-storage-0" (UID: "64fe8f1d-0c69-4fc8-aac8-c17660c2fed5") : configmap "swift-ring-files" not found Jan 05 20:26:44 crc kubenswrapper[4754]: I0105 20:26:44.153504 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d61b376e-842b-418a-ad71-202da01407ed-kube-api-access-sm6p2" (OuterVolumeSpecName: "kube-api-access-sm6p2") pod "d61b376e-842b-418a-ad71-202da01407ed" (UID: "d61b376e-842b-418a-ad71-202da01407ed"). InnerVolumeSpecName "kube-api-access-sm6p2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:26:44 crc kubenswrapper[4754]: I0105 20:26:44.204593 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d61b376e-842b-418a-ad71-202da01407ed-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d61b376e-842b-418a-ad71-202da01407ed" (UID: "d61b376e-842b-418a-ad71-202da01407ed"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:26:44 crc kubenswrapper[4754]: I0105 20:26:44.204962 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d61b376e-842b-418a-ad71-202da01407ed-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d61b376e-842b-418a-ad71-202da01407ed" (UID: "d61b376e-842b-418a-ad71-202da01407ed"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:26:44 crc kubenswrapper[4754]: I0105 20:26:44.215608 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d61b376e-842b-418a-ad71-202da01407ed-config" (OuterVolumeSpecName: "config") pod "d61b376e-842b-418a-ad71-202da01407ed" (UID: "d61b376e-842b-418a-ad71-202da01407ed"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:26:44 crc kubenswrapper[4754]: I0105 20:26:44.248748 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sm6p2\" (UniqueName: \"kubernetes.io/projected/d61b376e-842b-418a-ad71-202da01407ed-kube-api-access-sm6p2\") on node \"crc\" DevicePath \"\"" Jan 05 20:26:44 crc kubenswrapper[4754]: I0105 20:26:44.248782 4754 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d61b376e-842b-418a-ad71-202da01407ed-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 20:26:44 crc kubenswrapper[4754]: I0105 20:26:44.248792 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d61b376e-842b-418a-ad71-202da01407ed-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:26:44 crc kubenswrapper[4754]: I0105 20:26:44.248801 4754 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d61b376e-842b-418a-ad71-202da01407ed-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 05 20:26:44 crc 
kubenswrapper[4754]: I0105 20:26:44.499114 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-psnm7" event={"ID":"c33325d3-6a5d-4d13-b2c6-ae62a01904df","Type":"ContainerStarted","Data":"e8541345f5b48fbf6ccae977379eb2ad029925e31270668e7ba85ac0e08088ac"} Jan 05 20:26:44 crc kubenswrapper[4754]: I0105 20:26:44.499168 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-psnm7" Jan 05 20:26:44 crc kubenswrapper[4754]: I0105 20:26:44.499184 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-psnm7" Jan 05 20:26:44 crc kubenswrapper[4754]: I0105 20:26:44.505947 4754 generic.go:334] "Generic (PLEG): container finished" podID="4983104d-604b-476c-94bf-b92cf0a887b4" containerID="390f00b23e9921442a154e36532f1cc7a6b71f4d7e46a76dc4eb1cd5d9673bb8" exitCode=0 Jan 05 20:26:44 crc kubenswrapper[4754]: I0105 20:26:44.506344 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-8t74b" event={"ID":"4983104d-604b-476c-94bf-b92cf0a887b4","Type":"ContainerDied","Data":"390f00b23e9921442a154e36532f1cc7a6b71f4d7e46a76dc4eb1cd5d9673bb8"} Jan 05 20:26:44 crc kubenswrapper[4754]: I0105 20:26:44.514463 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-vrv5l" event={"ID":"aed02e94-493f-4d8b-b4a1-7c01e4a0c782","Type":"ContainerStarted","Data":"1f47de827fb3f4cde57ec809cebdc3a282468e1e06950b44f4733709fd803bfc"} Jan 05 20:26:44 crc kubenswrapper[4754]: I0105 20:26:44.514809 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-vrv5l" Jan 05 20:26:44 crc kubenswrapper[4754]: I0105 20:26:44.517288 4754 generic.go:334] "Generic (PLEG): container finished" podID="a9ad3a07-4d41-42c2-afcf-568e6c3b83e2" containerID="40b35701b5376d94d268dc1ff540e21d1a3fdd5ec8b25e94c79478918c128630" exitCode=0 Jan 05 20:26:44 crc kubenswrapper[4754]: I0105 
20:26:44.517394 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ct9fs" event={"ID":"a9ad3a07-4d41-42c2-afcf-568e6c3b83e2","Type":"ContainerDied","Data":"40b35701b5376d94d268dc1ff540e21d1a3fdd5ec8b25e94c79478918c128630"} Jan 05 20:26:44 crc kubenswrapper[4754]: I0105 20:26:44.521363 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-txwml" event={"ID":"d61b376e-842b-418a-ad71-202da01407ed","Type":"ContainerDied","Data":"e259582c4158e601d312ceed8f0b4d8f273e5b6b328aca26c30ced090d0324a7"} Jan 05 20:26:44 crc kubenswrapper[4754]: I0105 20:26:44.521406 4754 scope.go:117] "RemoveContainer" containerID="3798a5b7c351f60e8e5dc16c1fde8bc92e2c39bb434fce202da659b6704f16bc" Jan 05 20:26:44 crc kubenswrapper[4754]: I0105 20:26:44.521522 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-txwml" Jan 05 20:26:44 crc kubenswrapper[4754]: I0105 20:26:44.549134 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-psnm7" podStartSLOduration=29.9838244 podStartE2EDuration="38.549118864s" podCreationTimestamp="2026-01-05 20:26:06 +0000 UTC" firstStartedPulling="2026-01-05 20:26:24.765202307 +0000 UTC m=+1271.474386181" lastFinishedPulling="2026-01-05 20:26:33.330496771 +0000 UTC m=+1280.039680645" observedRunningTime="2026-01-05 20:26:44.544814391 +0000 UTC m=+1291.253998265" watchObservedRunningTime="2026-01-05 20:26:44.549118864 +0000 UTC m=+1291.258302738" Jan 05 20:26:44 crc kubenswrapper[4754]: I0105 20:26:44.582520 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 05 20:26:44 crc kubenswrapper[4754]: I0105 20:26:44.621358 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-vrv5l" podStartSLOduration=7.637419237 
podStartE2EDuration="14.621336794s" podCreationTimestamp="2026-01-05 20:26:30 +0000 UTC" firstStartedPulling="2026-01-05 20:26:34.515086762 +0000 UTC m=+1281.224270636" lastFinishedPulling="2026-01-05 20:26:41.499004319 +0000 UTC m=+1288.208188193" observedRunningTime="2026-01-05 20:26:44.615759108 +0000 UTC m=+1291.324942992" watchObservedRunningTime="2026-01-05 20:26:44.621336794 +0000 UTC m=+1291.330520668" Jan 05 20:26:44 crc kubenswrapper[4754]: I0105 20:26:44.771111 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-txwml"] Jan 05 20:26:44 crc kubenswrapper[4754]: I0105 20:26:44.776042 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-txwml"] Jan 05 20:26:45 crc kubenswrapper[4754]: I0105 20:26:45.007678 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 05 20:26:45 crc kubenswrapper[4754]: I0105 20:26:45.187316 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/64fe8f1d-0c69-4fc8-aac8-c17660c2fed5-etc-swift\") pod \"swift-storage-0\" (UID: \"64fe8f1d-0c69-4fc8-aac8-c17660c2fed5\") " pod="openstack/swift-storage-0" Jan 05 20:26:45 crc kubenswrapper[4754]: E0105 20:26:45.187574 4754 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 05 20:26:45 crc kubenswrapper[4754]: E0105 20:26:45.187598 4754 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 05 20:26:45 crc kubenswrapper[4754]: E0105 20:26:45.187643 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/64fe8f1d-0c69-4fc8-aac8-c17660c2fed5-etc-swift podName:64fe8f1d-0c69-4fc8-aac8-c17660c2fed5 nodeName:}" failed. 
No retries permitted until 2026-01-05 20:26:47.187627319 +0000 UTC m=+1293.896811213 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/64fe8f1d-0c69-4fc8-aac8-c17660c2fed5-etc-swift") pod "swift-storage-0" (UID: "64fe8f1d-0c69-4fc8-aac8-c17660c2fed5") : configmap "swift-ring-files" not found Jan 05 20:26:45 crc kubenswrapper[4754]: I0105 20:26:45.536708 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-8t74b" event={"ID":"4983104d-604b-476c-94bf-b92cf0a887b4","Type":"ContainerStarted","Data":"c0595bbd15750527785046a814e9616ce55865a4b6d1ef0785ff18150e721542"} Jan 05 20:26:45 crc kubenswrapper[4754]: I0105 20:26:45.537264 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-8t74b" Jan 05 20:26:45 crc kubenswrapper[4754]: I0105 20:26:45.558907 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-8t74b" podStartSLOduration=3.558887708 podStartE2EDuration="3.558887708s" podCreationTimestamp="2026-01-05 20:26:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:26:45.552647165 +0000 UTC m=+1292.261831049" watchObservedRunningTime="2026-01-05 20:26:45.558887708 +0000 UTC m=+1292.268071582" Jan 05 20:26:45 crc kubenswrapper[4754]: I0105 20:26:45.602615 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d61b376e-842b-418a-ad71-202da01407ed" path="/var/lib/kubelet/pods/d61b376e-842b-418a-ad71-202da01407ed/volumes" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.196498 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-ccf86"] Jan 05 20:26:47 crc kubenswrapper[4754]: E0105 20:26:47.196952 4754 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d61b376e-842b-418a-ad71-202da01407ed" containerName="init" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.196967 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="d61b376e-842b-418a-ad71-202da01407ed" containerName="init" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.197225 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="d61b376e-842b-418a-ad71-202da01407ed" containerName="init" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.198038 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-ccf86" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.203226 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.203806 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.204866 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.210486 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-ccf86"] Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.242316 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/64fe8f1d-0c69-4fc8-aac8-c17660c2fed5-etc-swift\") pod \"swift-storage-0\" (UID: \"64fe8f1d-0c69-4fc8-aac8-c17660c2fed5\") " pod="openstack/swift-storage-0" Jan 05 20:26:47 crc kubenswrapper[4754]: E0105 20:26:47.242489 4754 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 05 20:26:47 crc kubenswrapper[4754]: E0105 20:26:47.242516 4754 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap 
"swift-ring-files" not found Jan 05 20:26:47 crc kubenswrapper[4754]: E0105 20:26:47.242574 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/64fe8f1d-0c69-4fc8-aac8-c17660c2fed5-etc-swift podName:64fe8f1d-0c69-4fc8-aac8-c17660c2fed5 nodeName:}" failed. No retries permitted until 2026-01-05 20:26:51.242551843 +0000 UTC m=+1297.951735727 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/64fe8f1d-0c69-4fc8-aac8-c17660c2fed5-etc-swift") pod "swift-storage-0" (UID: "64fe8f1d-0c69-4fc8-aac8-c17660c2fed5") : configmap "swift-ring-files" not found Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.344656 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/46a0d867-8926-4a85-89fc-f7c825d14922-dispersionconf\") pod \"swift-ring-rebalance-ccf86\" (UID: \"46a0d867-8926-4a85-89fc-f7c825d14922\") " pod="openstack/swift-ring-rebalance-ccf86" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.345172 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/46a0d867-8926-4a85-89fc-f7c825d14922-ring-data-devices\") pod \"swift-ring-rebalance-ccf86\" (UID: \"46a0d867-8926-4a85-89fc-f7c825d14922\") " pod="openstack/swift-ring-rebalance-ccf86" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.345224 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jdlh\" (UniqueName: \"kubernetes.io/projected/46a0d867-8926-4a85-89fc-f7c825d14922-kube-api-access-5jdlh\") pod \"swift-ring-rebalance-ccf86\" (UID: \"46a0d867-8926-4a85-89fc-f7c825d14922\") " pod="openstack/swift-ring-rebalance-ccf86" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.345399 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46a0d867-8926-4a85-89fc-f7c825d14922-combined-ca-bundle\") pod \"swift-ring-rebalance-ccf86\" (UID: \"46a0d867-8926-4a85-89fc-f7c825d14922\") " pod="openstack/swift-ring-rebalance-ccf86" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.345472 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/46a0d867-8926-4a85-89fc-f7c825d14922-etc-swift\") pod \"swift-ring-rebalance-ccf86\" (UID: \"46a0d867-8926-4a85-89fc-f7c825d14922\") " pod="openstack/swift-ring-rebalance-ccf86" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.345616 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/46a0d867-8926-4a85-89fc-f7c825d14922-swiftconf\") pod \"swift-ring-rebalance-ccf86\" (UID: \"46a0d867-8926-4a85-89fc-f7c825d14922\") " pod="openstack/swift-ring-rebalance-ccf86" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.345880 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/46a0d867-8926-4a85-89fc-f7c825d14922-scripts\") pod \"swift-ring-rebalance-ccf86\" (UID: \"46a0d867-8926-4a85-89fc-f7c825d14922\") " pod="openstack/swift-ring-rebalance-ccf86" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.363764 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-ccf86"] Jan 05 20:26:47 crc kubenswrapper[4754]: E0105 20:26:47.364563 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-5jdlh ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" 
pod="openstack/swift-ring-rebalance-ccf86" podUID="46a0d867-8926-4a85-89fc-f7c825d14922" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.416277 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-l252x"] Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.417480 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-l252x" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.427198 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-l252x"] Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.447847 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/46a0d867-8926-4a85-89fc-f7c825d14922-dispersionconf\") pod \"swift-ring-rebalance-ccf86\" (UID: \"46a0d867-8926-4a85-89fc-f7c825d14922\") " pod="openstack/swift-ring-rebalance-ccf86" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.447924 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/46a0d867-8926-4a85-89fc-f7c825d14922-ring-data-devices\") pod \"swift-ring-rebalance-ccf86\" (UID: \"46a0d867-8926-4a85-89fc-f7c825d14922\") " pod="openstack/swift-ring-rebalance-ccf86" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.447944 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jdlh\" (UniqueName: \"kubernetes.io/projected/46a0d867-8926-4a85-89fc-f7c825d14922-kube-api-access-5jdlh\") pod \"swift-ring-rebalance-ccf86\" (UID: \"46a0d867-8926-4a85-89fc-f7c825d14922\") " pod="openstack/swift-ring-rebalance-ccf86" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.447987 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/46a0d867-8926-4a85-89fc-f7c825d14922-combined-ca-bundle\") pod \"swift-ring-rebalance-ccf86\" (UID: \"46a0d867-8926-4a85-89fc-f7c825d14922\") " pod="openstack/swift-ring-rebalance-ccf86" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.448019 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/46a0d867-8926-4a85-89fc-f7c825d14922-etc-swift\") pod \"swift-ring-rebalance-ccf86\" (UID: \"46a0d867-8926-4a85-89fc-f7c825d14922\") " pod="openstack/swift-ring-rebalance-ccf86" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.448037 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/46a0d867-8926-4a85-89fc-f7c825d14922-swiftconf\") pod \"swift-ring-rebalance-ccf86\" (UID: \"46a0d867-8926-4a85-89fc-f7c825d14922\") " pod="openstack/swift-ring-rebalance-ccf86" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.448073 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/46a0d867-8926-4a85-89fc-f7c825d14922-scripts\") pod \"swift-ring-rebalance-ccf86\" (UID: \"46a0d867-8926-4a85-89fc-f7c825d14922\") " pod="openstack/swift-ring-rebalance-ccf86" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.448989 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/46a0d867-8926-4a85-89fc-f7c825d14922-scripts\") pod \"swift-ring-rebalance-ccf86\" (UID: \"46a0d867-8926-4a85-89fc-f7c825d14922\") " pod="openstack/swift-ring-rebalance-ccf86" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.449252 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/46a0d867-8926-4a85-89fc-f7c825d14922-etc-swift\") pod \"swift-ring-rebalance-ccf86\" (UID: 
\"46a0d867-8926-4a85-89fc-f7c825d14922\") " pod="openstack/swift-ring-rebalance-ccf86" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.450028 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/46a0d867-8926-4a85-89fc-f7c825d14922-ring-data-devices\") pod \"swift-ring-rebalance-ccf86\" (UID: \"46a0d867-8926-4a85-89fc-f7c825d14922\") " pod="openstack/swift-ring-rebalance-ccf86" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.469401 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/46a0d867-8926-4a85-89fc-f7c825d14922-dispersionconf\") pod \"swift-ring-rebalance-ccf86\" (UID: \"46a0d867-8926-4a85-89fc-f7c825d14922\") " pod="openstack/swift-ring-rebalance-ccf86" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.469497 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46a0d867-8926-4a85-89fc-f7c825d14922-combined-ca-bundle\") pod \"swift-ring-rebalance-ccf86\" (UID: \"46a0d867-8926-4a85-89fc-f7c825d14922\") " pod="openstack/swift-ring-rebalance-ccf86" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.471821 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/46a0d867-8926-4a85-89fc-f7c825d14922-swiftconf\") pod \"swift-ring-rebalance-ccf86\" (UID: \"46a0d867-8926-4a85-89fc-f7c825d14922\") " pod="openstack/swift-ring-rebalance-ccf86" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.472149 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jdlh\" (UniqueName: \"kubernetes.io/projected/46a0d867-8926-4a85-89fc-f7c825d14922-kube-api-access-5jdlh\") pod \"swift-ring-rebalance-ccf86\" (UID: \"46a0d867-8926-4a85-89fc-f7c825d14922\") " pod="openstack/swift-ring-rebalance-ccf86" Jan 05 20:26:47 
crc kubenswrapper[4754]: I0105 20:26:47.549694 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2dedf888-afb9-42e6-80fa-3135a67787db-scripts\") pod \"swift-ring-rebalance-l252x\" (UID: \"2dedf888-afb9-42e6-80fa-3135a67787db\") " pod="openstack/swift-ring-rebalance-l252x" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.549778 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2dedf888-afb9-42e6-80fa-3135a67787db-swiftconf\") pod \"swift-ring-rebalance-l252x\" (UID: \"2dedf888-afb9-42e6-80fa-3135a67787db\") " pod="openstack/swift-ring-rebalance-l252x" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.549808 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2dedf888-afb9-42e6-80fa-3135a67787db-etc-swift\") pod \"swift-ring-rebalance-l252x\" (UID: \"2dedf888-afb9-42e6-80fa-3135a67787db\") " pod="openstack/swift-ring-rebalance-l252x" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.549870 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8phtc\" (UniqueName: \"kubernetes.io/projected/2dedf888-afb9-42e6-80fa-3135a67787db-kube-api-access-8phtc\") pod \"swift-ring-rebalance-l252x\" (UID: \"2dedf888-afb9-42e6-80fa-3135a67787db\") " pod="openstack/swift-ring-rebalance-l252x" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.549911 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2dedf888-afb9-42e6-80fa-3135a67787db-dispersionconf\") pod \"swift-ring-rebalance-l252x\" (UID: \"2dedf888-afb9-42e6-80fa-3135a67787db\") " pod="openstack/swift-ring-rebalance-l252x" Jan 05 20:26:47 crc 
kubenswrapper[4754]: I0105 20:26:47.549973 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dedf888-afb9-42e6-80fa-3135a67787db-combined-ca-bundle\") pod \"swift-ring-rebalance-l252x\" (UID: \"2dedf888-afb9-42e6-80fa-3135a67787db\") " pod="openstack/swift-ring-rebalance-l252x" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.549998 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2dedf888-afb9-42e6-80fa-3135a67787db-ring-data-devices\") pod \"swift-ring-rebalance-l252x\" (UID: \"2dedf888-afb9-42e6-80fa-3135a67787db\") " pod="openstack/swift-ring-rebalance-l252x" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.568541 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-ccf86" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.580440 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-ccf86" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.651553 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/46a0d867-8926-4a85-89fc-f7c825d14922-scripts\") pod \"46a0d867-8926-4a85-89fc-f7c825d14922\" (UID: \"46a0d867-8926-4a85-89fc-f7c825d14922\") " Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.651880 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46a0d867-8926-4a85-89fc-f7c825d14922-combined-ca-bundle\") pod \"46a0d867-8926-4a85-89fc-f7c825d14922\" (UID: \"46a0d867-8926-4a85-89fc-f7c825d14922\") " Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.651958 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46a0d867-8926-4a85-89fc-f7c825d14922-scripts" (OuterVolumeSpecName: "scripts") pod "46a0d867-8926-4a85-89fc-f7c825d14922" (UID: "46a0d867-8926-4a85-89fc-f7c825d14922"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.652065 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/46a0d867-8926-4a85-89fc-f7c825d14922-ring-data-devices\") pod \"46a0d867-8926-4a85-89fc-f7c825d14922\" (UID: \"46a0d867-8926-4a85-89fc-f7c825d14922\") " Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.652151 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/46a0d867-8926-4a85-89fc-f7c825d14922-dispersionconf\") pod \"46a0d867-8926-4a85-89fc-f7c825d14922\" (UID: \"46a0d867-8926-4a85-89fc-f7c825d14922\") " Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.652353 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/46a0d867-8926-4a85-89fc-f7c825d14922-swiftconf\") pod \"46a0d867-8926-4a85-89fc-f7c825d14922\" (UID: \"46a0d867-8926-4a85-89fc-f7c825d14922\") " Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.652487 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jdlh\" (UniqueName: \"kubernetes.io/projected/46a0d867-8926-4a85-89fc-f7c825d14922-kube-api-access-5jdlh\") pod \"46a0d867-8926-4a85-89fc-f7c825d14922\" (UID: \"46a0d867-8926-4a85-89fc-f7c825d14922\") " Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.652589 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/46a0d867-8926-4a85-89fc-f7c825d14922-etc-swift\") pod \"46a0d867-8926-4a85-89fc-f7c825d14922\" (UID: \"46a0d867-8926-4a85-89fc-f7c825d14922\") " Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.652867 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/46a0d867-8926-4a85-89fc-f7c825d14922-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "46a0d867-8926-4a85-89fc-f7c825d14922" (UID: "46a0d867-8926-4a85-89fc-f7c825d14922"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.653029 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8phtc\" (UniqueName: \"kubernetes.io/projected/2dedf888-afb9-42e6-80fa-3135a67787db-kube-api-access-8phtc\") pod \"swift-ring-rebalance-l252x\" (UID: \"2dedf888-afb9-42e6-80fa-3135a67787db\") " pod="openstack/swift-ring-rebalance-l252x" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.653177 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2dedf888-afb9-42e6-80fa-3135a67787db-dispersionconf\") pod \"swift-ring-rebalance-l252x\" (UID: \"2dedf888-afb9-42e6-80fa-3135a67787db\") " pod="openstack/swift-ring-rebalance-l252x" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.653358 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dedf888-afb9-42e6-80fa-3135a67787db-combined-ca-bundle\") pod \"swift-ring-rebalance-l252x\" (UID: \"2dedf888-afb9-42e6-80fa-3135a67787db\") " pod="openstack/swift-ring-rebalance-l252x" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.653473 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2dedf888-afb9-42e6-80fa-3135a67787db-ring-data-devices\") pod \"swift-ring-rebalance-l252x\" (UID: \"2dedf888-afb9-42e6-80fa-3135a67787db\") " pod="openstack/swift-ring-rebalance-l252x" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.653079 4754 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/46a0d867-8926-4a85-89fc-f7c825d14922-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "46a0d867-8926-4a85-89fc-f7c825d14922" (UID: "46a0d867-8926-4a85-89fc-f7c825d14922"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.653682 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2dedf888-afb9-42e6-80fa-3135a67787db-scripts\") pod \"swift-ring-rebalance-l252x\" (UID: \"2dedf888-afb9-42e6-80fa-3135a67787db\") " pod="openstack/swift-ring-rebalance-l252x" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.653759 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2dedf888-afb9-42e6-80fa-3135a67787db-swiftconf\") pod \"swift-ring-rebalance-l252x\" (UID: \"2dedf888-afb9-42e6-80fa-3135a67787db\") " pod="openstack/swift-ring-rebalance-l252x" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.653844 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2dedf888-afb9-42e6-80fa-3135a67787db-etc-swift\") pod \"swift-ring-rebalance-l252x\" (UID: \"2dedf888-afb9-42e6-80fa-3135a67787db\") " pod="openstack/swift-ring-rebalance-l252x" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.654010 4754 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/46a0d867-8926-4a85-89fc-f7c825d14922-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.654080 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/46a0d867-8926-4a85-89fc-f7c825d14922-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 
20:26:47.654144 4754 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/46a0d867-8926-4a85-89fc-f7c825d14922-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.654548 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2dedf888-afb9-42e6-80fa-3135a67787db-etc-swift\") pod \"swift-ring-rebalance-l252x\" (UID: \"2dedf888-afb9-42e6-80fa-3135a67787db\") " pod="openstack/swift-ring-rebalance-l252x" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.656035 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2dedf888-afb9-42e6-80fa-3135a67787db-ring-data-devices\") pod \"swift-ring-rebalance-l252x\" (UID: \"2dedf888-afb9-42e6-80fa-3135a67787db\") " pod="openstack/swift-ring-rebalance-l252x" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.656361 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46a0d867-8926-4a85-89fc-f7c825d14922-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "46a0d867-8926-4a85-89fc-f7c825d14922" (UID: "46a0d867-8926-4a85-89fc-f7c825d14922"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.656464 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46a0d867-8926-4a85-89fc-f7c825d14922-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "46a0d867-8926-4a85-89fc-f7c825d14922" (UID: "46a0d867-8926-4a85-89fc-f7c825d14922"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.657886 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2dedf888-afb9-42e6-80fa-3135a67787db-scripts\") pod \"swift-ring-rebalance-l252x\" (UID: \"2dedf888-afb9-42e6-80fa-3135a67787db\") " pod="openstack/swift-ring-rebalance-l252x" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.658689 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46a0d867-8926-4a85-89fc-f7c825d14922-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "46a0d867-8926-4a85-89fc-f7c825d14922" (UID: "46a0d867-8926-4a85-89fc-f7c825d14922"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.659577 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46a0d867-8926-4a85-89fc-f7c825d14922-kube-api-access-5jdlh" (OuterVolumeSpecName: "kube-api-access-5jdlh") pod "46a0d867-8926-4a85-89fc-f7c825d14922" (UID: "46a0d867-8926-4a85-89fc-f7c825d14922"). InnerVolumeSpecName "kube-api-access-5jdlh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.660854 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dedf888-afb9-42e6-80fa-3135a67787db-combined-ca-bundle\") pod \"swift-ring-rebalance-l252x\" (UID: \"2dedf888-afb9-42e6-80fa-3135a67787db\") " pod="openstack/swift-ring-rebalance-l252x" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.662040 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2dedf888-afb9-42e6-80fa-3135a67787db-dispersionconf\") pod \"swift-ring-rebalance-l252x\" (UID: \"2dedf888-afb9-42e6-80fa-3135a67787db\") " pod="openstack/swift-ring-rebalance-l252x" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.666196 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2dedf888-afb9-42e6-80fa-3135a67787db-swiftconf\") pod \"swift-ring-rebalance-l252x\" (UID: \"2dedf888-afb9-42e6-80fa-3135a67787db\") " pod="openstack/swift-ring-rebalance-l252x" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.685772 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8phtc\" (UniqueName: \"kubernetes.io/projected/2dedf888-afb9-42e6-80fa-3135a67787db-kube-api-access-8phtc\") pod \"swift-ring-rebalance-l252x\" (UID: \"2dedf888-afb9-42e6-80fa-3135a67787db\") " pod="openstack/swift-ring-rebalance-l252x" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.736956 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-l252x" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.756757 4754 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/46a0d867-8926-4a85-89fc-f7c825d14922-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.756802 4754 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/46a0d867-8926-4a85-89fc-f7c825d14922-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.756821 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jdlh\" (UniqueName: \"kubernetes.io/projected/46a0d867-8926-4a85-89fc-f7c825d14922-kube-api-access-5jdlh\") on node \"crc\" DevicePath \"\"" Jan 05 20:26:47 crc kubenswrapper[4754]: I0105 20:26:47.756840 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46a0d867-8926-4a85-89fc-f7c825d14922-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:26:48 crc kubenswrapper[4754]: I0105 20:26:48.108873 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 20:26:48 crc kubenswrapper[4754]: I0105 20:26:48.108968 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 20:26:48 crc kubenswrapper[4754]: I0105 20:26:48.581156 4754 generic.go:334] "Generic (PLEG): 
container finished" podID="f7bfbab4-49c1-450e-a0e7-0f90a5fd342c" containerID="9290eed6896593eb91d3c212bbfc132ea8a9a99563c4a1ef07f8f1cfa9f58266" exitCode=0 Jan 05 20:26:48 crc kubenswrapper[4754]: I0105 20:26:48.581348 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-ccf86" Jan 05 20:26:48 crc kubenswrapper[4754]: I0105 20:26:48.581396 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c","Type":"ContainerDied","Data":"9290eed6896593eb91d3c212bbfc132ea8a9a99563c4a1ef07f8f1cfa9f58266"} Jan 05 20:26:48 crc kubenswrapper[4754]: I0105 20:26:48.666515 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-ccf86"] Jan 05 20:26:48 crc kubenswrapper[4754]: I0105 20:26:48.705950 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-ccf86"] Jan 05 20:26:49 crc kubenswrapper[4754]: I0105 20:26:49.602133 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46a0d867-8926-4a85-89fc-f7c825d14922" path="/var/lib/kubelet/pods/46a0d867-8926-4a85-89fc-f7c825d14922/volumes" Jan 05 20:26:50 crc kubenswrapper[4754]: I0105 20:26:50.796853 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-l252x"] Jan 05 20:26:50 crc kubenswrapper[4754]: I0105 20:26:50.953704 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 05 20:26:51 crc kubenswrapper[4754]: I0105 20:26:51.013527 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-vrv5l" Jan 05 20:26:51 crc kubenswrapper[4754]: I0105 20:26:51.047265 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 05 20:26:51 crc kubenswrapper[4754]: I0105 20:26:51.265801 4754 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/64fe8f1d-0c69-4fc8-aac8-c17660c2fed5-etc-swift\") pod \"swift-storage-0\" (UID: \"64fe8f1d-0c69-4fc8-aac8-c17660c2fed5\") " pod="openstack/swift-storage-0" Jan 05 20:26:51 crc kubenswrapper[4754]: E0105 20:26:51.266004 4754 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 05 20:26:51 crc kubenswrapper[4754]: E0105 20:26:51.266043 4754 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 05 20:26:51 crc kubenswrapper[4754]: E0105 20:26:51.266147 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/64fe8f1d-0c69-4fc8-aac8-c17660c2fed5-etc-swift podName:64fe8f1d-0c69-4fc8-aac8-c17660c2fed5 nodeName:}" failed. No retries permitted until 2026-01-05 20:26:59.266103982 +0000 UTC m=+1305.975287856 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/64fe8f1d-0c69-4fc8-aac8-c17660c2fed5-etc-swift") pod "swift-storage-0" (UID: "64fe8f1d-0c69-4fc8-aac8-c17660c2fed5") : configmap "swift-ring-files" not found Jan 05 20:26:51 crc kubenswrapper[4754]: I0105 20:26:51.624139 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-l252x" event={"ID":"2dedf888-afb9-42e6-80fa-3135a67787db","Type":"ContainerStarted","Data":"7a8a4fccddcd6e2849d174689815cc7ad18592080d81f103310beb0e038c5e6d"} Jan 05 20:26:51 crc kubenswrapper[4754]: I0105 20:26:51.639851 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ct9fs" event={"ID":"a9ad3a07-4d41-42c2-afcf-568e6c3b83e2","Type":"ContainerStarted","Data":"766d87aa306512213e7c5df874d5e01e8cbfc5e8fca15efb5f8b46ac2239b06b"} Jan 05 20:26:51 crc kubenswrapper[4754]: I0105 20:26:51.655599 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f4f6ded3-ca17-4343-a4ee-15df3c64d1c0","Type":"ContainerStarted","Data":"e4b10562f9e7fe7489cdd28e9800dff723aed983082b6bcdb357f121bab4a709"} Jan 05 20:26:51 crc kubenswrapper[4754]: I0105 20:26:51.676592 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-dmwqp" event={"ID":"d1de9457-1e7b-4c70-99f7-1214589d91d9","Type":"ContainerStarted","Data":"683a35bed76de280889e23388ef4445153c3eb0f51268d65aab22e749fda86ee"} Jan 05 20:26:51 crc kubenswrapper[4754]: I0105 20:26:51.682473 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ct9fs" podStartSLOduration=5.435231666 podStartE2EDuration="13.682458252s" podCreationTimestamp="2026-01-05 20:26:38 +0000 UTC" firstStartedPulling="2026-01-05 20:26:42.119826121 +0000 UTC m=+1288.829009995" lastFinishedPulling="2026-01-05 20:26:50.367052707 +0000 UTC m=+1297.076236581" 
observedRunningTime="2026-01-05 20:26:51.678310223 +0000 UTC m=+1298.387494097" watchObservedRunningTime="2026-01-05 20:26:51.682458252 +0000 UTC m=+1298.391642126" Jan 05 20:26:51 crc kubenswrapper[4754]: I0105 20:26:51.717479 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"43b2550d-6f62-42ff-8b14-20d95a9a4652","Type":"ContainerStarted","Data":"26e539498221921d6ec885c70c8b9d42a52ee4b8dfd0e7d7f05cbae7f186d1e8"} Jan 05 20:26:51 crc kubenswrapper[4754]: I0105 20:26:51.736632 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-dmwqp" podStartSLOduration=6.952586743 podStartE2EDuration="22.73661422s" podCreationTimestamp="2026-01-05 20:26:29 +0000 UTC" firstStartedPulling="2026-01-05 20:26:34.561030794 +0000 UTC m=+1281.270214708" lastFinishedPulling="2026-01-05 20:26:50.345058311 +0000 UTC m=+1297.054242185" observedRunningTime="2026-01-05 20:26:51.717930261 +0000 UTC m=+1298.427114135" watchObservedRunningTime="2026-01-05 20:26:51.73661422 +0000 UTC m=+1298.445798094" Jan 05 20:26:51 crc kubenswrapper[4754]: I0105 20:26:51.843441 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=19.347001496 podStartE2EDuration="44.843417806s" podCreationTimestamp="2026-01-05 20:26:07 +0000 UTC" firstStartedPulling="2026-01-05 20:26:24.869105817 +0000 UTC m=+1271.578289681" lastFinishedPulling="2026-01-05 20:26:50.365522117 +0000 UTC m=+1297.074705991" observedRunningTime="2026-01-05 20:26:51.791656141 +0000 UTC m=+1298.500840015" watchObservedRunningTime="2026-01-05 20:26:51.843417806 +0000 UTC m=+1298.552601680" Jan 05 20:26:51 crc kubenswrapper[4754]: I0105 20:26:51.873714 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=22.661954111 podStartE2EDuration="47.873683428s" podCreationTimestamp="2026-01-05 20:26:04 +0000 UTC" 
firstStartedPulling="2026-01-05 20:26:25.20500566 +0000 UTC m=+1271.914189534" lastFinishedPulling="2026-01-05 20:26:50.416734977 +0000 UTC m=+1297.125918851" observedRunningTime="2026-01-05 20:26:51.848770746 +0000 UTC m=+1298.557954610" watchObservedRunningTime="2026-01-05 20:26:51.873683428 +0000 UTC m=+1298.582867302" Jan 05 20:26:52 crc kubenswrapper[4754]: I0105 20:26:52.056445 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-rwh69"] Jan 05 20:26:52 crc kubenswrapper[4754]: I0105 20:26:52.057975 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-rwh69" Jan 05 20:26:52 crc kubenswrapper[4754]: I0105 20:26:52.088390 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-rwh69"] Jan 05 20:26:52 crc kubenswrapper[4754]: I0105 20:26:52.100430 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbmvs\" (UniqueName: \"kubernetes.io/projected/36a1a647-6525-4da5-9e92-5478fc979997-kube-api-access-bbmvs\") pod \"mysqld-exporter-openstack-db-create-rwh69\" (UID: \"36a1a647-6525-4da5-9e92-5478fc979997\") " pod="openstack/mysqld-exporter-openstack-db-create-rwh69" Jan 05 20:26:52 crc kubenswrapper[4754]: I0105 20:26:52.100649 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36a1a647-6525-4da5-9e92-5478fc979997-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-rwh69\" (UID: \"36a1a647-6525-4da5-9e92-5478fc979997\") " pod="openstack/mysqld-exporter-openstack-db-create-rwh69" Jan 05 20:26:52 crc kubenswrapper[4754]: I0105 20:26:52.185789 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 05 20:26:52 crc kubenswrapper[4754]: I0105 20:26:52.202516 4754 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbmvs\" (UniqueName: \"kubernetes.io/projected/36a1a647-6525-4da5-9e92-5478fc979997-kube-api-access-bbmvs\") pod \"mysqld-exporter-openstack-db-create-rwh69\" (UID: \"36a1a647-6525-4da5-9e92-5478fc979997\") " pod="openstack/mysqld-exporter-openstack-db-create-rwh69" Jan 05 20:26:52 crc kubenswrapper[4754]: I0105 20:26:52.202659 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36a1a647-6525-4da5-9e92-5478fc979997-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-rwh69\" (UID: \"36a1a647-6525-4da5-9e92-5478fc979997\") " pod="openstack/mysqld-exporter-openstack-db-create-rwh69" Jan 05 20:26:52 crc kubenswrapper[4754]: I0105 20:26:52.203613 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36a1a647-6525-4da5-9e92-5478fc979997-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-rwh69\" (UID: \"36a1a647-6525-4da5-9e92-5478fc979997\") " pod="openstack/mysqld-exporter-openstack-db-create-rwh69" Jan 05 20:26:52 crc kubenswrapper[4754]: I0105 20:26:52.242040 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbmvs\" (UniqueName: \"kubernetes.io/projected/36a1a647-6525-4da5-9e92-5478fc979997-kube-api-access-bbmvs\") pod \"mysqld-exporter-openstack-db-create-rwh69\" (UID: \"36a1a647-6525-4da5-9e92-5478fc979997\") " pod="openstack/mysqld-exporter-openstack-db-create-rwh69" Jan 05 20:26:52 crc kubenswrapper[4754]: I0105 20:26:52.371238 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-cbf0-account-create-update-wjqq8"] Jan 05 20:26:52 crc kubenswrapper[4754]: I0105 20:26:52.372819 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-cbf0-account-create-update-wjqq8" Jan 05 20:26:52 crc kubenswrapper[4754]: I0105 20:26:52.377720 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Jan 05 20:26:52 crc kubenswrapper[4754]: I0105 20:26:52.383410 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-cbf0-account-create-update-wjqq8"] Jan 05 20:26:52 crc kubenswrapper[4754]: I0105 20:26:52.389936 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-rwh69" Jan 05 20:26:52 crc kubenswrapper[4754]: I0105 20:26:52.407513 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3c62584-d1a0-43b1-afe9-ab12f4125396-operator-scripts\") pod \"mysqld-exporter-cbf0-account-create-update-wjqq8\" (UID: \"a3c62584-d1a0-43b1-afe9-ab12f4125396\") " pod="openstack/mysqld-exporter-cbf0-account-create-update-wjqq8" Jan 05 20:26:52 crc kubenswrapper[4754]: I0105 20:26:52.407928 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpg76\" (UniqueName: \"kubernetes.io/projected/a3c62584-d1a0-43b1-afe9-ab12f4125396-kube-api-access-xpg76\") pod \"mysqld-exporter-cbf0-account-create-update-wjqq8\" (UID: \"a3c62584-d1a0-43b1-afe9-ab12f4125396\") " pod="openstack/mysqld-exporter-cbf0-account-create-update-wjqq8" Jan 05 20:26:52 crc kubenswrapper[4754]: I0105 20:26:52.511800 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3c62584-d1a0-43b1-afe9-ab12f4125396-operator-scripts\") pod \"mysqld-exporter-cbf0-account-create-update-wjqq8\" (UID: \"a3c62584-d1a0-43b1-afe9-ab12f4125396\") " pod="openstack/mysqld-exporter-cbf0-account-create-update-wjqq8" 
Jan 05 20:26:52 crc kubenswrapper[4754]: I0105 20:26:52.511887 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpg76\" (UniqueName: \"kubernetes.io/projected/a3c62584-d1a0-43b1-afe9-ab12f4125396-kube-api-access-xpg76\") pod \"mysqld-exporter-cbf0-account-create-update-wjqq8\" (UID: \"a3c62584-d1a0-43b1-afe9-ab12f4125396\") " pod="openstack/mysqld-exporter-cbf0-account-create-update-wjqq8" Jan 05 20:26:52 crc kubenswrapper[4754]: I0105 20:26:52.512616 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3c62584-d1a0-43b1-afe9-ab12f4125396-operator-scripts\") pod \"mysqld-exporter-cbf0-account-create-update-wjqq8\" (UID: \"a3c62584-d1a0-43b1-afe9-ab12f4125396\") " pod="openstack/mysqld-exporter-cbf0-account-create-update-wjqq8" Jan 05 20:26:52 crc kubenswrapper[4754]: I0105 20:26:52.537808 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpg76\" (UniqueName: \"kubernetes.io/projected/a3c62584-d1a0-43b1-afe9-ab12f4125396-kube-api-access-xpg76\") pod \"mysqld-exporter-cbf0-account-create-update-wjqq8\" (UID: \"a3c62584-d1a0-43b1-afe9-ab12f4125396\") " pod="openstack/mysqld-exporter-cbf0-account-create-update-wjqq8" Jan 05 20:26:52 crc kubenswrapper[4754]: I0105 20:26:52.636481 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-8t74b" Jan 05 20:26:52 crc kubenswrapper[4754]: I0105 20:26:52.699513 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-cbf0-account-create-update-wjqq8" Jan 05 20:26:52 crc kubenswrapper[4754]: I0105 20:26:52.702521 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-vrv5l"] Jan 05 20:26:52 crc kubenswrapper[4754]: I0105 20:26:52.703774 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-vrv5l" podUID="aed02e94-493f-4d8b-b4a1-7c01e4a0c782" containerName="dnsmasq-dns" containerID="cri-o://1f47de827fb3f4cde57ec809cebdc3a282468e1e06950b44f4733709fd803bfc" gracePeriod=10 Jan 05 20:26:52 crc kubenswrapper[4754]: I0105 20:26:52.997618 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-rwh69"] Jan 05 20:26:53 crc kubenswrapper[4754]: I0105 20:26:53.432545 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-vrv5l" Jan 05 20:26:53 crc kubenswrapper[4754]: I0105 20:26:53.480973 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-cbf0-account-create-update-wjqq8"] Jan 05 20:26:53 crc kubenswrapper[4754]: I0105 20:26:53.546106 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aed02e94-493f-4d8b-b4a1-7c01e4a0c782-dns-svc\") pod \"aed02e94-493f-4d8b-b4a1-7c01e4a0c782\" (UID: \"aed02e94-493f-4d8b-b4a1-7c01e4a0c782\") " Jan 05 20:26:53 crc kubenswrapper[4754]: I0105 20:26:53.546514 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aed02e94-493f-4d8b-b4a1-7c01e4a0c782-ovsdbserver-nb\") pod \"aed02e94-493f-4d8b-b4a1-7c01e4a0c782\" (UID: \"aed02e94-493f-4d8b-b4a1-7c01e4a0c782\") " Jan 05 20:26:53 crc kubenswrapper[4754]: I0105 20:26:53.546623 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-5vzh9\" (UniqueName: \"kubernetes.io/projected/aed02e94-493f-4d8b-b4a1-7c01e4a0c782-kube-api-access-5vzh9\") pod \"aed02e94-493f-4d8b-b4a1-7c01e4a0c782\" (UID: \"aed02e94-493f-4d8b-b4a1-7c01e4a0c782\") " Jan 05 20:26:53 crc kubenswrapper[4754]: I0105 20:26:53.546647 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aed02e94-493f-4d8b-b4a1-7c01e4a0c782-ovsdbserver-sb\") pod \"aed02e94-493f-4d8b-b4a1-7c01e4a0c782\" (UID: \"aed02e94-493f-4d8b-b4a1-7c01e4a0c782\") " Jan 05 20:26:53 crc kubenswrapper[4754]: I0105 20:26:53.546681 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aed02e94-493f-4d8b-b4a1-7c01e4a0c782-config\") pod \"aed02e94-493f-4d8b-b4a1-7c01e4a0c782\" (UID: \"aed02e94-493f-4d8b-b4a1-7c01e4a0c782\") " Jan 05 20:26:53 crc kubenswrapper[4754]: I0105 20:26:53.562738 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aed02e94-493f-4d8b-b4a1-7c01e4a0c782-kube-api-access-5vzh9" (OuterVolumeSpecName: "kube-api-access-5vzh9") pod "aed02e94-493f-4d8b-b4a1-7c01e4a0c782" (UID: "aed02e94-493f-4d8b-b4a1-7c01e4a0c782"). InnerVolumeSpecName "kube-api-access-5vzh9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:26:53 crc kubenswrapper[4754]: I0105 20:26:53.619046 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aed02e94-493f-4d8b-b4a1-7c01e4a0c782-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "aed02e94-493f-4d8b-b4a1-7c01e4a0c782" (UID: "aed02e94-493f-4d8b-b4a1-7c01e4a0c782"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:26:53 crc kubenswrapper[4754]: I0105 20:26:53.645915 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aed02e94-493f-4d8b-b4a1-7c01e4a0c782-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "aed02e94-493f-4d8b-b4a1-7c01e4a0c782" (UID: "aed02e94-493f-4d8b-b4a1-7c01e4a0c782"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:26:53 crc kubenswrapper[4754]: I0105 20:26:53.655569 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aed02e94-493f-4d8b-b4a1-7c01e4a0c782-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "aed02e94-493f-4d8b-b4a1-7c01e4a0c782" (UID: "aed02e94-493f-4d8b-b4a1-7c01e4a0c782"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:26:53 crc kubenswrapper[4754]: I0105 20:26:53.664090 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vzh9\" (UniqueName: \"kubernetes.io/projected/aed02e94-493f-4d8b-b4a1-7c01e4a0c782-kube-api-access-5vzh9\") on node \"crc\" DevicePath \"\"" Jan 05 20:26:53 crc kubenswrapper[4754]: I0105 20:26:53.664133 4754 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aed02e94-493f-4d8b-b4a1-7c01e4a0c782-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 05 20:26:53 crc kubenswrapper[4754]: I0105 20:26:53.664146 4754 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aed02e94-493f-4d8b-b4a1-7c01e4a0c782-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 20:26:53 crc kubenswrapper[4754]: I0105 20:26:53.664156 4754 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aed02e94-493f-4d8b-b4a1-7c01e4a0c782-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 05 
20:26:53 crc kubenswrapper[4754]: I0105 20:26:53.708164 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aed02e94-493f-4d8b-b4a1-7c01e4a0c782-config" (OuterVolumeSpecName: "config") pod "aed02e94-493f-4d8b-b4a1-7c01e4a0c782" (UID: "aed02e94-493f-4d8b-b4a1-7c01e4a0c782"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:26:53 crc kubenswrapper[4754]: I0105 20:26:53.766802 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aed02e94-493f-4d8b-b4a1-7c01e4a0c782-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:26:53 crc kubenswrapper[4754]: I0105 20:26:53.791628 4754 generic.go:334] "Generic (PLEG): container finished" podID="36a1a647-6525-4da5-9e92-5478fc979997" containerID="69a17ff3c459065b4d6bb253ef0b04b566234552e4feab0e23d32a280fea9810" exitCode=0 Jan 05 20:26:53 crc kubenswrapper[4754]: I0105 20:26:53.791738 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-rwh69" event={"ID":"36a1a647-6525-4da5-9e92-5478fc979997","Type":"ContainerDied","Data":"69a17ff3c459065b4d6bb253ef0b04b566234552e4feab0e23d32a280fea9810"} Jan 05 20:26:53 crc kubenswrapper[4754]: I0105 20:26:53.791775 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-rwh69" event={"ID":"36a1a647-6525-4da5-9e92-5478fc979997","Type":"ContainerStarted","Data":"92669a93f7804dd2197d45adda9921a445fd841aa98aaf6ef944c29a7ea0b118"} Jan 05 20:26:53 crc kubenswrapper[4754]: I0105 20:26:53.806715 4754 generic.go:334] "Generic (PLEG): container finished" podID="aed02e94-493f-4d8b-b4a1-7c01e4a0c782" containerID="1f47de827fb3f4cde57ec809cebdc3a282468e1e06950b44f4733709fd803bfc" exitCode=0 Jan 05 20:26:53 crc kubenswrapper[4754]: I0105 20:26:53.806812 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-vrv5l" 
event={"ID":"aed02e94-493f-4d8b-b4a1-7c01e4a0c782","Type":"ContainerDied","Data":"1f47de827fb3f4cde57ec809cebdc3a282468e1e06950b44f4733709fd803bfc"} Jan 05 20:26:53 crc kubenswrapper[4754]: I0105 20:26:53.806847 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-vrv5l" event={"ID":"aed02e94-493f-4d8b-b4a1-7c01e4a0c782","Type":"ContainerDied","Data":"ff96c993808f343338eac29bccf95df8c088a411fada48fe6e9dfb74095d5113"} Jan 05 20:26:53 crc kubenswrapper[4754]: I0105 20:26:53.806867 4754 scope.go:117] "RemoveContainer" containerID="1f47de827fb3f4cde57ec809cebdc3a282468e1e06950b44f4733709fd803bfc" Jan 05 20:26:53 crc kubenswrapper[4754]: I0105 20:26:53.806853 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-vrv5l" Jan 05 20:26:53 crc kubenswrapper[4754]: I0105 20:26:53.885685 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-cbf0-account-create-update-wjqq8" event={"ID":"a3c62584-d1a0-43b1-afe9-ab12f4125396","Type":"ContainerStarted","Data":"104b66fb77635d97f5f4f860fb02636d40a24b8dc5a08ec775e7e9ddbbfe35fc"} Jan 05 20:26:53 crc kubenswrapper[4754]: I0105 20:26:53.937880 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-vrv5l"] Jan 05 20:26:53 crc kubenswrapper[4754]: I0105 20:26:53.946606 4754 scope.go:117] "RemoveContainer" containerID="7bb0b705439a84e29d6d07c0a7afbff590533789929fe1928bca1e4341d4449b" Jan 05 20:26:53 crc kubenswrapper[4754]: I0105 20:26:53.948047 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-vrv5l"] Jan 05 20:26:53 crc kubenswrapper[4754]: I0105 20:26:53.988982 4754 scope.go:117] "RemoveContainer" containerID="1f47de827fb3f4cde57ec809cebdc3a282468e1e06950b44f4733709fd803bfc" Jan 05 20:26:53 crc kubenswrapper[4754]: E0105 20:26:53.989956 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"1f47de827fb3f4cde57ec809cebdc3a282468e1e06950b44f4733709fd803bfc\": container with ID starting with 1f47de827fb3f4cde57ec809cebdc3a282468e1e06950b44f4733709fd803bfc not found: ID does not exist" containerID="1f47de827fb3f4cde57ec809cebdc3a282468e1e06950b44f4733709fd803bfc" Jan 05 20:26:53 crc kubenswrapper[4754]: I0105 20:26:53.990001 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f47de827fb3f4cde57ec809cebdc3a282468e1e06950b44f4733709fd803bfc"} err="failed to get container status \"1f47de827fb3f4cde57ec809cebdc3a282468e1e06950b44f4733709fd803bfc\": rpc error: code = NotFound desc = could not find container \"1f47de827fb3f4cde57ec809cebdc3a282468e1e06950b44f4733709fd803bfc\": container with ID starting with 1f47de827fb3f4cde57ec809cebdc3a282468e1e06950b44f4733709fd803bfc not found: ID does not exist" Jan 05 20:26:53 crc kubenswrapper[4754]: I0105 20:26:53.990043 4754 scope.go:117] "RemoveContainer" containerID="7bb0b705439a84e29d6d07c0a7afbff590533789929fe1928bca1e4341d4449b" Jan 05 20:26:53 crc kubenswrapper[4754]: E0105 20:26:53.990527 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bb0b705439a84e29d6d07c0a7afbff590533789929fe1928bca1e4341d4449b\": container with ID starting with 7bb0b705439a84e29d6d07c0a7afbff590533789929fe1928bca1e4341d4449b not found: ID does not exist" containerID="7bb0b705439a84e29d6d07c0a7afbff590533789929fe1928bca1e4341d4449b" Jan 05 20:26:53 crc kubenswrapper[4754]: I0105 20:26:53.990555 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bb0b705439a84e29d6d07c0a7afbff590533789929fe1928bca1e4341d4449b"} err="failed to get container status \"7bb0b705439a84e29d6d07c0a7afbff590533789929fe1928bca1e4341d4449b\": rpc error: code = NotFound desc = could not find container 
\"7bb0b705439a84e29d6d07c0a7afbff590533789929fe1928bca1e4341d4449b\": container with ID starting with 7bb0b705439a84e29d6d07c0a7afbff590533789929fe1928bca1e4341d4449b not found: ID does not exist" Jan 05 20:26:54 crc kubenswrapper[4754]: I0105 20:26:54.608378 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 05 20:26:54 crc kubenswrapper[4754]: I0105 20:26:54.608993 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 05 20:26:54 crc kubenswrapper[4754]: I0105 20:26:54.615248 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 05 20:26:54 crc kubenswrapper[4754]: I0105 20:26:54.853367 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 05 20:26:54 crc kubenswrapper[4754]: I0105 20:26:54.859274 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 05 20:26:54 crc kubenswrapper[4754]: I0105 20:26:54.898263 4754 generic.go:334] "Generic (PLEG): container finished" podID="a3c62584-d1a0-43b1-afe9-ab12f4125396" containerID="e0835e632cd18a6ad13cd82a605ee7d6a9ec2d402a0e45a2815a34352cf5fb4e" exitCode=0 Jan 05 20:26:54 crc kubenswrapper[4754]: I0105 20:26:54.899448 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-cbf0-account-create-update-wjqq8" event={"ID":"a3c62584-d1a0-43b1-afe9-ab12f4125396","Type":"ContainerDied","Data":"e0835e632cd18a6ad13cd82a605ee7d6a9ec2d402a0e45a2815a34352cf5fb4e"} Jan 05 20:26:54 crc kubenswrapper[4754]: I0105 20:26:54.899877 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 05 20:26:54 crc kubenswrapper[4754]: I0105 20:26:54.955824 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 05 20:26:54 crc 
kubenswrapper[4754]: I0105 20:26:54.979850 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 05 20:26:55 crc kubenswrapper[4754]: I0105 20:26:55.249592 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 05 20:26:55 crc kubenswrapper[4754]: E0105 20:26:55.250005 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aed02e94-493f-4d8b-b4a1-7c01e4a0c782" containerName="dnsmasq-dns" Jan 05 20:26:55 crc kubenswrapper[4754]: I0105 20:26:55.250018 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="aed02e94-493f-4d8b-b4a1-7c01e4a0c782" containerName="dnsmasq-dns" Jan 05 20:26:55 crc kubenswrapper[4754]: E0105 20:26:55.250055 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aed02e94-493f-4d8b-b4a1-7c01e4a0c782" containerName="init" Jan 05 20:26:55 crc kubenswrapper[4754]: I0105 20:26:55.250062 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="aed02e94-493f-4d8b-b4a1-7c01e4a0c782" containerName="init" Jan 05 20:26:55 crc kubenswrapper[4754]: I0105 20:26:55.250230 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="aed02e94-493f-4d8b-b4a1-7c01e4a0c782" containerName="dnsmasq-dns" Jan 05 20:26:55 crc kubenswrapper[4754]: I0105 20:26:55.258919 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 05 20:26:55 crc kubenswrapper[4754]: I0105 20:26:55.270710 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 05 20:26:55 crc kubenswrapper[4754]: I0105 20:26:55.270825 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 05 20:26:55 crc kubenswrapper[4754]: I0105 20:26:55.270958 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-kk8mm" Jan 05 20:26:55 crc kubenswrapper[4754]: I0105 20:26:55.271460 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 05 20:26:55 crc kubenswrapper[4754]: I0105 20:26:55.281928 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 05 20:26:55 crc kubenswrapper[4754]: I0105 20:26:55.329763 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd134f60-e97c-487b-be9e-c356c7478c21-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"bd134f60-e97c-487b-be9e-c356c7478c21\") " pod="openstack/ovn-northd-0" Jan 05 20:26:55 crc kubenswrapper[4754]: I0105 20:26:55.329806 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd134f60-e97c-487b-be9e-c356c7478c21-config\") pod \"ovn-northd-0\" (UID: \"bd134f60-e97c-487b-be9e-c356c7478c21\") " pod="openstack/ovn-northd-0" Jan 05 20:26:55 crc kubenswrapper[4754]: I0105 20:26:55.329827 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd134f60-e97c-487b-be9e-c356c7478c21-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"bd134f60-e97c-487b-be9e-c356c7478c21\") " 
pod="openstack/ovn-northd-0" Jan 05 20:26:55 crc kubenswrapper[4754]: I0105 20:26:55.329849 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnbt2\" (UniqueName: \"kubernetes.io/projected/bd134f60-e97c-487b-be9e-c356c7478c21-kube-api-access-tnbt2\") pod \"ovn-northd-0\" (UID: \"bd134f60-e97c-487b-be9e-c356c7478c21\") " pod="openstack/ovn-northd-0" Jan 05 20:26:55 crc kubenswrapper[4754]: I0105 20:26:55.329905 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bd134f60-e97c-487b-be9e-c356c7478c21-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"bd134f60-e97c-487b-be9e-c356c7478c21\") " pod="openstack/ovn-northd-0" Jan 05 20:26:55 crc kubenswrapper[4754]: I0105 20:26:55.329922 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd134f60-e97c-487b-be9e-c356c7478c21-scripts\") pod \"ovn-northd-0\" (UID: \"bd134f60-e97c-487b-be9e-c356c7478c21\") " pod="openstack/ovn-northd-0" Jan 05 20:26:55 crc kubenswrapper[4754]: I0105 20:26:55.329937 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd134f60-e97c-487b-be9e-c356c7478c21-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"bd134f60-e97c-487b-be9e-c356c7478c21\") " pod="openstack/ovn-northd-0" Jan 05 20:26:55 crc kubenswrapper[4754]: I0105 20:26:55.432412 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bd134f60-e97c-487b-be9e-c356c7478c21-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"bd134f60-e97c-487b-be9e-c356c7478c21\") " pod="openstack/ovn-northd-0" Jan 05 20:26:55 crc kubenswrapper[4754]: I0105 20:26:55.432492 4754 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd134f60-e97c-487b-be9e-c356c7478c21-scripts\") pod \"ovn-northd-0\" (UID: \"bd134f60-e97c-487b-be9e-c356c7478c21\") " pod="openstack/ovn-northd-0" Jan 05 20:26:55 crc kubenswrapper[4754]: I0105 20:26:55.432515 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd134f60-e97c-487b-be9e-c356c7478c21-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"bd134f60-e97c-487b-be9e-c356c7478c21\") " pod="openstack/ovn-northd-0" Jan 05 20:26:55 crc kubenswrapper[4754]: I0105 20:26:55.432695 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd134f60-e97c-487b-be9e-c356c7478c21-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"bd134f60-e97c-487b-be9e-c356c7478c21\") " pod="openstack/ovn-northd-0" Jan 05 20:26:55 crc kubenswrapper[4754]: I0105 20:26:55.432721 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd134f60-e97c-487b-be9e-c356c7478c21-config\") pod \"ovn-northd-0\" (UID: \"bd134f60-e97c-487b-be9e-c356c7478c21\") " pod="openstack/ovn-northd-0" Jan 05 20:26:55 crc kubenswrapper[4754]: I0105 20:26:55.432744 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd134f60-e97c-487b-be9e-c356c7478c21-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"bd134f60-e97c-487b-be9e-c356c7478c21\") " pod="openstack/ovn-northd-0" Jan 05 20:26:55 crc kubenswrapper[4754]: I0105 20:26:55.432770 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnbt2\" (UniqueName: \"kubernetes.io/projected/bd134f60-e97c-487b-be9e-c356c7478c21-kube-api-access-tnbt2\") pod \"ovn-northd-0\" (UID: 
\"bd134f60-e97c-487b-be9e-c356c7478c21\") " pod="openstack/ovn-northd-0" Jan 05 20:26:55 crc kubenswrapper[4754]: I0105 20:26:55.432913 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bd134f60-e97c-487b-be9e-c356c7478c21-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"bd134f60-e97c-487b-be9e-c356c7478c21\") " pod="openstack/ovn-northd-0" Jan 05 20:26:55 crc kubenswrapper[4754]: I0105 20:26:55.434899 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd134f60-e97c-487b-be9e-c356c7478c21-config\") pod \"ovn-northd-0\" (UID: \"bd134f60-e97c-487b-be9e-c356c7478c21\") " pod="openstack/ovn-northd-0" Jan 05 20:26:55 crc kubenswrapper[4754]: I0105 20:26:55.435148 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd134f60-e97c-487b-be9e-c356c7478c21-scripts\") pod \"ovn-northd-0\" (UID: \"bd134f60-e97c-487b-be9e-c356c7478c21\") " pod="openstack/ovn-northd-0" Jan 05 20:26:55 crc kubenswrapper[4754]: I0105 20:26:55.439663 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd134f60-e97c-487b-be9e-c356c7478c21-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"bd134f60-e97c-487b-be9e-c356c7478c21\") " pod="openstack/ovn-northd-0" Jan 05 20:26:55 crc kubenswrapper[4754]: I0105 20:26:55.440392 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd134f60-e97c-487b-be9e-c356c7478c21-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"bd134f60-e97c-487b-be9e-c356c7478c21\") " pod="openstack/ovn-northd-0" Jan 05 20:26:55 crc kubenswrapper[4754]: I0105 20:26:55.449220 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnbt2\" (UniqueName: 
\"kubernetes.io/projected/bd134f60-e97c-487b-be9e-c356c7478c21-kube-api-access-tnbt2\") pod \"ovn-northd-0\" (UID: \"bd134f60-e97c-487b-be9e-c356c7478c21\") " pod="openstack/ovn-northd-0" Jan 05 20:26:55 crc kubenswrapper[4754]: I0105 20:26:55.467613 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd134f60-e97c-487b-be9e-c356c7478c21-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"bd134f60-e97c-487b-be9e-c356c7478c21\") " pod="openstack/ovn-northd-0" Jan 05 20:26:55 crc kubenswrapper[4754]: I0105 20:26:55.588236 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 05 20:26:55 crc kubenswrapper[4754]: I0105 20:26:55.604477 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aed02e94-493f-4d8b-b4a1-7c01e4a0c782" path="/var/lib/kubelet/pods/aed02e94-493f-4d8b-b4a1-7c01e4a0c782/volumes" Jan 05 20:26:56 crc kubenswrapper[4754]: I0105 20:26:56.925678 4754 generic.go:334] "Generic (PLEG): container finished" podID="dcd9c022-23d9-484d-bc2b-dddf74e5e3f9" containerID="233156993cda4d7cb82013b273c7957d5d9b756b648ed616dabcd1e40d5d32e1" exitCode=0 Jan 05 20:26:56 crc kubenswrapper[4754]: I0105 20:26:56.925792 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9","Type":"ContainerDied","Data":"233156993cda4d7cb82013b273c7957d5d9b756b648ed616dabcd1e40d5d32e1"} Jan 05 20:26:56 crc kubenswrapper[4754]: I0105 20:26:56.932415 4754 generic.go:334] "Generic (PLEG): container finished" podID="efbb8662-26f9-44ac-aba4-ecc1fc6a4d58" containerID="22582c6983c67f585a15f88e87e0eabd3d51825cd3e01611a2a4963a33af5825" exitCode=0 Jan 05 20:26:56 crc kubenswrapper[4754]: I0105 20:26:56.932819 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" 
event={"ID":"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58","Type":"ContainerDied","Data":"22582c6983c67f585a15f88e87e0eabd3d51825cd3e01611a2a4963a33af5825"} Jan 05 20:26:57 crc kubenswrapper[4754]: I0105 20:26:57.377819 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-b8n94"] Jan 05 20:26:57 crc kubenswrapper[4754]: I0105 20:26:57.392597 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-b8n94" Jan 05 20:26:57 crc kubenswrapper[4754]: I0105 20:26:57.402125 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 05 20:26:57 crc kubenswrapper[4754]: I0105 20:26:57.407068 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-b8n94"] Jan 05 20:26:57 crc kubenswrapper[4754]: I0105 20:26:57.495430 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bd78c81-6fde-4c20-9771-cf86313b8f18-operator-scripts\") pod \"root-account-create-update-b8n94\" (UID: \"0bd78c81-6fde-4c20-9771-cf86313b8f18\") " pod="openstack/root-account-create-update-b8n94" Jan 05 20:26:57 crc kubenswrapper[4754]: I0105 20:26:57.495528 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwszt\" (UniqueName: \"kubernetes.io/projected/0bd78c81-6fde-4c20-9771-cf86313b8f18-kube-api-access-rwszt\") pod \"root-account-create-update-b8n94\" (UID: \"0bd78c81-6fde-4c20-9771-cf86313b8f18\") " pod="openstack/root-account-create-update-b8n94" Jan 05 20:26:57 crc kubenswrapper[4754]: I0105 20:26:57.601966 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bd78c81-6fde-4c20-9771-cf86313b8f18-operator-scripts\") pod \"root-account-create-update-b8n94\" 
(UID: \"0bd78c81-6fde-4c20-9771-cf86313b8f18\") " pod="openstack/root-account-create-update-b8n94" Jan 05 20:26:57 crc kubenswrapper[4754]: I0105 20:26:57.602044 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwszt\" (UniqueName: \"kubernetes.io/projected/0bd78c81-6fde-4c20-9771-cf86313b8f18-kube-api-access-rwszt\") pod \"root-account-create-update-b8n94\" (UID: \"0bd78c81-6fde-4c20-9771-cf86313b8f18\") " pod="openstack/root-account-create-update-b8n94" Jan 05 20:26:57 crc kubenswrapper[4754]: I0105 20:26:57.602735 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bd78c81-6fde-4c20-9771-cf86313b8f18-operator-scripts\") pod \"root-account-create-update-b8n94\" (UID: \"0bd78c81-6fde-4c20-9771-cf86313b8f18\") " pod="openstack/root-account-create-update-b8n94" Jan 05 20:26:57 crc kubenswrapper[4754]: I0105 20:26:57.624424 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwszt\" (UniqueName: \"kubernetes.io/projected/0bd78c81-6fde-4c20-9771-cf86313b8f18-kube-api-access-rwszt\") pod \"root-account-create-update-b8n94\" (UID: \"0bd78c81-6fde-4c20-9771-cf86313b8f18\") " pod="openstack/root-account-create-update-b8n94" Jan 05 20:26:57 crc kubenswrapper[4754]: I0105 20:26:57.768397 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-b8n94" Jan 05 20:26:57 crc kubenswrapper[4754]: I0105 20:26:57.969623 4754 generic.go:334] "Generic (PLEG): container finished" podID="79eddb76-2d9c-40cc-97e7-6c186950168c" containerID="36fc8028157ef4a2ce0b7b8c867c77cbcc311d587be7596e32628c0d80935d81" exitCode=0 Jan 05 20:26:57 crc kubenswrapper[4754]: I0105 20:26:57.969718 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"79eddb76-2d9c-40cc-97e7-6c186950168c","Type":"ContainerDied","Data":"36fc8028157ef4a2ce0b7b8c867c77cbcc311d587be7596e32628c0d80935d81"} Jan 05 20:26:57 crc kubenswrapper[4754]: I0105 20:26:57.982084 4754 generic.go:334] "Generic (PLEG): container finished" podID="909351bf-3608-40e6-9f93-bffa1ed74945" containerID="3aef396182a6b55f6003bca4aaf377227a2c801a262a24daa8913c3e1966b602" exitCode=0 Jan 05 20:26:57 crc kubenswrapper[4754]: I0105 20:26:57.982378 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"909351bf-3608-40e6-9f93-bffa1ed74945","Type":"ContainerDied","Data":"3aef396182a6b55f6003bca4aaf377227a2c801a262a24daa8913c3e1966b602"} Jan 05 20:26:57 crc kubenswrapper[4754]: I0105 20:26:57.989665 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-cbf0-account-create-update-wjqq8" event={"ID":"a3c62584-d1a0-43b1-afe9-ab12f4125396","Type":"ContainerDied","Data":"104b66fb77635d97f5f4f860fb02636d40a24b8dc5a08ec775e7e9ddbbfe35fc"} Jan 05 20:26:57 crc kubenswrapper[4754]: I0105 20:26:57.989722 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="104b66fb77635d97f5f4f860fb02636d40a24b8dc5a08ec775e7e9ddbbfe35fc" Jan 05 20:26:58 crc kubenswrapper[4754]: I0105 20:26:58.132966 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-cbf0-account-create-update-wjqq8" Jan 05 20:26:58 crc kubenswrapper[4754]: I0105 20:26:58.141484 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-rwh69" Jan 05 20:26:58 crc kubenswrapper[4754]: I0105 20:26:58.212899 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpg76\" (UniqueName: \"kubernetes.io/projected/a3c62584-d1a0-43b1-afe9-ab12f4125396-kube-api-access-xpg76\") pod \"a3c62584-d1a0-43b1-afe9-ab12f4125396\" (UID: \"a3c62584-d1a0-43b1-afe9-ab12f4125396\") " Jan 05 20:26:58 crc kubenswrapper[4754]: I0105 20:26:58.212951 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3c62584-d1a0-43b1-afe9-ab12f4125396-operator-scripts\") pod \"a3c62584-d1a0-43b1-afe9-ab12f4125396\" (UID: \"a3c62584-d1a0-43b1-afe9-ab12f4125396\") " Jan 05 20:26:58 crc kubenswrapper[4754]: I0105 20:26:58.213031 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbmvs\" (UniqueName: \"kubernetes.io/projected/36a1a647-6525-4da5-9e92-5478fc979997-kube-api-access-bbmvs\") pod \"36a1a647-6525-4da5-9e92-5478fc979997\" (UID: \"36a1a647-6525-4da5-9e92-5478fc979997\") " Jan 05 20:26:58 crc kubenswrapper[4754]: I0105 20:26:58.213137 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36a1a647-6525-4da5-9e92-5478fc979997-operator-scripts\") pod \"36a1a647-6525-4da5-9e92-5478fc979997\" (UID: \"36a1a647-6525-4da5-9e92-5478fc979997\") " Jan 05 20:26:58 crc kubenswrapper[4754]: I0105 20:26:58.214045 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3c62584-d1a0-43b1-afe9-ab12f4125396-operator-scripts" (OuterVolumeSpecName: 
"operator-scripts") pod "a3c62584-d1a0-43b1-afe9-ab12f4125396" (UID: "a3c62584-d1a0-43b1-afe9-ab12f4125396"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:26:58 crc kubenswrapper[4754]: I0105 20:26:58.214446 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36a1a647-6525-4da5-9e92-5478fc979997-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "36a1a647-6525-4da5-9e92-5478fc979997" (UID: "36a1a647-6525-4da5-9e92-5478fc979997"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:26:58 crc kubenswrapper[4754]: I0105 20:26:58.218075 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36a1a647-6525-4da5-9e92-5478fc979997-kube-api-access-bbmvs" (OuterVolumeSpecName: "kube-api-access-bbmvs") pod "36a1a647-6525-4da5-9e92-5478fc979997" (UID: "36a1a647-6525-4da5-9e92-5478fc979997"). InnerVolumeSpecName "kube-api-access-bbmvs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:26:58 crc kubenswrapper[4754]: I0105 20:26:58.224688 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3c62584-d1a0-43b1-afe9-ab12f4125396-kube-api-access-xpg76" (OuterVolumeSpecName: "kube-api-access-xpg76") pod "a3c62584-d1a0-43b1-afe9-ab12f4125396" (UID: "a3c62584-d1a0-43b1-afe9-ab12f4125396"). InnerVolumeSpecName "kube-api-access-xpg76". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:26:58 crc kubenswrapper[4754]: I0105 20:26:58.316217 4754 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36a1a647-6525-4da5-9e92-5478fc979997-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 20:26:58 crc kubenswrapper[4754]: I0105 20:26:58.316254 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpg76\" (UniqueName: \"kubernetes.io/projected/a3c62584-d1a0-43b1-afe9-ab12f4125396-kube-api-access-xpg76\") on node \"crc\" DevicePath \"\"" Jan 05 20:26:58 crc kubenswrapper[4754]: I0105 20:26:58.316268 4754 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3c62584-d1a0-43b1-afe9-ab12f4125396-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 20:26:58 crc kubenswrapper[4754]: I0105 20:26:58.316277 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbmvs\" (UniqueName: \"kubernetes.io/projected/36a1a647-6525-4da5-9e92-5478fc979997-kube-api-access-bbmvs\") on node \"crc\" DevicePath \"\"" Jan 05 20:26:59 crc kubenswrapper[4754]: I0105 20:26:59.002689 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-cbf0-account-create-update-wjqq8" Jan 05 20:26:59 crc kubenswrapper[4754]: I0105 20:26:59.002700 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-rwh69" event={"ID":"36a1a647-6525-4da5-9e92-5478fc979997","Type":"ContainerDied","Data":"92669a93f7804dd2197d45adda9921a445fd841aa98aaf6ef944c29a7ea0b118"} Jan 05 20:26:59 crc kubenswrapper[4754]: I0105 20:26:59.003137 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92669a93f7804dd2197d45adda9921a445fd841aa98aaf6ef944c29a7ea0b118" Jan 05 20:26:59 crc kubenswrapper[4754]: I0105 20:26:59.002714 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-rwh69" Jan 05 20:26:59 crc kubenswrapper[4754]: I0105 20:26:59.038093 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-6c886d9bd8-wmpb8" podUID="74b103ef-1d58-4bb5-80b5-8314ec1df2bc" containerName="console" containerID="cri-o://e7853d01eee293844b9d35c90c8865217e74da99fb8eb6c18391d080dd63af93" gracePeriod=15 Jan 05 20:26:59 crc kubenswrapper[4754]: I0105 20:26:59.159846 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ct9fs" Jan 05 20:26:59 crc kubenswrapper[4754]: I0105 20:26:59.159902 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ct9fs" Jan 05 20:26:59 crc kubenswrapper[4754]: I0105 20:26:59.237277 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ct9fs" Jan 05 20:26:59 crc kubenswrapper[4754]: I0105 20:26:59.346694 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/64fe8f1d-0c69-4fc8-aac8-c17660c2fed5-etc-swift\") pod \"swift-storage-0\" (UID: \"64fe8f1d-0c69-4fc8-aac8-c17660c2fed5\") " pod="openstack/swift-storage-0" Jan 05 20:26:59 crc kubenswrapper[4754]: E0105 20:26:59.346914 4754 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 05 20:26:59 crc kubenswrapper[4754]: E0105 20:26:59.346940 4754 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 05 20:26:59 crc kubenswrapper[4754]: E0105 20:26:59.347002 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/64fe8f1d-0c69-4fc8-aac8-c17660c2fed5-etc-swift podName:64fe8f1d-0c69-4fc8-aac8-c17660c2fed5 nodeName:}" failed. No retries permitted until 2026-01-05 20:27:15.346983557 +0000 UTC m=+1322.056167431 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/64fe8f1d-0c69-4fc8-aac8-c17660c2fed5-etc-swift") pod "swift-storage-0" (UID: "64fe8f1d-0c69-4fc8-aac8-c17660c2fed5") : configmap "swift-ring-files" not found Jan 05 20:26:59 crc kubenswrapper[4754]: I0105 20:26:59.757624 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-zpltv"] Jan 05 20:26:59 crc kubenswrapper[4754]: E0105 20:26:59.758357 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36a1a647-6525-4da5-9e92-5478fc979997" containerName="mariadb-database-create" Jan 05 20:26:59 crc kubenswrapper[4754]: I0105 20:26:59.758389 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="36a1a647-6525-4da5-9e92-5478fc979997" containerName="mariadb-database-create" Jan 05 20:26:59 crc kubenswrapper[4754]: E0105 20:26:59.758429 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3c62584-d1a0-43b1-afe9-ab12f4125396" containerName="mariadb-account-create-update" Jan 05 
20:26:59 crc kubenswrapper[4754]: I0105 20:26:59.758442 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3c62584-d1a0-43b1-afe9-ab12f4125396" containerName="mariadb-account-create-update" Jan 05 20:26:59 crc kubenswrapper[4754]: I0105 20:26:59.761285 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3c62584-d1a0-43b1-afe9-ab12f4125396" containerName="mariadb-account-create-update" Jan 05 20:26:59 crc kubenswrapper[4754]: I0105 20:26:59.761418 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="36a1a647-6525-4da5-9e92-5478fc979997" containerName="mariadb-database-create" Jan 05 20:26:59 crc kubenswrapper[4754]: I0105 20:26:59.763019 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-zpltv" Jan 05 20:26:59 crc kubenswrapper[4754]: I0105 20:26:59.797416 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-zpltv"] Jan 05 20:26:59 crc kubenswrapper[4754]: I0105 20:26:59.865120 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/745e7c45-2a3b-4c75-b2a9-a4a1c38ec2fd-operator-scripts\") pod \"keystone-db-create-zpltv\" (UID: \"745e7c45-2a3b-4c75-b2a9-a4a1c38ec2fd\") " pod="openstack/keystone-db-create-zpltv" Jan 05 20:26:59 crc kubenswrapper[4754]: I0105 20:26:59.865815 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbvtb\" (UniqueName: \"kubernetes.io/projected/745e7c45-2a3b-4c75-b2a9-a4a1c38ec2fd-kube-api-access-hbvtb\") pod \"keystone-db-create-zpltv\" (UID: \"745e7c45-2a3b-4c75-b2a9-a4a1c38ec2fd\") " pod="openstack/keystone-db-create-zpltv" Jan 05 20:26:59 crc kubenswrapper[4754]: I0105 20:26:59.969453 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbvtb\" (UniqueName: 
\"kubernetes.io/projected/745e7c45-2a3b-4c75-b2a9-a4a1c38ec2fd-kube-api-access-hbvtb\") pod \"keystone-db-create-zpltv\" (UID: \"745e7c45-2a3b-4c75-b2a9-a4a1c38ec2fd\") " pod="openstack/keystone-db-create-zpltv" Jan 05 20:26:59 crc kubenswrapper[4754]: I0105 20:26:59.969596 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/745e7c45-2a3b-4c75-b2a9-a4a1c38ec2fd-operator-scripts\") pod \"keystone-db-create-zpltv\" (UID: \"745e7c45-2a3b-4c75-b2a9-a4a1c38ec2fd\") " pod="openstack/keystone-db-create-zpltv" Jan 05 20:26:59 crc kubenswrapper[4754]: I0105 20:26:59.971030 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/745e7c45-2a3b-4c75-b2a9-a4a1c38ec2fd-operator-scripts\") pod \"keystone-db-create-zpltv\" (UID: \"745e7c45-2a3b-4c75-b2a9-a4a1c38ec2fd\") " pod="openstack/keystone-db-create-zpltv" Jan 05 20:26:59 crc kubenswrapper[4754]: I0105 20:26:59.998123 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbvtb\" (UniqueName: \"kubernetes.io/projected/745e7c45-2a3b-4c75-b2a9-a4a1c38ec2fd-kube-api-access-hbvtb\") pod \"keystone-db-create-zpltv\" (UID: \"745e7c45-2a3b-4c75-b2a9-a4a1c38ec2fd\") " pod="openstack/keystone-db-create-zpltv" Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.019558 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-2f6c-account-create-update-krtph"] Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.021111 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-2f6c-account-create-update-krtph" Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.023603 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.038307 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6c886d9bd8-wmpb8_74b103ef-1d58-4bb5-80b5-8314ec1df2bc/console/0.log" Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.038390 4754 generic.go:334] "Generic (PLEG): container finished" podID="74b103ef-1d58-4bb5-80b5-8314ec1df2bc" containerID="e7853d01eee293844b9d35c90c8865217e74da99fb8eb6c18391d080dd63af93" exitCode=2 Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.038466 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6c886d9bd8-wmpb8" event={"ID":"74b103ef-1d58-4bb5-80b5-8314ec1df2bc","Type":"ContainerDied","Data":"e7853d01eee293844b9d35c90c8865217e74da99fb8eb6c18391d080dd63af93"} Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.041521 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-2f6c-account-create-update-krtph"] Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.071939 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksz4d\" (UniqueName: \"kubernetes.io/projected/cf5c2131-a9a1-4345-8ae8-069fba0812a5-kube-api-access-ksz4d\") pod \"keystone-2f6c-account-create-update-krtph\" (UID: \"cf5c2131-a9a1-4345-8ae8-069fba0812a5\") " pod="openstack/keystone-2f6c-account-create-update-krtph" Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.072079 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf5c2131-a9a1-4345-8ae8-069fba0812a5-operator-scripts\") pod \"keystone-2f6c-account-create-update-krtph\" 
(UID: \"cf5c2131-a9a1-4345-8ae8-069fba0812a5\") " pod="openstack/keystone-2f6c-account-create-update-krtph" Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.087483 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ct9fs" Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.100337 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-gfnpz"] Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.101688 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-gfnpz" Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.105886 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-zpltv" Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.129967 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-gfnpz"] Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.161775 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ct9fs"] Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.174736 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf5c2131-a9a1-4345-8ae8-069fba0812a5-operator-scripts\") pod \"keystone-2f6c-account-create-update-krtph\" (UID: \"cf5c2131-a9a1-4345-8ae8-069fba0812a5\") " pod="openstack/keystone-2f6c-account-create-update-krtph" Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.174832 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d70ef99-cdf9-474e-8e77-d75ce8d7a7ed-operator-scripts\") pod \"placement-db-create-gfnpz\" (UID: \"4d70ef99-cdf9-474e-8e77-d75ce8d7a7ed\") " pod="openstack/placement-db-create-gfnpz" Jan 05 20:27:00 crc 
kubenswrapper[4754]: I0105 20:27:00.174935 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7gb5\" (UniqueName: \"kubernetes.io/projected/4d70ef99-cdf9-474e-8e77-d75ce8d7a7ed-kube-api-access-v7gb5\") pod \"placement-db-create-gfnpz\" (UID: \"4d70ef99-cdf9-474e-8e77-d75ce8d7a7ed\") " pod="openstack/placement-db-create-gfnpz" Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.175043 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksz4d\" (UniqueName: \"kubernetes.io/projected/cf5c2131-a9a1-4345-8ae8-069fba0812a5-kube-api-access-ksz4d\") pod \"keystone-2f6c-account-create-update-krtph\" (UID: \"cf5c2131-a9a1-4345-8ae8-069fba0812a5\") " pod="openstack/keystone-2f6c-account-create-update-krtph" Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.176339 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf5c2131-a9a1-4345-8ae8-069fba0812a5-operator-scripts\") pod \"keystone-2f6c-account-create-update-krtph\" (UID: \"cf5c2131-a9a1-4345-8ae8-069fba0812a5\") " pod="openstack/keystone-2f6c-account-create-update-krtph" Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.196087 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksz4d\" (UniqueName: \"kubernetes.io/projected/cf5c2131-a9a1-4345-8ae8-069fba0812a5-kube-api-access-ksz4d\") pod \"keystone-2f6c-account-create-update-krtph\" (UID: \"cf5c2131-a9a1-4345-8ae8-069fba0812a5\") " pod="openstack/keystone-2f6c-account-create-update-krtph" Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.213526 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6412-account-create-update-jpj4f"] Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.214917 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6412-account-create-update-jpj4f" Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.217517 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.225903 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6412-account-create-update-jpj4f"] Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.276262 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7gb5\" (UniqueName: \"kubernetes.io/projected/4d70ef99-cdf9-474e-8e77-d75ce8d7a7ed-kube-api-access-v7gb5\") pod \"placement-db-create-gfnpz\" (UID: \"4d70ef99-cdf9-474e-8e77-d75ce8d7a7ed\") " pod="openstack/placement-db-create-gfnpz" Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.276370 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pgjw\" (UniqueName: \"kubernetes.io/projected/3b3003ea-ab70-4762-b50d-bf15c06ea9a7-kube-api-access-7pgjw\") pod \"placement-6412-account-create-update-jpj4f\" (UID: \"3b3003ea-ab70-4762-b50d-bf15c06ea9a7\") " pod="openstack/placement-6412-account-create-update-jpj4f" Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.276398 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b3003ea-ab70-4762-b50d-bf15c06ea9a7-operator-scripts\") pod \"placement-6412-account-create-update-jpj4f\" (UID: \"3b3003ea-ab70-4762-b50d-bf15c06ea9a7\") " pod="openstack/placement-6412-account-create-update-jpj4f" Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.276746 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d70ef99-cdf9-474e-8e77-d75ce8d7a7ed-operator-scripts\") pod 
\"placement-db-create-gfnpz\" (UID: \"4d70ef99-cdf9-474e-8e77-d75ce8d7a7ed\") " pod="openstack/placement-db-create-gfnpz" Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.277471 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d70ef99-cdf9-474e-8e77-d75ce8d7a7ed-operator-scripts\") pod \"placement-db-create-gfnpz\" (UID: \"4d70ef99-cdf9-474e-8e77-d75ce8d7a7ed\") " pod="openstack/placement-db-create-gfnpz" Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.297906 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7gb5\" (UniqueName: \"kubernetes.io/projected/4d70ef99-cdf9-474e-8e77-d75ce8d7a7ed-kube-api-access-v7gb5\") pod \"placement-db-create-gfnpz\" (UID: \"4d70ef99-cdf9-474e-8e77-d75ce8d7a7ed\") " pod="openstack/placement-db-create-gfnpz" Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.364345 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-2f6c-account-create-update-krtph" Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.379774 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pgjw\" (UniqueName: \"kubernetes.io/projected/3b3003ea-ab70-4762-b50d-bf15c06ea9a7-kube-api-access-7pgjw\") pod \"placement-6412-account-create-update-jpj4f\" (UID: \"3b3003ea-ab70-4762-b50d-bf15c06ea9a7\") " pod="openstack/placement-6412-account-create-update-jpj4f" Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.379828 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b3003ea-ab70-4762-b50d-bf15c06ea9a7-operator-scripts\") pod \"placement-6412-account-create-update-jpj4f\" (UID: \"3b3003ea-ab70-4762-b50d-bf15c06ea9a7\") " pod="openstack/placement-6412-account-create-update-jpj4f" Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.380865 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b3003ea-ab70-4762-b50d-bf15c06ea9a7-operator-scripts\") pod \"placement-6412-account-create-update-jpj4f\" (UID: \"3b3003ea-ab70-4762-b50d-bf15c06ea9a7\") " pod="openstack/placement-6412-account-create-update-jpj4f" Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.398190 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pgjw\" (UniqueName: \"kubernetes.io/projected/3b3003ea-ab70-4762-b50d-bf15c06ea9a7-kube-api-access-7pgjw\") pod \"placement-6412-account-create-update-jpj4f\" (UID: \"3b3003ea-ab70-4762-b50d-bf15c06ea9a7\") " pod="openstack/placement-6412-account-create-update-jpj4f" Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.425455 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-gfnpz" Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.518704 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-s4cdh"] Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.524692 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-s4cdh" Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.539208 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-s4cdh"] Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.566209 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6412-account-create-update-jpj4f" Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.585926 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58066c5a-8387-449d-9004-f2e6a1d37e53-operator-scripts\") pod \"glance-db-create-s4cdh\" (UID: \"58066c5a-8387-449d-9004-f2e6a1d37e53\") " pod="openstack/glance-db-create-s4cdh" Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.586055 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8kj9\" (UniqueName: \"kubernetes.io/projected/58066c5a-8387-449d-9004-f2e6a1d37e53-kube-api-access-t8kj9\") pod \"glance-db-create-s4cdh\" (UID: \"58066c5a-8387-449d-9004-f2e6a1d37e53\") " pod="openstack/glance-db-create-s4cdh" Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.653834 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-7de3-account-create-update-pxbnf"] Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.656442 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-7de3-account-create-update-pxbnf" Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.661363 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.689603 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ea55880-d313-447e-a193-e69f30a74375-operator-scripts\") pod \"glance-7de3-account-create-update-pxbnf\" (UID: \"9ea55880-d313-447e-a193-e69f30a74375\") " pod="openstack/glance-7de3-account-create-update-pxbnf" Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.689657 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8kj9\" (UniqueName: \"kubernetes.io/projected/58066c5a-8387-449d-9004-f2e6a1d37e53-kube-api-access-t8kj9\") pod \"glance-db-create-s4cdh\" (UID: \"58066c5a-8387-449d-9004-f2e6a1d37e53\") " pod="openstack/glance-db-create-s4cdh" Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.689774 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6xst\" (UniqueName: \"kubernetes.io/projected/9ea55880-d313-447e-a193-e69f30a74375-kube-api-access-b6xst\") pod \"glance-7de3-account-create-update-pxbnf\" (UID: \"9ea55880-d313-447e-a193-e69f30a74375\") " pod="openstack/glance-7de3-account-create-update-pxbnf" Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.689912 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58066c5a-8387-449d-9004-f2e6a1d37e53-operator-scripts\") pod \"glance-db-create-s4cdh\" (UID: \"58066c5a-8387-449d-9004-f2e6a1d37e53\") " pod="openstack/glance-db-create-s4cdh" Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.695278 4754 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/glance-7de3-account-create-update-pxbnf"] Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.700712 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58066c5a-8387-449d-9004-f2e6a1d37e53-operator-scripts\") pod \"glance-db-create-s4cdh\" (UID: \"58066c5a-8387-449d-9004-f2e6a1d37e53\") " pod="openstack/glance-db-create-s4cdh" Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.745772 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8kj9\" (UniqueName: \"kubernetes.io/projected/58066c5a-8387-449d-9004-f2e6a1d37e53-kube-api-access-t8kj9\") pod \"glance-db-create-s4cdh\" (UID: \"58066c5a-8387-449d-9004-f2e6a1d37e53\") " pod="openstack/glance-db-create-s4cdh" Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.812164 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ea55880-d313-447e-a193-e69f30a74375-operator-scripts\") pod \"glance-7de3-account-create-update-pxbnf\" (UID: \"9ea55880-d313-447e-a193-e69f30a74375\") " pod="openstack/glance-7de3-account-create-update-pxbnf" Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.812598 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6xst\" (UniqueName: \"kubernetes.io/projected/9ea55880-d313-447e-a193-e69f30a74375-kube-api-access-b6xst\") pod \"glance-7de3-account-create-update-pxbnf\" (UID: \"9ea55880-d313-447e-a193-e69f30a74375\") " pod="openstack/glance-7de3-account-create-update-pxbnf" Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.813209 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ea55880-d313-447e-a193-e69f30a74375-operator-scripts\") pod \"glance-7de3-account-create-update-pxbnf\" (UID: 
\"9ea55880-d313-447e-a193-e69f30a74375\") " pod="openstack/glance-7de3-account-create-update-pxbnf" Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.831660 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6xst\" (UniqueName: \"kubernetes.io/projected/9ea55880-d313-447e-a193-e69f30a74375-kube-api-access-b6xst\") pod \"glance-7de3-account-create-update-pxbnf\" (UID: \"9ea55880-d313-447e-a193-e69f30a74375\") " pod="openstack/glance-7de3-account-create-update-pxbnf" Jan 05 20:27:00 crc kubenswrapper[4754]: I0105 20:27:00.847444 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-s4cdh" Jan 05 20:27:01 crc kubenswrapper[4754]: I0105 20:27:01.113188 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7de3-account-create-update-pxbnf" Jan 05 20:27:02 crc kubenswrapper[4754]: I0105 20:27:02.061451 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ct9fs" podUID="a9ad3a07-4d41-42c2-afcf-568e6c3b83e2" containerName="registry-server" containerID="cri-o://766d87aa306512213e7c5df874d5e01e8cbfc5e8fca15efb5f8b46ac2239b06b" gracePeriod=2 Jan 05 20:27:02 crc kubenswrapper[4754]: I0105 20:27:02.604205 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-pdnxz"] Jan 05 20:27:02 crc kubenswrapper[4754]: I0105 20:27:02.606026 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-pdnxz" Jan 05 20:27:02 crc kubenswrapper[4754]: I0105 20:27:02.615488 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-pdnxz"] Jan 05 20:27:02 crc kubenswrapper[4754]: I0105 20:27:02.680637 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nph82\" (UniqueName: \"kubernetes.io/projected/8873df75-8512-43e0-8d94-d4e45e6904d4-kube-api-access-nph82\") pod \"mysqld-exporter-openstack-cell1-db-create-pdnxz\" (UID: \"8873df75-8512-43e0-8d94-d4e45e6904d4\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-pdnxz" Jan 05 20:27:02 crc kubenswrapper[4754]: I0105 20:27:02.680709 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8873df75-8512-43e0-8d94-d4e45e6904d4-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-pdnxz\" (UID: \"8873df75-8512-43e0-8d94-d4e45e6904d4\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-pdnxz" Jan 05 20:27:02 crc kubenswrapper[4754]: I0105 20:27:02.782800 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nph82\" (UniqueName: \"kubernetes.io/projected/8873df75-8512-43e0-8d94-d4e45e6904d4-kube-api-access-nph82\") pod \"mysqld-exporter-openstack-cell1-db-create-pdnxz\" (UID: \"8873df75-8512-43e0-8d94-d4e45e6904d4\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-pdnxz" Jan 05 20:27:02 crc kubenswrapper[4754]: I0105 20:27:02.783163 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8873df75-8512-43e0-8d94-d4e45e6904d4-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-pdnxz\" (UID: \"8873df75-8512-43e0-8d94-d4e45e6904d4\") " 
pod="openstack/mysqld-exporter-openstack-cell1-db-create-pdnxz" Jan 05 20:27:02 crc kubenswrapper[4754]: I0105 20:27:02.783939 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8873df75-8512-43e0-8d94-d4e45e6904d4-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-pdnxz\" (UID: \"8873df75-8512-43e0-8d94-d4e45e6904d4\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-pdnxz" Jan 05 20:27:02 crc kubenswrapper[4754]: I0105 20:27:02.814531 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-fb57-account-create-update-kjlds"] Jan 05 20:27:02 crc kubenswrapper[4754]: I0105 20:27:02.816315 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-fb57-account-create-update-kjlds" Jan 05 20:27:02 crc kubenswrapper[4754]: I0105 20:27:02.818721 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret" Jan 05 20:27:02 crc kubenswrapper[4754]: I0105 20:27:02.821528 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nph82\" (UniqueName: \"kubernetes.io/projected/8873df75-8512-43e0-8d94-d4e45e6904d4-kube-api-access-nph82\") pod \"mysqld-exporter-openstack-cell1-db-create-pdnxz\" (UID: \"8873df75-8512-43e0-8d94-d4e45e6904d4\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-pdnxz" Jan 05 20:27:02 crc kubenswrapper[4754]: I0105 20:27:02.826028 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6c886d9bd8-wmpb8_74b103ef-1d58-4bb5-80b5-8314ec1df2bc/console/0.log" Jan 05 20:27:02 crc kubenswrapper[4754]: I0105 20:27:02.826110 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6c886d9bd8-wmpb8" Jan 05 20:27:02 crc kubenswrapper[4754]: I0105 20:27:02.839931 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-fb57-account-create-update-kjlds"] Jan 05 20:27:02 crc kubenswrapper[4754]: I0105 20:27:02.961973 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-pdnxz" Jan 05 20:27:02 crc kubenswrapper[4754]: I0105 20:27:02.987535 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/74b103ef-1d58-4bb5-80b5-8314ec1df2bc-console-serving-cert\") pod \"74b103ef-1d58-4bb5-80b5-8314ec1df2bc\" (UID: \"74b103ef-1d58-4bb5-80b5-8314ec1df2bc\") " Jan 05 20:27:02 crc kubenswrapper[4754]: I0105 20:27:02.987878 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74b103ef-1d58-4bb5-80b5-8314ec1df2bc-trusted-ca-bundle\") pod \"74b103ef-1d58-4bb5-80b5-8314ec1df2bc\" (UID: \"74b103ef-1d58-4bb5-80b5-8314ec1df2bc\") " Jan 05 20:27:02 crc kubenswrapper[4754]: I0105 20:27:02.987923 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/74b103ef-1d58-4bb5-80b5-8314ec1df2bc-service-ca\") pod \"74b103ef-1d58-4bb5-80b5-8314ec1df2bc\" (UID: \"74b103ef-1d58-4bb5-80b5-8314ec1df2bc\") " Jan 05 20:27:02 crc kubenswrapper[4754]: I0105 20:27:02.987999 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/74b103ef-1d58-4bb5-80b5-8314ec1df2bc-oauth-serving-cert\") pod \"74b103ef-1d58-4bb5-80b5-8314ec1df2bc\" (UID: \"74b103ef-1d58-4bb5-80b5-8314ec1df2bc\") " Jan 05 20:27:02 crc kubenswrapper[4754]: I0105 20:27:02.988072 4754 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/74b103ef-1d58-4bb5-80b5-8314ec1df2bc-console-oauth-config\") pod \"74b103ef-1d58-4bb5-80b5-8314ec1df2bc\" (UID: \"74b103ef-1d58-4bb5-80b5-8314ec1df2bc\") " Jan 05 20:27:02 crc kubenswrapper[4754]: I0105 20:27:02.988201 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksq2h\" (UniqueName: \"kubernetes.io/projected/74b103ef-1d58-4bb5-80b5-8314ec1df2bc-kube-api-access-ksq2h\") pod \"74b103ef-1d58-4bb5-80b5-8314ec1df2bc\" (UID: \"74b103ef-1d58-4bb5-80b5-8314ec1df2bc\") " Jan 05 20:27:02 crc kubenswrapper[4754]: I0105 20:27:02.988234 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/74b103ef-1d58-4bb5-80b5-8314ec1df2bc-console-config\") pod \"74b103ef-1d58-4bb5-80b5-8314ec1df2bc\" (UID: \"74b103ef-1d58-4bb5-80b5-8314ec1df2bc\") " Jan 05 20:27:02 crc kubenswrapper[4754]: I0105 20:27:02.988775 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74b103ef-1d58-4bb5-80b5-8314ec1df2bc-service-ca" (OuterVolumeSpecName: "service-ca") pod "74b103ef-1d58-4bb5-80b5-8314ec1df2bc" (UID: "74b103ef-1d58-4bb5-80b5-8314ec1df2bc"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:27:02 crc kubenswrapper[4754]: I0105 20:27:02.989042 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2bw7\" (UniqueName: \"kubernetes.io/projected/177b09ff-4291-4ef0-93f7-66a03d8c0fe2-kube-api-access-s2bw7\") pod \"mysqld-exporter-fb57-account-create-update-kjlds\" (UID: \"177b09ff-4291-4ef0-93f7-66a03d8c0fe2\") " pod="openstack/mysqld-exporter-fb57-account-create-update-kjlds" Jan 05 20:27:02 crc kubenswrapper[4754]: I0105 20:27:02.989269 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/177b09ff-4291-4ef0-93f7-66a03d8c0fe2-operator-scripts\") pod \"mysqld-exporter-fb57-account-create-update-kjlds\" (UID: \"177b09ff-4291-4ef0-93f7-66a03d8c0fe2\") " pod="openstack/mysqld-exporter-fb57-account-create-update-kjlds" Jan 05 20:27:02 crc kubenswrapper[4754]: I0105 20:27:02.989339 4754 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/74b103ef-1d58-4bb5-80b5-8314ec1df2bc-service-ca\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:02 crc kubenswrapper[4754]: I0105 20:27:02.989890 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74b103ef-1d58-4bb5-80b5-8314ec1df2bc-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "74b103ef-1d58-4bb5-80b5-8314ec1df2bc" (UID: "74b103ef-1d58-4bb5-80b5-8314ec1df2bc"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:27:02 crc kubenswrapper[4754]: I0105 20:27:02.993165 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74b103ef-1d58-4bb5-80b5-8314ec1df2bc-console-config" (OuterVolumeSpecName: "console-config") pod "74b103ef-1d58-4bb5-80b5-8314ec1df2bc" (UID: "74b103ef-1d58-4bb5-80b5-8314ec1df2bc"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:27:02 crc kubenswrapper[4754]: I0105 20:27:02.995643 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74b103ef-1d58-4bb5-80b5-8314ec1df2bc-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "74b103ef-1d58-4bb5-80b5-8314ec1df2bc" (UID: "74b103ef-1d58-4bb5-80b5-8314ec1df2bc"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:27:02 crc kubenswrapper[4754]: I0105 20:27:02.995871 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74b103ef-1d58-4bb5-80b5-8314ec1df2bc-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "74b103ef-1d58-4bb5-80b5-8314ec1df2bc" (UID: "74b103ef-1d58-4bb5-80b5-8314ec1df2bc"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:27:02 crc kubenswrapper[4754]: I0105 20:27:02.996591 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74b103ef-1d58-4bb5-80b5-8314ec1df2bc-kube-api-access-ksq2h" (OuterVolumeSpecName: "kube-api-access-ksq2h") pod "74b103ef-1d58-4bb5-80b5-8314ec1df2bc" (UID: "74b103ef-1d58-4bb5-80b5-8314ec1df2bc"). InnerVolumeSpecName "kube-api-access-ksq2h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:27:03 crc kubenswrapper[4754]: I0105 20:27:03.006636 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74b103ef-1d58-4bb5-80b5-8314ec1df2bc-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "74b103ef-1d58-4bb5-80b5-8314ec1df2bc" (UID: "74b103ef-1d58-4bb5-80b5-8314ec1df2bc"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:27:03 crc kubenswrapper[4754]: I0105 20:27:03.092615 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2bw7\" (UniqueName: \"kubernetes.io/projected/177b09ff-4291-4ef0-93f7-66a03d8c0fe2-kube-api-access-s2bw7\") pod \"mysqld-exporter-fb57-account-create-update-kjlds\" (UID: \"177b09ff-4291-4ef0-93f7-66a03d8c0fe2\") " pod="openstack/mysqld-exporter-fb57-account-create-update-kjlds" Jan 05 20:27:03 crc kubenswrapper[4754]: I0105 20:27:03.092695 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/177b09ff-4291-4ef0-93f7-66a03d8c0fe2-operator-scripts\") pod \"mysqld-exporter-fb57-account-create-update-kjlds\" (UID: \"177b09ff-4291-4ef0-93f7-66a03d8c0fe2\") " pod="openstack/mysqld-exporter-fb57-account-create-update-kjlds" Jan 05 20:27:03 crc kubenswrapper[4754]: I0105 20:27:03.092772 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksq2h\" (UniqueName: \"kubernetes.io/projected/74b103ef-1d58-4bb5-80b5-8314ec1df2bc-kube-api-access-ksq2h\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:03 crc kubenswrapper[4754]: I0105 20:27:03.092787 4754 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/74b103ef-1d58-4bb5-80b5-8314ec1df2bc-console-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:03 crc kubenswrapper[4754]: I0105 20:27:03.092798 
4754 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/74b103ef-1d58-4bb5-80b5-8314ec1df2bc-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:03 crc kubenswrapper[4754]: I0105 20:27:03.092806 4754 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74b103ef-1d58-4bb5-80b5-8314ec1df2bc-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:03 crc kubenswrapper[4754]: I0105 20:27:03.092815 4754 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/74b103ef-1d58-4bb5-80b5-8314ec1df2bc-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:03 crc kubenswrapper[4754]: I0105 20:27:03.092824 4754 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/74b103ef-1d58-4bb5-80b5-8314ec1df2bc-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:03 crc kubenswrapper[4754]: I0105 20:27:03.093518 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/177b09ff-4291-4ef0-93f7-66a03d8c0fe2-operator-scripts\") pod \"mysqld-exporter-fb57-account-create-update-kjlds\" (UID: \"177b09ff-4291-4ef0-93f7-66a03d8c0fe2\") " pod="openstack/mysqld-exporter-fb57-account-create-update-kjlds" Jan 05 20:27:03 crc kubenswrapper[4754]: I0105 20:27:03.103697 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6c886d9bd8-wmpb8" event={"ID":"74b103ef-1d58-4bb5-80b5-8314ec1df2bc","Type":"ContainerDied","Data":"2911afcb39e72e918dc36cd7479170865cfc9f720f2715eb92c16e1073a42b94"} Jan 05 20:27:03 crc kubenswrapper[4754]: I0105 20:27:03.103745 4754 scope.go:117] "RemoveContainer" containerID="e7853d01eee293844b9d35c90c8865217e74da99fb8eb6c18391d080dd63af93" Jan 05 20:27:03 crc 
kubenswrapper[4754]: I0105 20:27:03.103861 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6c886d9bd8-wmpb8" Jan 05 20:27:03 crc kubenswrapper[4754]: I0105 20:27:03.151683 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"79eddb76-2d9c-40cc-97e7-6c186950168c","Type":"ContainerStarted","Data":"521c46edf8526b075d1653e40b8f1d1cbb08156c2a8dc728ff6dfb358d04e44b"} Jan 05 20:27:03 crc kubenswrapper[4754]: I0105 20:27:03.152311 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Jan 05 20:27:03 crc kubenswrapper[4754]: I0105 20:27:03.163027 4754 generic.go:334] "Generic (PLEG): container finished" podID="a9ad3a07-4d41-42c2-afcf-568e6c3b83e2" containerID="766d87aa306512213e7c5df874d5e01e8cbfc5e8fca15efb5f8b46ac2239b06b" exitCode=0 Jan 05 20:27:03 crc kubenswrapper[4754]: I0105 20:27:03.163157 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ct9fs" event={"ID":"a9ad3a07-4d41-42c2-afcf-568e6c3b83e2","Type":"ContainerDied","Data":"766d87aa306512213e7c5df874d5e01e8cbfc5e8fca15efb5f8b46ac2239b06b"} Jan 05 20:27:03 crc kubenswrapper[4754]: I0105 20:27:03.183386 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58","Type":"ContainerStarted","Data":"baf7c634a969aa0a63f9c70b6ee70cbeb4db04e11becc8c7030f9850939268ee"} Jan 05 20:27:03 crc kubenswrapper[4754]: I0105 20:27:03.183642 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Jan 05 20:27:03 crc kubenswrapper[4754]: I0105 20:27:03.196432 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2bw7\" (UniqueName: \"kubernetes.io/projected/177b09ff-4291-4ef0-93f7-66a03d8c0fe2-kube-api-access-s2bw7\") pod 
\"mysqld-exporter-fb57-account-create-update-kjlds\" (UID: \"177b09ff-4291-4ef0-93f7-66a03d8c0fe2\") " pod="openstack/mysqld-exporter-fb57-account-create-update-kjlds" Jan 05 20:27:03 crc kubenswrapper[4754]: I0105 20:27:03.247255 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9","Type":"ContainerStarted","Data":"3dfe593ba24fbe4eb931706d6c55f3e002f19127a5ceeff542df9626a8d2101f"} Jan 05 20:27:03 crc kubenswrapper[4754]: I0105 20:27:03.256203 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:27:03 crc kubenswrapper[4754]: I0105 20:27:03.322711 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-fb57-account-create-update-kjlds" Jan 05 20:27:03 crc kubenswrapper[4754]: I0105 20:27:03.346261 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=44.366988237 podStartE2EDuration="1m8.346241491s" podCreationTimestamp="2026-01-05 20:25:55 +0000 UTC" firstStartedPulling="2026-01-05 20:25:58.666837847 +0000 UTC m=+1245.376021721" lastFinishedPulling="2026-01-05 20:26:22.646091101 +0000 UTC m=+1269.355274975" observedRunningTime="2026-01-05 20:27:03.253565535 +0000 UTC m=+1309.962749409" watchObservedRunningTime="2026-01-05 20:27:03.346241491 +0000 UTC m=+1310.055425365" Jan 05 20:27:03 crc kubenswrapper[4754]: I0105 20:27:03.356874 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=43.896979613 podStartE2EDuration="1m8.356854689s" podCreationTimestamp="2026-01-05 20:25:55 +0000 UTC" firstStartedPulling="2026-01-05 20:25:58.209447923 +0000 UTC m=+1244.918631797" lastFinishedPulling="2026-01-05 20:26:22.669322999 +0000 UTC m=+1269.378506873" observedRunningTime="2026-01-05 20:27:03.287185895 +0000 UTC 
m=+1309.996369789" watchObservedRunningTime="2026-01-05 20:27:03.356854689 +0000 UTC m=+1310.066038563" Jan 05 20:27:03 crc kubenswrapper[4754]: I0105 20:27:03.456549 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=43.602911663 podStartE2EDuration="1m7.456529418s" podCreationTimestamp="2026-01-05 20:25:56 +0000 UTC" firstStartedPulling="2026-01-05 20:25:58.8231947 +0000 UTC m=+1245.532378574" lastFinishedPulling="2026-01-05 20:26:22.676812455 +0000 UTC m=+1269.385996329" observedRunningTime="2026-01-05 20:27:03.360002881 +0000 UTC m=+1310.069186745" watchObservedRunningTime="2026-01-05 20:27:03.456529418 +0000 UTC m=+1310.165713292" Jan 05 20:27:03 crc kubenswrapper[4754]: I0105 20:27:03.483178 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6c886d9bd8-wmpb8"] Jan 05 20:27:03 crc kubenswrapper[4754]: I0105 20:27:03.509486 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-6c886d9bd8-wmpb8"] Jan 05 20:27:03 crc kubenswrapper[4754]: I0105 20:27:03.509756 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ct9fs" Jan 05 20:27:03 crc kubenswrapper[4754]: I0105 20:27:03.609840 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74b103ef-1d58-4bb5-80b5-8314ec1df2bc" path="/var/lib/kubelet/pods/74b103ef-1d58-4bb5-80b5-8314ec1df2bc/volumes" Jan 05 20:27:03 crc kubenswrapper[4754]: I0105 20:27:03.625900 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9ad3a07-4d41-42c2-afcf-568e6c3b83e2-utilities\") pod \"a9ad3a07-4d41-42c2-afcf-568e6c3b83e2\" (UID: \"a9ad3a07-4d41-42c2-afcf-568e6c3b83e2\") " Jan 05 20:27:03 crc kubenswrapper[4754]: I0105 20:27:03.626374 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxtm9\" (UniqueName: \"kubernetes.io/projected/a9ad3a07-4d41-42c2-afcf-568e6c3b83e2-kube-api-access-bxtm9\") pod \"a9ad3a07-4d41-42c2-afcf-568e6c3b83e2\" (UID: \"a9ad3a07-4d41-42c2-afcf-568e6c3b83e2\") " Jan 05 20:27:03 crc kubenswrapper[4754]: I0105 20:27:03.626442 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9ad3a07-4d41-42c2-afcf-568e6c3b83e2-catalog-content\") pod \"a9ad3a07-4d41-42c2-afcf-568e6c3b83e2\" (UID: \"a9ad3a07-4d41-42c2-afcf-568e6c3b83e2\") " Jan 05 20:27:03 crc kubenswrapper[4754]: I0105 20:27:03.630991 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9ad3a07-4d41-42c2-afcf-568e6c3b83e2-utilities" (OuterVolumeSpecName: "utilities") pod "a9ad3a07-4d41-42c2-afcf-568e6c3b83e2" (UID: "a9ad3a07-4d41-42c2-afcf-568e6c3b83e2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:27:03 crc kubenswrapper[4754]: I0105 20:27:03.660648 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9ad3a07-4d41-42c2-afcf-568e6c3b83e2-kube-api-access-bxtm9" (OuterVolumeSpecName: "kube-api-access-bxtm9") pod "a9ad3a07-4d41-42c2-afcf-568e6c3b83e2" (UID: "a9ad3a07-4d41-42c2-afcf-568e6c3b83e2"). InnerVolumeSpecName "kube-api-access-bxtm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:27:03 crc kubenswrapper[4754]: I0105 20:27:03.753334 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9ad3a07-4d41-42c2-afcf-568e6c3b83e2-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:03 crc kubenswrapper[4754]: I0105 20:27:03.753373 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxtm9\" (UniqueName: \"kubernetes.io/projected/a9ad3a07-4d41-42c2-afcf-568e6c3b83e2-kube-api-access-bxtm9\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:04 crc kubenswrapper[4754]: I0105 20:27:04.047055 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9ad3a07-4d41-42c2-afcf-568e6c3b83e2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a9ad3a07-4d41-42c2-afcf-568e6c3b83e2" (UID: "a9ad3a07-4d41-42c2-afcf-568e6c3b83e2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:27:04 crc kubenswrapper[4754]: I0105 20:27:04.090699 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9ad3a07-4d41-42c2-afcf-568e6c3b83e2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:04 crc kubenswrapper[4754]: I0105 20:27:04.265891 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"909351bf-3608-40e6-9f93-bffa1ed74945","Type":"ContainerStarted","Data":"6d0c998419d882ab7c93d200d484f1df56fa3233f2dbfda4fa79621e1471f872"} Jan 05 20:27:04 crc kubenswrapper[4754]: I0105 20:27:04.267687 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 05 20:27:04 crc kubenswrapper[4754]: I0105 20:27:04.284849 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ct9fs" event={"ID":"a9ad3a07-4d41-42c2-afcf-568e6c3b83e2","Type":"ContainerDied","Data":"cb5aea81555d806cc9828e9f6722119c86aab520c29bf358b53f1719b998d915"} Jan 05 20:27:04 crc kubenswrapper[4754]: I0105 20:27:04.284902 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ct9fs" Jan 05 20:27:04 crc kubenswrapper[4754]: I0105 20:27:04.284919 4754 scope.go:117] "RemoveContainer" containerID="766d87aa306512213e7c5df874d5e01e8cbfc5e8fca15efb5f8b46ac2239b06b" Jan 05 20:27:04 crc kubenswrapper[4754]: I0105 20:27:04.293080 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-l252x" event={"ID":"2dedf888-afb9-42e6-80fa-3135a67787db","Type":"ContainerStarted","Data":"32e1ec7368e540644c53a45267e7c41a680d0a97103b6a339e9968c4b7217945"} Jan 05 20:27:04 crc kubenswrapper[4754]: I0105 20:27:04.335392 4754 scope.go:117] "RemoveContainer" containerID="40b35701b5376d94d268dc1ff540e21d1a3fdd5ec8b25e94c79478918c128630" Jan 05 20:27:04 crc kubenswrapper[4754]: I0105 20:27:04.343007 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=45.734473455 podStartE2EDuration="1m9.342984294s" podCreationTimestamp="2026-01-05 20:25:55 +0000 UTC" firstStartedPulling="2026-01-05 20:25:58.848252796 +0000 UTC m=+1245.557436670" lastFinishedPulling="2026-01-05 20:26:22.456763645 +0000 UTC m=+1269.165947509" observedRunningTime="2026-01-05 20:27:04.303132091 +0000 UTC m=+1311.012315965" watchObservedRunningTime="2026-01-05 20:27:04.342984294 +0000 UTC m=+1311.052168168" Jan 05 20:27:04 crc kubenswrapper[4754]: I0105 20:27:04.343421 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-l252x" podStartSLOduration=5.81459439 podStartE2EDuration="17.343416475s" podCreationTimestamp="2026-01-05 20:26:47 +0000 UTC" firstStartedPulling="2026-01-05 20:26:50.802563288 +0000 UTC m=+1297.511747162" lastFinishedPulling="2026-01-05 20:27:02.331385373 +0000 UTC m=+1309.040569247" observedRunningTime="2026-01-05 20:27:04.335753045 +0000 UTC m=+1311.044936919" watchObservedRunningTime="2026-01-05 20:27:04.343416475 +0000 UTC m=+1311.052600349" Jan 05 
20:27:04 crc kubenswrapper[4754]: I0105 20:27:04.349985 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c","Type":"ContainerStarted","Data":"0dd28c2022baee6376a36bbbc781b1f1ae63b45c6a605c28b01f3a4d6112a539"} Jan 05 20:27:04 crc kubenswrapper[4754]: I0105 20:27:04.423661 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-gfnpz"] Jan 05 20:27:04 crc kubenswrapper[4754]: I0105 20:27:04.432530 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ct9fs"] Jan 05 20:27:04 crc kubenswrapper[4754]: I0105 20:27:04.436111 4754 scope.go:117] "RemoveContainer" containerID="34d6ce69e979eae21cd0a2feed9806ff1eb9a62782b72da609ee2e88a853df5e" Jan 05 20:27:04 crc kubenswrapper[4754]: I0105 20:27:04.445209 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ct9fs"] Jan 05 20:27:04 crc kubenswrapper[4754]: I0105 20:27:04.454020 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 05 20:27:04 crc kubenswrapper[4754]: I0105 20:27:04.467716 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-zpltv"] Jan 05 20:27:04 crc kubenswrapper[4754]: I0105 20:27:04.485347 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7de3-account-create-update-pxbnf"] Jan 05 20:27:04 crc kubenswrapper[4754]: I0105 20:27:04.506217 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-b8n94"] Jan 05 20:27:04 crc kubenswrapper[4754]: I0105 20:27:04.520246 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-s4cdh"] Jan 05 20:27:04 crc kubenswrapper[4754]: I0105 20:27:04.528519 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6412-account-create-update-jpj4f"] Jan 05 20:27:04 crc 
kubenswrapper[4754]: I0105 20:27:04.621135 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-2f6c-account-create-update-krtph"] Jan 05 20:27:04 crc kubenswrapper[4754]: I0105 20:27:04.774731 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-pdnxz"] Jan 05 20:27:04 crc kubenswrapper[4754]: I0105 20:27:04.804339 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-fb57-account-create-update-kjlds"] Jan 05 20:27:05 crc kubenswrapper[4754]: I0105 20:27:05.363065 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-b8n94" event={"ID":"0bd78c81-6fde-4c20-9771-cf86313b8f18","Type":"ContainerStarted","Data":"eb3621588caeb2eebfee6700ba1050d6cca879a0af3cee873d3dc13c540bfb2a"} Jan 05 20:27:05 crc kubenswrapper[4754]: I0105 20:27:05.365632 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-pdnxz" event={"ID":"8873df75-8512-43e0-8d94-d4e45e6904d4","Type":"ContainerStarted","Data":"393ad1a253b235d7dfdc6ae93a5eed961e4d8953efc2fba69d93d3c938ac9acb"} Jan 05 20:27:05 crc kubenswrapper[4754]: I0105 20:27:05.367501 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7de3-account-create-update-pxbnf" event={"ID":"9ea55880-d313-447e-a193-e69f30a74375","Type":"ContainerStarted","Data":"6c4b1e84aacb438342d711e3e53c38a4200c83631bad9f6e6b064d4557e0b071"} Jan 05 20:27:05 crc kubenswrapper[4754]: I0105 20:27:05.368314 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6412-account-create-update-jpj4f" event={"ID":"3b3003ea-ab70-4762-b50d-bf15c06ea9a7","Type":"ContainerStarted","Data":"55ee05fa5fd19a4ab02d58200ded5f8b9ee705fc05c465f3234fa95cc26c9c30"} Jan 05 20:27:05 crc kubenswrapper[4754]: I0105 20:27:05.370385 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-s4cdh" 
event={"ID":"58066c5a-8387-449d-9004-f2e6a1d37e53","Type":"ContainerStarted","Data":"de7d0d16e756640c459696c9308cc506233663ce550297519b67fc27022dd2c7"} Jan 05 20:27:05 crc kubenswrapper[4754]: I0105 20:27:05.372151 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-fb57-account-create-update-kjlds" event={"ID":"177b09ff-4291-4ef0-93f7-66a03d8c0fe2","Type":"ContainerStarted","Data":"d2e16da2eede30ec964718def778475dde7ec04e3c815262708e36163ca791d5"} Jan 05 20:27:05 crc kubenswrapper[4754]: I0105 20:27:05.374499 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2f6c-account-create-update-krtph" event={"ID":"cf5c2131-a9a1-4345-8ae8-069fba0812a5","Type":"ContainerStarted","Data":"6e03853d6ffd41c0c210317f77b4356d63140ff7873995466fc4e4db58782a88"} Jan 05 20:27:05 crc kubenswrapper[4754]: I0105 20:27:05.376124 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-zpltv" event={"ID":"745e7c45-2a3b-4c75-b2a9-a4a1c38ec2fd","Type":"ContainerStarted","Data":"80990babaf30d6c9887cf31bd279164dcf6bfa0d97fa45fca066806cca22551e"} Jan 05 20:27:05 crc kubenswrapper[4754]: I0105 20:27:05.378927 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"bd134f60-e97c-487b-be9e-c356c7478c21","Type":"ContainerStarted","Data":"45bee56441c079dc3e8ca7e089e22e6a755f14ba50ce153294e488cd310f3442"} Jan 05 20:27:05 crc kubenswrapper[4754]: I0105 20:27:05.382878 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-gfnpz" event={"ID":"4d70ef99-cdf9-474e-8e77-d75ce8d7a7ed","Type":"ContainerStarted","Data":"a24054b0a175c56e1337c3b973ce7b6f5709f7532a441aee579688787f0b4ee5"} Jan 05 20:27:05 crc kubenswrapper[4754]: I0105 20:27:05.382903 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-gfnpz" 
event={"ID":"4d70ef99-cdf9-474e-8e77-d75ce8d7a7ed","Type":"ContainerStarted","Data":"67a781f7bf19e59a2dcfa85edf4c97d7ac5c49ae7da64bd32cfca1acb05fe000"} Jan 05 20:27:05 crc kubenswrapper[4754]: I0105 20:27:05.407569 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-gfnpz" podStartSLOduration=5.407548782 podStartE2EDuration="5.407548782s" podCreationTimestamp="2026-01-05 20:27:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:27:05.400060226 +0000 UTC m=+1312.109244100" watchObservedRunningTime="2026-01-05 20:27:05.407548782 +0000 UTC m=+1312.116732656" Jan 05 20:27:05 crc kubenswrapper[4754]: I0105 20:27:05.613550 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9ad3a07-4d41-42c2-afcf-568e6c3b83e2" path="/var/lib/kubelet/pods/a9ad3a07-4d41-42c2-afcf-568e6c3b83e2/volumes" Jan 05 20:27:06 crc kubenswrapper[4754]: I0105 20:27:06.423285 4754 generic.go:334] "Generic (PLEG): container finished" podID="0bd78c81-6fde-4c20-9771-cf86313b8f18" containerID="e3b601f5b9c17bf008e26967bd4ce50d75528faf76c4af07b43a2fabc61bc22c" exitCode=0 Jan 05 20:27:06 crc kubenswrapper[4754]: I0105 20:27:06.423397 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-b8n94" event={"ID":"0bd78c81-6fde-4c20-9771-cf86313b8f18","Type":"ContainerDied","Data":"e3b601f5b9c17bf008e26967bd4ce50d75528faf76c4af07b43a2fabc61bc22c"} Jan 05 20:27:06 crc kubenswrapper[4754]: I0105 20:27:06.427375 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-pdnxz" event={"ID":"8873df75-8512-43e0-8d94-d4e45e6904d4","Type":"ContainerStarted","Data":"b2b0b3aac6efb14d3b8ce9a40a2bc576aab7239fa6fefc0edbc919111a5c01ca"} Jan 05 20:27:06 crc kubenswrapper[4754]: I0105 20:27:06.435041 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-db-create-zpltv" event={"ID":"745e7c45-2a3b-4c75-b2a9-a4a1c38ec2fd","Type":"ContainerStarted","Data":"4caad7c30f61dbd6b6c6ef02f12ab054d334bdf5a12ea0a6f4bcb0aaf737faf4"} Jan 05 20:27:06 crc kubenswrapper[4754]: I0105 20:27:06.442752 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7de3-account-create-update-pxbnf" event={"ID":"9ea55880-d313-447e-a193-e69f30a74375","Type":"ContainerStarted","Data":"b787e2af61560104d1cac0b9ac110c21dc4156c7c157ab5c6f8885459197baa9"} Jan 05 20:27:06 crc kubenswrapper[4754]: I0105 20:27:06.446305 4754 generic.go:334] "Generic (PLEG): container finished" podID="4d70ef99-cdf9-474e-8e77-d75ce8d7a7ed" containerID="a24054b0a175c56e1337c3b973ce7b6f5709f7532a441aee579688787f0b4ee5" exitCode=0 Jan 05 20:27:06 crc kubenswrapper[4754]: I0105 20:27:06.446369 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-gfnpz" event={"ID":"4d70ef99-cdf9-474e-8e77-d75ce8d7a7ed","Type":"ContainerDied","Data":"a24054b0a175c56e1337c3b973ce7b6f5709f7532a441aee579688787f0b4ee5"} Jan 05 20:27:06 crc kubenswrapper[4754]: I0105 20:27:06.447797 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-fb57-account-create-update-kjlds" event={"ID":"177b09ff-4291-4ef0-93f7-66a03d8c0fe2","Type":"ContainerStarted","Data":"cc1c34ff1f03bb075858d775b280315d819cda46e7ffc141430b72fde14c565a"} Jan 05 20:27:06 crc kubenswrapper[4754]: I0105 20:27:06.456053 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2f6c-account-create-update-krtph" event={"ID":"cf5c2131-a9a1-4345-8ae8-069fba0812a5","Type":"ContainerStarted","Data":"d65036cacc8453c4f8ee49771bcc5fa30e35a937e206c23e5f20583f59fc87e6"} Jan 05 20:27:06 crc kubenswrapper[4754]: I0105 20:27:06.485750 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-openstack-cell1-db-create-pdnxz" podStartSLOduration=4.485722467 
podStartE2EDuration="4.485722467s" podCreationTimestamp="2026-01-05 20:27:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:27:06.479929585 +0000 UTC m=+1313.189113459" watchObservedRunningTime="2026-01-05 20:27:06.485722467 +0000 UTC m=+1313.194906341" Jan 05 20:27:06 crc kubenswrapper[4754]: I0105 20:27:06.505586 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-2f6c-account-create-update-krtph" podStartSLOduration=7.505554976 podStartE2EDuration="7.505554976s" podCreationTimestamp="2026-01-05 20:26:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:27:06.5015194 +0000 UTC m=+1313.210703274" watchObservedRunningTime="2026-01-05 20:27:06.505554976 +0000 UTC m=+1313.214738850" Jan 05 20:27:06 crc kubenswrapper[4754]: I0105 20:27:06.547674 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-zpltv" podStartSLOduration=7.547643408 podStartE2EDuration="7.547643408s" podCreationTimestamp="2026-01-05 20:26:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:27:06.528977259 +0000 UTC m=+1313.238161133" watchObservedRunningTime="2026-01-05 20:27:06.547643408 +0000 UTC m=+1313.256827282" Jan 05 20:27:06 crc kubenswrapper[4754]: I0105 20:27:06.573118 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-7de3-account-create-update-pxbnf" podStartSLOduration=6.573092084 podStartE2EDuration="6.573092084s" podCreationTimestamp="2026-01-05 20:27:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:27:06.565955247 +0000 UTC m=+1313.275139121" 
watchObservedRunningTime="2026-01-05 20:27:06.573092084 +0000 UTC m=+1313.282275958" Jan 05 20:27:06 crc kubenswrapper[4754]: I0105 20:27:06.597072 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-fb57-account-create-update-kjlds" podStartSLOduration=4.597049791 podStartE2EDuration="4.597049791s" podCreationTimestamp="2026-01-05 20:27:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:27:06.584499033 +0000 UTC m=+1313.293682907" watchObservedRunningTime="2026-01-05 20:27:06.597049791 +0000 UTC m=+1313.306233665" Jan 05 20:27:07 crc kubenswrapper[4754]: I0105 20:27:07.466919 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6412-account-create-update-jpj4f" event={"ID":"3b3003ea-ab70-4762-b50d-bf15c06ea9a7","Type":"ContainerStarted","Data":"6c5ff6faa38623fc188aa6f4d099d68de166aeb6769816fbefbbafdaa495ba8d"} Jan 05 20:27:07 crc kubenswrapper[4754]: I0105 20:27:07.470100 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-s4cdh" event={"ID":"58066c5a-8387-449d-9004-f2e6a1d37e53","Type":"ContainerStarted","Data":"49033cb3a110591ebff17507b044a07ee8d0a70d136d57e7e5dd097082e869c6"} Jan 05 20:27:07 crc kubenswrapper[4754]: I0105 20:27:07.473269 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c","Type":"ContainerStarted","Data":"77d9d0f14982754b30237f7f0277da2cd7bc39cd761d56e226f5475e5a348508"} Jan 05 20:27:07 crc kubenswrapper[4754]: I0105 20:27:07.496424 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6412-account-create-update-jpj4f" podStartSLOduration=7.496402225 podStartE2EDuration="7.496402225s" podCreationTimestamp="2026-01-05 20:27:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:27:07.487993205 +0000 UTC m=+1314.197177089" watchObservedRunningTime="2026-01-05 20:27:07.496402225 +0000 UTC m=+1314.205586099" Jan 05 20:27:07 crc kubenswrapper[4754]: I0105 20:27:07.511984 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-s4cdh" podStartSLOduration=7.511966492 podStartE2EDuration="7.511966492s" podCreationTimestamp="2026-01-05 20:27:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:27:07.510069673 +0000 UTC m=+1314.219253547" watchObservedRunningTime="2026-01-05 20:27:07.511966492 +0000 UTC m=+1314.221150366" Jan 05 20:27:08 crc kubenswrapper[4754]: I0105 20:27:08.086893 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-b8n94" Jan 05 20:27:08 crc kubenswrapper[4754]: I0105 20:27:08.095158 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-gfnpz" Jan 05 20:27:08 crc kubenswrapper[4754]: I0105 20:27:08.247927 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bd78c81-6fde-4c20-9771-cf86313b8f18-operator-scripts\") pod \"0bd78c81-6fde-4c20-9771-cf86313b8f18\" (UID: \"0bd78c81-6fde-4c20-9771-cf86313b8f18\") " Jan 05 20:27:08 crc kubenswrapper[4754]: I0105 20:27:08.248017 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwszt\" (UniqueName: \"kubernetes.io/projected/0bd78c81-6fde-4c20-9771-cf86313b8f18-kube-api-access-rwszt\") pod \"0bd78c81-6fde-4c20-9771-cf86313b8f18\" (UID: \"0bd78c81-6fde-4c20-9771-cf86313b8f18\") " Jan 05 20:27:08 crc kubenswrapper[4754]: I0105 20:27:08.248173 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d70ef99-cdf9-474e-8e77-d75ce8d7a7ed-operator-scripts\") pod \"4d70ef99-cdf9-474e-8e77-d75ce8d7a7ed\" (UID: \"4d70ef99-cdf9-474e-8e77-d75ce8d7a7ed\") " Jan 05 20:27:08 crc kubenswrapper[4754]: I0105 20:27:08.248204 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7gb5\" (UniqueName: \"kubernetes.io/projected/4d70ef99-cdf9-474e-8e77-d75ce8d7a7ed-kube-api-access-v7gb5\") pod \"4d70ef99-cdf9-474e-8e77-d75ce8d7a7ed\" (UID: \"4d70ef99-cdf9-474e-8e77-d75ce8d7a7ed\") " Jan 05 20:27:08 crc kubenswrapper[4754]: I0105 20:27:08.249276 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bd78c81-6fde-4c20-9771-cf86313b8f18-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0bd78c81-6fde-4c20-9771-cf86313b8f18" (UID: "0bd78c81-6fde-4c20-9771-cf86313b8f18"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:27:08 crc kubenswrapper[4754]: I0105 20:27:08.249670 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d70ef99-cdf9-474e-8e77-d75ce8d7a7ed-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4d70ef99-cdf9-474e-8e77-d75ce8d7a7ed" (UID: "4d70ef99-cdf9-474e-8e77-d75ce8d7a7ed"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:27:08 crc kubenswrapper[4754]: I0105 20:27:08.254175 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bd78c81-6fde-4c20-9771-cf86313b8f18-kube-api-access-rwszt" (OuterVolumeSpecName: "kube-api-access-rwszt") pod "0bd78c81-6fde-4c20-9771-cf86313b8f18" (UID: "0bd78c81-6fde-4c20-9771-cf86313b8f18"). InnerVolumeSpecName "kube-api-access-rwszt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:27:08 crc kubenswrapper[4754]: I0105 20:27:08.254771 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d70ef99-cdf9-474e-8e77-d75ce8d7a7ed-kube-api-access-v7gb5" (OuterVolumeSpecName: "kube-api-access-v7gb5") pod "4d70ef99-cdf9-474e-8e77-d75ce8d7a7ed" (UID: "4d70ef99-cdf9-474e-8e77-d75ce8d7a7ed"). InnerVolumeSpecName "kube-api-access-v7gb5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:27:08 crc kubenswrapper[4754]: I0105 20:27:08.350720 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7gb5\" (UniqueName: \"kubernetes.io/projected/4d70ef99-cdf9-474e-8e77-d75ce8d7a7ed-kube-api-access-v7gb5\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:08 crc kubenswrapper[4754]: I0105 20:27:08.350760 4754 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bd78c81-6fde-4c20-9771-cf86313b8f18-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:08 crc kubenswrapper[4754]: I0105 20:27:08.350774 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwszt\" (UniqueName: \"kubernetes.io/projected/0bd78c81-6fde-4c20-9771-cf86313b8f18-kube-api-access-rwszt\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:08 crc kubenswrapper[4754]: I0105 20:27:08.350786 4754 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d70ef99-cdf9-474e-8e77-d75ce8d7a7ed-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:08 crc kubenswrapper[4754]: I0105 20:27:08.482567 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-b8n94" Jan 05 20:27:08 crc kubenswrapper[4754]: I0105 20:27:08.483712 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-b8n94" event={"ID":"0bd78c81-6fde-4c20-9771-cf86313b8f18","Type":"ContainerDied","Data":"eb3621588caeb2eebfee6700ba1050d6cca879a0af3cee873d3dc13c540bfb2a"} Jan 05 20:27:08 crc kubenswrapper[4754]: I0105 20:27:08.483755 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb3621588caeb2eebfee6700ba1050d6cca879a0af3cee873d3dc13c540bfb2a" Jan 05 20:27:08 crc kubenswrapper[4754]: I0105 20:27:08.484608 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"bd134f60-e97c-487b-be9e-c356c7478c21","Type":"ContainerStarted","Data":"709626ef98220cf0bc8eb27d6bcbf3f4c1469b194ad468a0f28360d3279bc5f6"} Jan 05 20:27:08 crc kubenswrapper[4754]: I0105 20:27:08.486094 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-gfnpz" event={"ID":"4d70ef99-cdf9-474e-8e77-d75ce8d7a7ed","Type":"ContainerDied","Data":"67a781f7bf19e59a2dcfa85edf4c97d7ac5c49ae7da64bd32cfca1acb05fe000"} Jan 05 20:27:08 crc kubenswrapper[4754]: I0105 20:27:08.486129 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67a781f7bf19e59a2dcfa85edf4c97d7ac5c49ae7da64bd32cfca1acb05fe000" Jan 05 20:27:08 crc kubenswrapper[4754]: I0105 20:27:08.486177 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-gfnpz" Jan 05 20:27:09 crc kubenswrapper[4754]: I0105 20:27:09.497331 4754 generic.go:334] "Generic (PLEG): container finished" podID="cf5c2131-a9a1-4345-8ae8-069fba0812a5" containerID="d65036cacc8453c4f8ee49771bcc5fa30e35a937e206c23e5f20583f59fc87e6" exitCode=0 Jan 05 20:27:09 crc kubenswrapper[4754]: I0105 20:27:09.497406 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2f6c-account-create-update-krtph" event={"ID":"cf5c2131-a9a1-4345-8ae8-069fba0812a5","Type":"ContainerDied","Data":"d65036cacc8453c4f8ee49771bcc5fa30e35a937e206c23e5f20583f59fc87e6"} Jan 05 20:27:09 crc kubenswrapper[4754]: I0105 20:27:09.500727 4754 generic.go:334] "Generic (PLEG): container finished" podID="8873df75-8512-43e0-8d94-d4e45e6904d4" containerID="b2b0b3aac6efb14d3b8ce9a40a2bc576aab7239fa6fefc0edbc919111a5c01ca" exitCode=0 Jan 05 20:27:09 crc kubenswrapper[4754]: I0105 20:27:09.500871 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-pdnxz" event={"ID":"8873df75-8512-43e0-8d94-d4e45e6904d4","Type":"ContainerDied","Data":"b2b0b3aac6efb14d3b8ce9a40a2bc576aab7239fa6fefc0edbc919111a5c01ca"} Jan 05 20:27:09 crc kubenswrapper[4754]: I0105 20:27:09.504879 4754 generic.go:334] "Generic (PLEG): container finished" podID="745e7c45-2a3b-4c75-b2a9-a4a1c38ec2fd" containerID="4caad7c30f61dbd6b6c6ef02f12ab054d334bdf5a12ea0a6f4bcb0aaf737faf4" exitCode=0 Jan 05 20:27:09 crc kubenswrapper[4754]: I0105 20:27:09.504960 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-zpltv" event={"ID":"745e7c45-2a3b-4c75-b2a9-a4a1c38ec2fd","Type":"ContainerDied","Data":"4caad7c30f61dbd6b6c6ef02f12ab054d334bdf5a12ea0a6f4bcb0aaf737faf4"} Jan 05 20:27:09 crc kubenswrapper[4754]: I0105 20:27:09.507718 4754 generic.go:334] "Generic (PLEG): container finished" podID="9ea55880-d313-447e-a193-e69f30a74375" 
containerID="b787e2af61560104d1cac0b9ac110c21dc4156c7c157ab5c6f8885459197baa9" exitCode=0 Jan 05 20:27:09 crc kubenswrapper[4754]: I0105 20:27:09.507788 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7de3-account-create-update-pxbnf" event={"ID":"9ea55880-d313-447e-a193-e69f30a74375","Type":"ContainerDied","Data":"b787e2af61560104d1cac0b9ac110c21dc4156c7c157ab5c6f8885459197baa9"} Jan 05 20:27:09 crc kubenswrapper[4754]: I0105 20:27:09.513506 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"bd134f60-e97c-487b-be9e-c356c7478c21","Type":"ContainerStarted","Data":"6a20a269180c719f1ce208bea9a1cc4a02340db07c78d6bf0adfa633a3a87dc2"} Jan 05 20:27:09 crc kubenswrapper[4754]: I0105 20:27:09.513730 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 05 20:27:09 crc kubenswrapper[4754]: I0105 20:27:09.520772 4754 generic.go:334] "Generic (PLEG): container finished" podID="3b3003ea-ab70-4762-b50d-bf15c06ea9a7" containerID="6c5ff6faa38623fc188aa6f4d099d68de166aeb6769816fbefbbafdaa495ba8d" exitCode=0 Jan 05 20:27:09 crc kubenswrapper[4754]: I0105 20:27:09.520866 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6412-account-create-update-jpj4f" event={"ID":"3b3003ea-ab70-4762-b50d-bf15c06ea9a7","Type":"ContainerDied","Data":"6c5ff6faa38623fc188aa6f4d099d68de166aeb6769816fbefbbafdaa495ba8d"} Jan 05 20:27:09 crc kubenswrapper[4754]: I0105 20:27:09.523893 4754 generic.go:334] "Generic (PLEG): container finished" podID="58066c5a-8387-449d-9004-f2e6a1d37e53" containerID="49033cb3a110591ebff17507b044a07ee8d0a70d136d57e7e5dd097082e869c6" exitCode=0 Jan 05 20:27:09 crc kubenswrapper[4754]: I0105 20:27:09.523980 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-s4cdh" 
event={"ID":"58066c5a-8387-449d-9004-f2e6a1d37e53","Type":"ContainerDied","Data":"49033cb3a110591ebff17507b044a07ee8d0a70d136d57e7e5dd097082e869c6"} Jan 05 20:27:09 crc kubenswrapper[4754]: I0105 20:27:09.527796 4754 generic.go:334] "Generic (PLEG): container finished" podID="177b09ff-4291-4ef0-93f7-66a03d8c0fe2" containerID="cc1c34ff1f03bb075858d775b280315d819cda46e7ffc141430b72fde14c565a" exitCode=0 Jan 05 20:27:09 crc kubenswrapper[4754]: I0105 20:27:09.527864 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-fb57-account-create-update-kjlds" event={"ID":"177b09ff-4291-4ef0-93f7-66a03d8c0fe2","Type":"ContainerDied","Data":"cc1c34ff1f03bb075858d775b280315d819cda46e7ffc141430b72fde14c565a"} Jan 05 20:27:09 crc kubenswrapper[4754]: I0105 20:27:09.566677 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=12.802288682 podStartE2EDuration="14.56663975s" podCreationTimestamp="2026-01-05 20:26:55 +0000 UTC" firstStartedPulling="2026-01-05 20:27:04.487394714 +0000 UTC m=+1311.196578588" lastFinishedPulling="2026-01-05 20:27:06.251745782 +0000 UTC m=+1312.960929656" observedRunningTime="2026-01-05 20:27:09.560685774 +0000 UTC m=+1316.269869638" watchObservedRunningTime="2026-01-05 20:27:09.56663975 +0000 UTC m=+1316.275823624" Jan 05 20:27:11 crc kubenswrapper[4754]: I0105 20:27:11.451272 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-pbz4n" podUID="5212dab3-1e5c-48b6-a710-f3551ab2ceaf" containerName="ovn-controller" probeResult="failure" output=< Jan 05 20:27:11 crc kubenswrapper[4754]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 05 20:27:11 crc kubenswrapper[4754]: > Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.149239 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-2f6c-account-create-update-krtph" Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.155812 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-zpltv" Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.161933 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7de3-account-create-update-pxbnf" Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.170405 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-fb57-account-create-update-kjlds" Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.191381 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-pdnxz" Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.204898 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6412-account-create-update-jpj4f" Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.227619 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-s4cdh" Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.241696 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ea55880-d313-447e-a193-e69f30a74375-operator-scripts\") pod \"9ea55880-d313-447e-a193-e69f30a74375\" (UID: \"9ea55880-d313-447e-a193-e69f30a74375\") " Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.242172 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2bw7\" (UniqueName: \"kubernetes.io/projected/177b09ff-4291-4ef0-93f7-66a03d8c0fe2-kube-api-access-s2bw7\") pod \"177b09ff-4291-4ef0-93f7-66a03d8c0fe2\" (UID: \"177b09ff-4291-4ef0-93f7-66a03d8c0fe2\") " Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.242311 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbvtb\" (UniqueName: \"kubernetes.io/projected/745e7c45-2a3b-4c75-b2a9-a4a1c38ec2fd-kube-api-access-hbvtb\") pod \"745e7c45-2a3b-4c75-b2a9-a4a1c38ec2fd\" (UID: \"745e7c45-2a3b-4c75-b2a9-a4a1c38ec2fd\") " Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.242358 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ea55880-d313-447e-a193-e69f30a74375-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9ea55880-d313-447e-a193-e69f30a74375" (UID: "9ea55880-d313-447e-a193-e69f30a74375"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.242417 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf5c2131-a9a1-4345-8ae8-069fba0812a5-operator-scripts\") pod \"cf5c2131-a9a1-4345-8ae8-069fba0812a5\" (UID: \"cf5c2131-a9a1-4345-8ae8-069fba0812a5\") " Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.242661 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksz4d\" (UniqueName: \"kubernetes.io/projected/cf5c2131-a9a1-4345-8ae8-069fba0812a5-kube-api-access-ksz4d\") pod \"cf5c2131-a9a1-4345-8ae8-069fba0812a5\" (UID: \"cf5c2131-a9a1-4345-8ae8-069fba0812a5\") " Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.242700 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6xst\" (UniqueName: \"kubernetes.io/projected/9ea55880-d313-447e-a193-e69f30a74375-kube-api-access-b6xst\") pod \"9ea55880-d313-447e-a193-e69f30a74375\" (UID: \"9ea55880-d313-447e-a193-e69f30a74375\") " Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.242727 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/177b09ff-4291-4ef0-93f7-66a03d8c0fe2-operator-scripts\") pod \"177b09ff-4291-4ef0-93f7-66a03d8c0fe2\" (UID: \"177b09ff-4291-4ef0-93f7-66a03d8c0fe2\") " Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.242792 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/745e7c45-2a3b-4c75-b2a9-a4a1c38ec2fd-operator-scripts\") pod \"745e7c45-2a3b-4c75-b2a9-a4a1c38ec2fd\" (UID: \"745e7c45-2a3b-4c75-b2a9-a4a1c38ec2fd\") " Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.243511 4754 reconciler_common.go:293] "Volume detached for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ea55880-d313-447e-a193-e69f30a74375-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.244082 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/745e7c45-2a3b-4c75-b2a9-a4a1c38ec2fd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "745e7c45-2a3b-4c75-b2a9-a4a1c38ec2fd" (UID: "745e7c45-2a3b-4c75-b2a9-a4a1c38ec2fd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.248410 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/177b09ff-4291-4ef0-93f7-66a03d8c0fe2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "177b09ff-4291-4ef0-93f7-66a03d8c0fe2" (UID: "177b09ff-4291-4ef0-93f7-66a03d8c0fe2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.253901 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf5c2131-a9a1-4345-8ae8-069fba0812a5-kube-api-access-ksz4d" (OuterVolumeSpecName: "kube-api-access-ksz4d") pod "cf5c2131-a9a1-4345-8ae8-069fba0812a5" (UID: "cf5c2131-a9a1-4345-8ae8-069fba0812a5"). InnerVolumeSpecName "kube-api-access-ksz4d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.254342 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf5c2131-a9a1-4345-8ae8-069fba0812a5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cf5c2131-a9a1-4345-8ae8-069fba0812a5" (UID: "cf5c2131-a9a1-4345-8ae8-069fba0812a5"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.257531 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ea55880-d313-447e-a193-e69f30a74375-kube-api-access-b6xst" (OuterVolumeSpecName: "kube-api-access-b6xst") pod "9ea55880-d313-447e-a193-e69f30a74375" (UID: "9ea55880-d313-447e-a193-e69f30a74375"). InnerVolumeSpecName "kube-api-access-b6xst". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.260915 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/745e7c45-2a3b-4c75-b2a9-a4a1c38ec2fd-kube-api-access-hbvtb" (OuterVolumeSpecName: "kube-api-access-hbvtb") pod "745e7c45-2a3b-4c75-b2a9-a4a1c38ec2fd" (UID: "745e7c45-2a3b-4c75-b2a9-a4a1c38ec2fd"). InnerVolumeSpecName "kube-api-access-hbvtb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.262790 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/177b09ff-4291-4ef0-93f7-66a03d8c0fe2-kube-api-access-s2bw7" (OuterVolumeSpecName: "kube-api-access-s2bw7") pod "177b09ff-4291-4ef0-93f7-66a03d8c0fe2" (UID: "177b09ff-4291-4ef0-93f7-66a03d8c0fe2"). InnerVolumeSpecName "kube-api-access-s2bw7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.345253 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b3003ea-ab70-4762-b50d-bf15c06ea9a7-operator-scripts\") pod \"3b3003ea-ab70-4762-b50d-bf15c06ea9a7\" (UID: \"3b3003ea-ab70-4762-b50d-bf15c06ea9a7\") " Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.345509 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pgjw\" (UniqueName: \"kubernetes.io/projected/3b3003ea-ab70-4762-b50d-bf15c06ea9a7-kube-api-access-7pgjw\") pod \"3b3003ea-ab70-4762-b50d-bf15c06ea9a7\" (UID: \"3b3003ea-ab70-4762-b50d-bf15c06ea9a7\") " Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.345603 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58066c5a-8387-449d-9004-f2e6a1d37e53-operator-scripts\") pod \"58066c5a-8387-449d-9004-f2e6a1d37e53\" (UID: \"58066c5a-8387-449d-9004-f2e6a1d37e53\") " Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.345724 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8873df75-8512-43e0-8d94-d4e45e6904d4-operator-scripts\") pod \"8873df75-8512-43e0-8d94-d4e45e6904d4\" (UID: \"8873df75-8512-43e0-8d94-d4e45e6904d4\") " Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.345781 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8kj9\" (UniqueName: \"kubernetes.io/projected/58066c5a-8387-449d-9004-f2e6a1d37e53-kube-api-access-t8kj9\") pod \"58066c5a-8387-449d-9004-f2e6a1d37e53\" (UID: \"58066c5a-8387-449d-9004-f2e6a1d37e53\") " Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.345857 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-nph82\" (UniqueName: \"kubernetes.io/projected/8873df75-8512-43e0-8d94-d4e45e6904d4-kube-api-access-nph82\") pod \"8873df75-8512-43e0-8d94-d4e45e6904d4\" (UID: \"8873df75-8512-43e0-8d94-d4e45e6904d4\") " Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.346515 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6xst\" (UniqueName: \"kubernetes.io/projected/9ea55880-d313-447e-a193-e69f30a74375-kube-api-access-b6xst\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.346543 4754 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/177b09ff-4291-4ef0-93f7-66a03d8c0fe2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.346559 4754 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/745e7c45-2a3b-4c75-b2a9-a4a1c38ec2fd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.346572 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2bw7\" (UniqueName: \"kubernetes.io/projected/177b09ff-4291-4ef0-93f7-66a03d8c0fe2-kube-api-access-s2bw7\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.346583 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbvtb\" (UniqueName: \"kubernetes.io/projected/745e7c45-2a3b-4c75-b2a9-a4a1c38ec2fd-kube-api-access-hbvtb\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.346596 4754 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf5c2131-a9a1-4345-8ae8-069fba0812a5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.346609 4754 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-ksz4d\" (UniqueName: \"kubernetes.io/projected/cf5c2131-a9a1-4345-8ae8-069fba0812a5-kube-api-access-ksz4d\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.346703 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58066c5a-8387-449d-9004-f2e6a1d37e53-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "58066c5a-8387-449d-9004-f2e6a1d37e53" (UID: "58066c5a-8387-449d-9004-f2e6a1d37e53"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.347150 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b3003ea-ab70-4762-b50d-bf15c06ea9a7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3b3003ea-ab70-4762-b50d-bf15c06ea9a7" (UID: "3b3003ea-ab70-4762-b50d-bf15c06ea9a7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.347766 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8873df75-8512-43e0-8d94-d4e45e6904d4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8873df75-8512-43e0-8d94-d4e45e6904d4" (UID: "8873df75-8512-43e0-8d94-d4e45e6904d4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.353562 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b3003ea-ab70-4762-b50d-bf15c06ea9a7-kube-api-access-7pgjw" (OuterVolumeSpecName: "kube-api-access-7pgjw") pod "3b3003ea-ab70-4762-b50d-bf15c06ea9a7" (UID: "3b3003ea-ab70-4762-b50d-bf15c06ea9a7"). InnerVolumeSpecName "kube-api-access-7pgjw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.357285 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58066c5a-8387-449d-9004-f2e6a1d37e53-kube-api-access-t8kj9" (OuterVolumeSpecName: "kube-api-access-t8kj9") pod "58066c5a-8387-449d-9004-f2e6a1d37e53" (UID: "58066c5a-8387-449d-9004-f2e6a1d37e53"). InnerVolumeSpecName "kube-api-access-t8kj9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.358697 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8873df75-8512-43e0-8d94-d4e45e6904d4-kube-api-access-nph82" (OuterVolumeSpecName: "kube-api-access-nph82") pod "8873df75-8512-43e0-8d94-d4e45e6904d4" (UID: "8873df75-8512-43e0-8d94-d4e45e6904d4"). InnerVolumeSpecName "kube-api-access-nph82". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.448172 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pgjw\" (UniqueName: \"kubernetes.io/projected/3b3003ea-ab70-4762-b50d-bf15c06ea9a7-kube-api-access-7pgjw\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.448217 4754 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58066c5a-8387-449d-9004-f2e6a1d37e53-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.448227 4754 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8873df75-8512-43e0-8d94-d4e45e6904d4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.448237 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8kj9\" (UniqueName: 
\"kubernetes.io/projected/58066c5a-8387-449d-9004-f2e6a1d37e53-kube-api-access-t8kj9\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.448246 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nph82\" (UniqueName: \"kubernetes.io/projected/8873df75-8512-43e0-8d94-d4e45e6904d4-kube-api-access-nph82\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.448255 4754 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b3003ea-ab70-4762-b50d-bf15c06ea9a7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.576496 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-zpltv" event={"ID":"745e7c45-2a3b-4c75-b2a9-a4a1c38ec2fd","Type":"ContainerDied","Data":"80990babaf30d6c9887cf31bd279164dcf6bfa0d97fa45fca066806cca22551e"} Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.576537 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80990babaf30d6c9887cf31bd279164dcf6bfa0d97fa45fca066806cca22551e" Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.576605 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-zpltv" Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.587156 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c","Type":"ContainerStarted","Data":"a0f13466d14351031aa18b6e6fc8c4a409d534f5e6bb85da22136fe8cf61e29f"} Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.589639 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-fb57-account-create-update-kjlds" event={"ID":"177b09ff-4291-4ef0-93f7-66a03d8c0fe2","Type":"ContainerDied","Data":"d2e16da2eede30ec964718def778475dde7ec04e3c815262708e36163ca791d5"} Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.589683 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2e16da2eede30ec964718def778475dde7ec04e3c815262708e36163ca791d5" Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.589688 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-fb57-account-create-update-kjlds" Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.601135 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-pdnxz" Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.601156 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-pdnxz" event={"ID":"8873df75-8512-43e0-8d94-d4e45e6904d4","Type":"ContainerDied","Data":"393ad1a253b235d7dfdc6ae93a5eed961e4d8953efc2fba69d93d3c938ac9acb"} Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.601536 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="393ad1a253b235d7dfdc6ae93a5eed961e4d8953efc2fba69d93d3c938ac9acb" Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.602824 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-s4cdh" Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.602833 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-s4cdh" event={"ID":"58066c5a-8387-449d-9004-f2e6a1d37e53","Type":"ContainerDied","Data":"de7d0d16e756640c459696c9308cc506233663ce550297519b67fc27022dd2c7"} Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.602865 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de7d0d16e756640c459696c9308cc506233663ce550297519b67fc27022dd2c7" Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.605168 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-2f6c-account-create-update-krtph" Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.605176 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2f6c-account-create-update-krtph" event={"ID":"cf5c2131-a9a1-4345-8ae8-069fba0812a5","Type":"ContainerDied","Data":"6e03853d6ffd41c0c210317f77b4356d63140ff7873995466fc4e4db58782a88"} Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.605229 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e03853d6ffd41c0c210317f77b4356d63140ff7873995466fc4e4db58782a88" Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.607400 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7de3-account-create-update-pxbnf" event={"ID":"9ea55880-d313-447e-a193-e69f30a74375","Type":"ContainerDied","Data":"6c4b1e84aacb438342d711e3e53c38a4200c83631bad9f6e6b064d4557e0b071"} Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.607452 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c4b1e84aacb438342d711e3e53c38a4200c83631bad9f6e6b064d4557e0b071" Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.607544 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-7de3-account-create-update-pxbnf" Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.614615 4754 generic.go:334] "Generic (PLEG): container finished" podID="2dedf888-afb9-42e6-80fa-3135a67787db" containerID="32e1ec7368e540644c53a45267e7c41a680d0a97103b6a339e9968c4b7217945" exitCode=0 Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.614805 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-l252x" event={"ID":"2dedf888-afb9-42e6-80fa-3135a67787db","Type":"ContainerDied","Data":"32e1ec7368e540644c53a45267e7c41a680d0a97103b6a339e9968c4b7217945"} Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.619072 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=23.281982426 podStartE2EDuration="1m10.619056347s" podCreationTimestamp="2026-01-05 20:26:02 +0000 UTC" firstStartedPulling="2026-01-05 20:26:24.65797818 +0000 UTC m=+1271.367162054" lastFinishedPulling="2026-01-05 20:27:11.995052091 +0000 UTC m=+1318.704235975" observedRunningTime="2026-01-05 20:27:12.615112663 +0000 UTC m=+1319.324296547" watchObservedRunningTime="2026-01-05 20:27:12.619056347 +0000 UTC m=+1319.328240221" Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.622769 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6412-account-create-update-jpj4f" event={"ID":"3b3003ea-ab70-4762-b50d-bf15c06ea9a7","Type":"ContainerDied","Data":"55ee05fa5fd19a4ab02d58200ded5f8b9ee705fc05c465f3234fa95cc26c9c30"} Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.622821 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55ee05fa5fd19a4ab02d58200ded5f8b9ee705fc05c465f3234fa95cc26c9c30" Jan 05 20:27:12 crc kubenswrapper[4754]: I0105 20:27:12.622902 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6412-account-create-update-jpj4f" Jan 05 20:27:13 crc kubenswrapper[4754]: I0105 20:27:13.663024 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-b8n94"] Jan 05 20:27:13 crc kubenswrapper[4754]: I0105 20:27:13.670736 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-b8n94"] Jan 05 20:27:13 crc kubenswrapper[4754]: I0105 20:27:13.847053 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 05 20:27:15 crc kubenswrapper[4754]: I0105 20:27:14.048686 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-l252x" Jan 05 20:27:15 crc kubenswrapper[4754]: I0105 20:27:14.083383 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2dedf888-afb9-42e6-80fa-3135a67787db-etc-swift\") pod \"2dedf888-afb9-42e6-80fa-3135a67787db\" (UID: \"2dedf888-afb9-42e6-80fa-3135a67787db\") " Jan 05 20:27:15 crc kubenswrapper[4754]: I0105 20:27:14.083453 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8phtc\" (UniqueName: \"kubernetes.io/projected/2dedf888-afb9-42e6-80fa-3135a67787db-kube-api-access-8phtc\") pod \"2dedf888-afb9-42e6-80fa-3135a67787db\" (UID: \"2dedf888-afb9-42e6-80fa-3135a67787db\") " Jan 05 20:27:15 crc kubenswrapper[4754]: I0105 20:27:14.083498 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2dedf888-afb9-42e6-80fa-3135a67787db-ring-data-devices\") pod \"2dedf888-afb9-42e6-80fa-3135a67787db\" (UID: \"2dedf888-afb9-42e6-80fa-3135a67787db\") " Jan 05 20:27:15 crc kubenswrapper[4754]: I0105 20:27:14.083779 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2dedf888-afb9-42e6-80fa-3135a67787db-swiftconf\") pod \"2dedf888-afb9-42e6-80fa-3135a67787db\" (UID: \"2dedf888-afb9-42e6-80fa-3135a67787db\") " Jan 05 20:27:15 crc kubenswrapper[4754]: I0105 20:27:14.083810 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dedf888-afb9-42e6-80fa-3135a67787db-combined-ca-bundle\") pod \"2dedf888-afb9-42e6-80fa-3135a67787db\" (UID: \"2dedf888-afb9-42e6-80fa-3135a67787db\") " Jan 05 20:27:15 crc kubenswrapper[4754]: I0105 20:27:14.083846 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2dedf888-afb9-42e6-80fa-3135a67787db-scripts\") pod \"2dedf888-afb9-42e6-80fa-3135a67787db\" (UID: \"2dedf888-afb9-42e6-80fa-3135a67787db\") " Jan 05 20:27:15 crc kubenswrapper[4754]: I0105 20:27:14.083866 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2dedf888-afb9-42e6-80fa-3135a67787db-dispersionconf\") pod \"2dedf888-afb9-42e6-80fa-3135a67787db\" (UID: \"2dedf888-afb9-42e6-80fa-3135a67787db\") " Jan 05 20:27:15 crc kubenswrapper[4754]: I0105 20:27:14.084273 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2dedf888-afb9-42e6-80fa-3135a67787db-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "2dedf888-afb9-42e6-80fa-3135a67787db" (UID: "2dedf888-afb9-42e6-80fa-3135a67787db"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:27:15 crc kubenswrapper[4754]: I0105 20:27:14.084379 4754 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2dedf888-afb9-42e6-80fa-3135a67787db-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:15 crc kubenswrapper[4754]: I0105 20:27:14.084782 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dedf888-afb9-42e6-80fa-3135a67787db-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "2dedf888-afb9-42e6-80fa-3135a67787db" (UID: "2dedf888-afb9-42e6-80fa-3135a67787db"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:27:15 crc kubenswrapper[4754]: I0105 20:27:14.103720 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dedf888-afb9-42e6-80fa-3135a67787db-kube-api-access-8phtc" (OuterVolumeSpecName: "kube-api-access-8phtc") pod "2dedf888-afb9-42e6-80fa-3135a67787db" (UID: "2dedf888-afb9-42e6-80fa-3135a67787db"). InnerVolumeSpecName "kube-api-access-8phtc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:27:15 crc kubenswrapper[4754]: I0105 20:27:14.109246 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dedf888-afb9-42e6-80fa-3135a67787db-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "2dedf888-afb9-42e6-80fa-3135a67787db" (UID: "2dedf888-afb9-42e6-80fa-3135a67787db"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:27:15 crc kubenswrapper[4754]: I0105 20:27:14.119821 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dedf888-afb9-42e6-80fa-3135a67787db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2dedf888-afb9-42e6-80fa-3135a67787db" (UID: "2dedf888-afb9-42e6-80fa-3135a67787db"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:27:15 crc kubenswrapper[4754]: I0105 20:27:14.122922 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dedf888-afb9-42e6-80fa-3135a67787db-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "2dedf888-afb9-42e6-80fa-3135a67787db" (UID: "2dedf888-afb9-42e6-80fa-3135a67787db"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:27:15 crc kubenswrapper[4754]: I0105 20:27:14.125664 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dedf888-afb9-42e6-80fa-3135a67787db-scripts" (OuterVolumeSpecName: "scripts") pod "2dedf888-afb9-42e6-80fa-3135a67787db" (UID: "2dedf888-afb9-42e6-80fa-3135a67787db"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:27:15 crc kubenswrapper[4754]: I0105 20:27:14.186654 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2dedf888-afb9-42e6-80fa-3135a67787db-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:15 crc kubenswrapper[4754]: I0105 20:27:14.186681 4754 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2dedf888-afb9-42e6-80fa-3135a67787db-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:15 crc kubenswrapper[4754]: I0105 20:27:14.186693 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8phtc\" (UniqueName: \"kubernetes.io/projected/2dedf888-afb9-42e6-80fa-3135a67787db-kube-api-access-8phtc\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:15 crc kubenswrapper[4754]: I0105 20:27:14.186703 4754 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2dedf888-afb9-42e6-80fa-3135a67787db-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:15 crc kubenswrapper[4754]: I0105 20:27:14.186714 4754 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2dedf888-afb9-42e6-80fa-3135a67787db-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:15 crc kubenswrapper[4754]: I0105 20:27:14.186722 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dedf888-afb9-42e6-80fa-3135a67787db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:15 crc kubenswrapper[4754]: I0105 20:27:14.667396 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-l252x" Jan 05 20:27:15 crc kubenswrapper[4754]: I0105 20:27:14.671555 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-l252x" event={"ID":"2dedf888-afb9-42e6-80fa-3135a67787db","Type":"ContainerDied","Data":"7a8a4fccddcd6e2849d174689815cc7ad18592080d81f103310beb0e038c5e6d"} Jan 05 20:27:15 crc kubenswrapper[4754]: I0105 20:27:14.671600 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a8a4fccddcd6e2849d174689815cc7ad18592080d81f103310beb0e038c5e6d" Jan 05 20:27:15 crc kubenswrapper[4754]: I0105 20:27:15.412704 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/64fe8f1d-0c69-4fc8-aac8-c17660c2fed5-etc-swift\") pod \"swift-storage-0\" (UID: \"64fe8f1d-0c69-4fc8-aac8-c17660c2fed5\") " pod="openstack/swift-storage-0" Jan 05 20:27:15 crc kubenswrapper[4754]: I0105 20:27:15.422446 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/64fe8f1d-0c69-4fc8-aac8-c17660c2fed5-etc-swift\") pod \"swift-storage-0\" (UID: \"64fe8f1d-0c69-4fc8-aac8-c17660c2fed5\") " pod="openstack/swift-storage-0" Jan 05 20:27:15 crc kubenswrapper[4754]: I0105 20:27:15.532317 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 05 20:27:15 crc kubenswrapper[4754]: I0105 20:27:15.611066 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bd78c81-6fde-4c20-9771-cf86313b8f18" path="/var/lib/kubelet/pods/0bd78c81-6fde-4c20-9771-cf86313b8f18/volumes" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.073451 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-j8qkw"] Jan 05 20:27:16 crc kubenswrapper[4754]: E0105 20:27:16.074458 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74b103ef-1d58-4bb5-80b5-8314ec1df2bc" containerName="console" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.074491 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="74b103ef-1d58-4bb5-80b5-8314ec1df2bc" containerName="console" Jan 05 20:27:16 crc kubenswrapper[4754]: E0105 20:27:16.074510 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="745e7c45-2a3b-4c75-b2a9-a4a1c38ec2fd" containerName="mariadb-database-create" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.074521 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="745e7c45-2a3b-4c75-b2a9-a4a1c38ec2fd" containerName="mariadb-database-create" Jan 05 20:27:16 crc kubenswrapper[4754]: E0105 20:27:16.074539 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58066c5a-8387-449d-9004-f2e6a1d37e53" containerName="mariadb-database-create" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.074551 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="58066c5a-8387-449d-9004-f2e6a1d37e53" containerName="mariadb-database-create" Jan 05 20:27:16 crc kubenswrapper[4754]: E0105 20:27:16.074574 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9ad3a07-4d41-42c2-afcf-568e6c3b83e2" containerName="extract-utilities" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.074585 4754 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a9ad3a07-4d41-42c2-afcf-568e6c3b83e2" containerName="extract-utilities" Jan 05 20:27:16 crc kubenswrapper[4754]: E0105 20:27:16.074611 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf5c2131-a9a1-4345-8ae8-069fba0812a5" containerName="mariadb-account-create-update" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.074622 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf5c2131-a9a1-4345-8ae8-069fba0812a5" containerName="mariadb-account-create-update" Jan 05 20:27:16 crc kubenswrapper[4754]: E0105 20:27:16.074644 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b3003ea-ab70-4762-b50d-bf15c06ea9a7" containerName="mariadb-account-create-update" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.074656 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b3003ea-ab70-4762-b50d-bf15c06ea9a7" containerName="mariadb-account-create-update" Jan 05 20:27:16 crc kubenswrapper[4754]: E0105 20:27:16.074679 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dedf888-afb9-42e6-80fa-3135a67787db" containerName="swift-ring-rebalance" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.074692 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dedf888-afb9-42e6-80fa-3135a67787db" containerName="swift-ring-rebalance" Jan 05 20:27:16 crc kubenswrapper[4754]: E0105 20:27:16.074705 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9ad3a07-4d41-42c2-afcf-568e6c3b83e2" containerName="extract-content" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.074715 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9ad3a07-4d41-42c2-afcf-568e6c3b83e2" containerName="extract-content" Jan 05 20:27:16 crc kubenswrapper[4754]: E0105 20:27:16.074734 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9ad3a07-4d41-42c2-afcf-568e6c3b83e2" containerName="registry-server" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 
20:27:16.074747 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9ad3a07-4d41-42c2-afcf-568e6c3b83e2" containerName="registry-server" Jan 05 20:27:16 crc kubenswrapper[4754]: E0105 20:27:16.074775 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bd78c81-6fde-4c20-9771-cf86313b8f18" containerName="mariadb-account-create-update" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.074786 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bd78c81-6fde-4c20-9771-cf86313b8f18" containerName="mariadb-account-create-update" Jan 05 20:27:16 crc kubenswrapper[4754]: E0105 20:27:16.074803 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d70ef99-cdf9-474e-8e77-d75ce8d7a7ed" containerName="mariadb-database-create" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.074814 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d70ef99-cdf9-474e-8e77-d75ce8d7a7ed" containerName="mariadb-database-create" Jan 05 20:27:16 crc kubenswrapper[4754]: E0105 20:27:16.074829 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8873df75-8512-43e0-8d94-d4e45e6904d4" containerName="mariadb-database-create" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.074840 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="8873df75-8512-43e0-8d94-d4e45e6904d4" containerName="mariadb-database-create" Jan 05 20:27:16 crc kubenswrapper[4754]: E0105 20:27:16.074864 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ea55880-d313-447e-a193-e69f30a74375" containerName="mariadb-account-create-update" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.074872 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ea55880-d313-447e-a193-e69f30a74375" containerName="mariadb-account-create-update" Jan 05 20:27:16 crc kubenswrapper[4754]: E0105 20:27:16.074883 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="177b09ff-4291-4ef0-93f7-66a03d8c0fe2" 
containerName="mariadb-account-create-update" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.074892 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="177b09ff-4291-4ef0-93f7-66a03d8c0fe2" containerName="mariadb-account-create-update" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.075142 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="74b103ef-1d58-4bb5-80b5-8314ec1df2bc" containerName="console" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.075170 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dedf888-afb9-42e6-80fa-3135a67787db" containerName="swift-ring-rebalance" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.075192 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d70ef99-cdf9-474e-8e77-d75ce8d7a7ed" containerName="mariadb-database-create" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.075208 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="8873df75-8512-43e0-8d94-d4e45e6904d4" containerName="mariadb-database-create" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.075220 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9ad3a07-4d41-42c2-afcf-568e6c3b83e2" containerName="registry-server" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.075232 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="177b09ff-4291-4ef0-93f7-66a03d8c0fe2" containerName="mariadb-account-create-update" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.075245 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="745e7c45-2a3b-4c75-b2a9-a4a1c38ec2fd" containerName="mariadb-database-create" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.075258 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf5c2131-a9a1-4345-8ae8-069fba0812a5" containerName="mariadb-account-create-update" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.075275 4754 
memory_manager.go:354] "RemoveStaleState removing state" podUID="0bd78c81-6fde-4c20-9771-cf86313b8f18" containerName="mariadb-account-create-update" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.075285 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ea55880-d313-447e-a193-e69f30a74375" containerName="mariadb-account-create-update" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.075433 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="58066c5a-8387-449d-9004-f2e6a1d37e53" containerName="mariadb-database-create" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.075447 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b3003ea-ab70-4762-b50d-bf15c06ea9a7" containerName="mariadb-account-create-update" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.076595 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-j8qkw" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.087005 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.087013 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-vjz5d" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.090029 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-j8qkw"] Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.127283 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/08106021-9f77-4a54-8ec2-de2bfe4db63c-db-sync-config-data\") pod \"glance-db-sync-j8qkw\" (UID: \"08106021-9f77-4a54-8ec2-de2bfe4db63c\") " pod="openstack/glance-db-sync-j8qkw" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.127340 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08106021-9f77-4a54-8ec2-de2bfe4db63c-combined-ca-bundle\") pod \"glance-db-sync-j8qkw\" (UID: \"08106021-9f77-4a54-8ec2-de2bfe4db63c\") " pod="openstack/glance-db-sync-j8qkw" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.127711 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08106021-9f77-4a54-8ec2-de2bfe4db63c-config-data\") pod \"glance-db-sync-j8qkw\" (UID: \"08106021-9f77-4a54-8ec2-de2bfe4db63c\") " pod="openstack/glance-db-sync-j8qkw" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.127862 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlcsz\" (UniqueName: \"kubernetes.io/projected/08106021-9f77-4a54-8ec2-de2bfe4db63c-kube-api-access-qlcsz\") pod \"glance-db-sync-j8qkw\" (UID: \"08106021-9f77-4a54-8ec2-de2bfe4db63c\") " pod="openstack/glance-db-sync-j8qkw" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.230027 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08106021-9f77-4a54-8ec2-de2bfe4db63c-config-data\") pod \"glance-db-sync-j8qkw\" (UID: \"08106021-9f77-4a54-8ec2-de2bfe4db63c\") " pod="openstack/glance-db-sync-j8qkw" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.230119 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlcsz\" (UniqueName: \"kubernetes.io/projected/08106021-9f77-4a54-8ec2-de2bfe4db63c-kube-api-access-qlcsz\") pod \"glance-db-sync-j8qkw\" (UID: \"08106021-9f77-4a54-8ec2-de2bfe4db63c\") " pod="openstack/glance-db-sync-j8qkw" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.230156 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08106021-9f77-4a54-8ec2-de2bfe4db63c-combined-ca-bundle\") pod \"glance-db-sync-j8qkw\" (UID: \"08106021-9f77-4a54-8ec2-de2bfe4db63c\") " pod="openstack/glance-db-sync-j8qkw" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.230180 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/08106021-9f77-4a54-8ec2-de2bfe4db63c-db-sync-config-data\") pod \"glance-db-sync-j8qkw\" (UID: \"08106021-9f77-4a54-8ec2-de2bfe4db63c\") " pod="openstack/glance-db-sync-j8qkw" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.237586 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/08106021-9f77-4a54-8ec2-de2bfe4db63c-db-sync-config-data\") pod \"glance-db-sync-j8qkw\" (UID: \"08106021-9f77-4a54-8ec2-de2bfe4db63c\") " pod="openstack/glance-db-sync-j8qkw" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.248226 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.250810 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08106021-9f77-4a54-8ec2-de2bfe4db63c-combined-ca-bundle\") pod \"glance-db-sync-j8qkw\" (UID: \"08106021-9f77-4a54-8ec2-de2bfe4db63c\") " pod="openstack/glance-db-sync-j8qkw" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.251038 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08106021-9f77-4a54-8ec2-de2bfe4db63c-config-data\") pod \"glance-db-sync-j8qkw\" (UID: \"08106021-9f77-4a54-8ec2-de2bfe4db63c\") " pod="openstack/glance-db-sync-j8qkw" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.253000 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-qlcsz\" (UniqueName: \"kubernetes.io/projected/08106021-9f77-4a54-8ec2-de2bfe4db63c-kube-api-access-qlcsz\") pod \"glance-db-sync-j8qkw\" (UID: \"08106021-9f77-4a54-8ec2-de2bfe4db63c\") " pod="openstack/glance-db-sync-j8qkw" Jan 05 20:27:16 crc kubenswrapper[4754]: W0105 20:27:16.254368 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64fe8f1d_0c69_4fc8_aac8_c17660c2fed5.slice/crio-d7bf8fe6721fd58dcde7a9d2ed1a5199fd63b63685b0f8e910a59aa125783423 WatchSource:0}: Error finding container d7bf8fe6721fd58dcde7a9d2ed1a5199fd63b63685b0f8e910a59aa125783423: Status 404 returned error can't find the container with id d7bf8fe6721fd58dcde7a9d2ed1a5199fd63b63685b0f8e910a59aa125783423 Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.256764 4754 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.411169 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-j8qkw" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.458213 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-pbz4n" podUID="5212dab3-1e5c-48b6-a710-f3551ab2ceaf" containerName="ovn-controller" probeResult="failure" output=< Jan 05 20:27:16 crc kubenswrapper[4754]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 05 20:27:16 crc kubenswrapper[4754]: > Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.472091 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-psnm7" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.493364 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-psnm7" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.703717 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"64fe8f1d-0c69-4fc8-aac8-c17660c2fed5","Type":"ContainerStarted","Data":"d7bf8fe6721fd58dcde7a9d2ed1a5199fd63b63685b0f8e910a59aa125783423"} Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.706691 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-pbz4n-config-5c9s9"] Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.710430 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-pbz4n-config-5c9s9" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.714142 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.720125 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-pbz4n-config-5c9s9"] Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.744551 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1965f8cf-449e-4d30-add2-efc666b89ea3-var-run-ovn\") pod \"ovn-controller-pbz4n-config-5c9s9\" (UID: \"1965f8cf-449e-4d30-add2-efc666b89ea3\") " pod="openstack/ovn-controller-pbz4n-config-5c9s9" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.744628 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj7jr\" (UniqueName: \"kubernetes.io/projected/1965f8cf-449e-4d30-add2-efc666b89ea3-kube-api-access-qj7jr\") pod \"ovn-controller-pbz4n-config-5c9s9\" (UID: \"1965f8cf-449e-4d30-add2-efc666b89ea3\") " pod="openstack/ovn-controller-pbz4n-config-5c9s9" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.744668 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1965f8cf-449e-4d30-add2-efc666b89ea3-var-run\") pod \"ovn-controller-pbz4n-config-5c9s9\" (UID: \"1965f8cf-449e-4d30-add2-efc666b89ea3\") " pod="openstack/ovn-controller-pbz4n-config-5c9s9" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.744747 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1965f8cf-449e-4d30-add2-efc666b89ea3-scripts\") pod \"ovn-controller-pbz4n-config-5c9s9\" (UID: 
\"1965f8cf-449e-4d30-add2-efc666b89ea3\") " pod="openstack/ovn-controller-pbz4n-config-5c9s9" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.744824 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1965f8cf-449e-4d30-add2-efc666b89ea3-additional-scripts\") pod \"ovn-controller-pbz4n-config-5c9s9\" (UID: \"1965f8cf-449e-4d30-add2-efc666b89ea3\") " pod="openstack/ovn-controller-pbz4n-config-5c9s9" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.744856 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1965f8cf-449e-4d30-add2-efc666b89ea3-var-log-ovn\") pod \"ovn-controller-pbz4n-config-5c9s9\" (UID: \"1965f8cf-449e-4d30-add2-efc666b89ea3\") " pod="openstack/ovn-controller-pbz4n-config-5c9s9" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.845846 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1965f8cf-449e-4d30-add2-efc666b89ea3-scripts\") pod \"ovn-controller-pbz4n-config-5c9s9\" (UID: \"1965f8cf-449e-4d30-add2-efc666b89ea3\") " pod="openstack/ovn-controller-pbz4n-config-5c9s9" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.845954 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1965f8cf-449e-4d30-add2-efc666b89ea3-additional-scripts\") pod \"ovn-controller-pbz4n-config-5c9s9\" (UID: \"1965f8cf-449e-4d30-add2-efc666b89ea3\") " pod="openstack/ovn-controller-pbz4n-config-5c9s9" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.845982 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1965f8cf-449e-4d30-add2-efc666b89ea3-var-log-ovn\") pod 
\"ovn-controller-pbz4n-config-5c9s9\" (UID: \"1965f8cf-449e-4d30-add2-efc666b89ea3\") " pod="openstack/ovn-controller-pbz4n-config-5c9s9" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.846101 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1965f8cf-449e-4d30-add2-efc666b89ea3-var-run-ovn\") pod \"ovn-controller-pbz4n-config-5c9s9\" (UID: \"1965f8cf-449e-4d30-add2-efc666b89ea3\") " pod="openstack/ovn-controller-pbz4n-config-5c9s9" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.846149 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qj7jr\" (UniqueName: \"kubernetes.io/projected/1965f8cf-449e-4d30-add2-efc666b89ea3-kube-api-access-qj7jr\") pod \"ovn-controller-pbz4n-config-5c9s9\" (UID: \"1965f8cf-449e-4d30-add2-efc666b89ea3\") " pod="openstack/ovn-controller-pbz4n-config-5c9s9" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.846185 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1965f8cf-449e-4d30-add2-efc666b89ea3-var-run\") pod \"ovn-controller-pbz4n-config-5c9s9\" (UID: \"1965f8cf-449e-4d30-add2-efc666b89ea3\") " pod="openstack/ovn-controller-pbz4n-config-5c9s9" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.846708 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1965f8cf-449e-4d30-add2-efc666b89ea3-var-log-ovn\") pod \"ovn-controller-pbz4n-config-5c9s9\" (UID: \"1965f8cf-449e-4d30-add2-efc666b89ea3\") " pod="openstack/ovn-controller-pbz4n-config-5c9s9" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.846717 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1965f8cf-449e-4d30-add2-efc666b89ea3-var-run-ovn\") pod \"ovn-controller-pbz4n-config-5c9s9\" 
(UID: \"1965f8cf-449e-4d30-add2-efc666b89ea3\") " pod="openstack/ovn-controller-pbz4n-config-5c9s9" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.846737 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1965f8cf-449e-4d30-add2-efc666b89ea3-var-run\") pod \"ovn-controller-pbz4n-config-5c9s9\" (UID: \"1965f8cf-449e-4d30-add2-efc666b89ea3\") " pod="openstack/ovn-controller-pbz4n-config-5c9s9" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.847117 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1965f8cf-449e-4d30-add2-efc666b89ea3-additional-scripts\") pod \"ovn-controller-pbz4n-config-5c9s9\" (UID: \"1965f8cf-449e-4d30-add2-efc666b89ea3\") " pod="openstack/ovn-controller-pbz4n-config-5c9s9" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.848065 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1965f8cf-449e-4d30-add2-efc666b89ea3-scripts\") pod \"ovn-controller-pbz4n-config-5c9s9\" (UID: \"1965f8cf-449e-4d30-add2-efc666b89ea3\") " pod="openstack/ovn-controller-pbz4n-config-5c9s9" Jan 05 20:27:16 crc kubenswrapper[4754]: I0105 20:27:16.863053 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qj7jr\" (UniqueName: \"kubernetes.io/projected/1965f8cf-449e-4d30-add2-efc666b89ea3-kube-api-access-qj7jr\") pod \"ovn-controller-pbz4n-config-5c9s9\" (UID: \"1965f8cf-449e-4d30-add2-efc666b89ea3\") " pod="openstack/ovn-controller-pbz4n-config-5c9s9" Jan 05 20:27:17 crc kubenswrapper[4754]: I0105 20:27:17.019003 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-j8qkw"] Jan 05 20:27:17 crc kubenswrapper[4754]: I0105 20:27:17.037240 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-pbz4n-config-5c9s9" Jan 05 20:27:17 crc kubenswrapper[4754]: I0105 20:27:17.518154 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="909351bf-3608-40e6-9f93-bffa1ed74945" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.127:5671: connect: connection refused" Jan 05 20:27:17 crc kubenswrapper[4754]: I0105 20:27:17.529662 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="79eddb76-2d9c-40cc-97e7-6c186950168c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.128:5671: connect: connection refused" Jan 05 20:27:17 crc kubenswrapper[4754]: I0105 20:27:17.540475 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="efbb8662-26f9-44ac-aba4-ecc1fc6a4d58" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused" Jan 05 20:27:17 crc kubenswrapper[4754]: I0105 20:27:17.569905 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-pbz4n-config-5c9s9"] Jan 05 20:27:17 crc kubenswrapper[4754]: I0105 20:27:17.609466 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:27:17 crc kubenswrapper[4754]: I0105 20:27:17.738160 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pbz4n-config-5c9s9" event={"ID":"1965f8cf-449e-4d30-add2-efc666b89ea3","Type":"ContainerStarted","Data":"029b1d15919b0a69cbf427465410ad2d871d53138c939e2aa728ab3eebc6b090"} Jan 05 20:27:17 crc kubenswrapper[4754]: I0105 20:27:17.744024 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-j8qkw" event={"ID":"08106021-9f77-4a54-8ec2-de2bfe4db63c","Type":"ContainerStarted","Data":"ee7d7a09109fb0078469e99f107e96f463af960c2f77d5d8d12f03d30094e606"} Jan 05 20:27:17 
crc kubenswrapper[4754]: I0105 20:27:17.932282 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Jan 05 20:27:17 crc kubenswrapper[4754]: I0105 20:27:17.933914 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 05 20:27:17 crc kubenswrapper[4754]: I0105 20:27:17.939181 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Jan 05 20:27:17 crc kubenswrapper[4754]: I0105 20:27:17.946495 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 05 20:27:18 crc kubenswrapper[4754]: I0105 20:27:18.079390 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rlsc\" (UniqueName: \"kubernetes.io/projected/3eb89248-973b-47ce-b504-b9523a605a4a-kube-api-access-8rlsc\") pod \"mysqld-exporter-0\" (UID: \"3eb89248-973b-47ce-b504-b9523a605a4a\") " pod="openstack/mysqld-exporter-0" Jan 05 20:27:18 crc kubenswrapper[4754]: I0105 20:27:18.079441 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eb89248-973b-47ce-b504-b9523a605a4a-config-data\") pod \"mysqld-exporter-0\" (UID: \"3eb89248-973b-47ce-b504-b9523a605a4a\") " pod="openstack/mysqld-exporter-0" Jan 05 20:27:18 crc kubenswrapper[4754]: I0105 20:27:18.079704 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eb89248-973b-47ce-b504-b9523a605a4a-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"3eb89248-973b-47ce-b504-b9523a605a4a\") " pod="openstack/mysqld-exporter-0" Jan 05 20:27:18 crc kubenswrapper[4754]: I0105 20:27:18.109277 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 20:27:18 crc kubenswrapper[4754]: I0105 20:27:18.109339 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 20:27:18 crc kubenswrapper[4754]: I0105 20:27:18.181461 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eb89248-973b-47ce-b504-b9523a605a4a-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"3eb89248-973b-47ce-b504-b9523a605a4a\") " pod="openstack/mysqld-exporter-0" Jan 05 20:27:18 crc kubenswrapper[4754]: I0105 20:27:18.181585 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rlsc\" (UniqueName: \"kubernetes.io/projected/3eb89248-973b-47ce-b504-b9523a605a4a-kube-api-access-8rlsc\") pod \"mysqld-exporter-0\" (UID: \"3eb89248-973b-47ce-b504-b9523a605a4a\") " pod="openstack/mysqld-exporter-0" Jan 05 20:27:18 crc kubenswrapper[4754]: I0105 20:27:18.181616 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eb89248-973b-47ce-b504-b9523a605a4a-config-data\") pod \"mysqld-exporter-0\" (UID: \"3eb89248-973b-47ce-b504-b9523a605a4a\") " pod="openstack/mysqld-exporter-0" Jan 05 20:27:18 crc kubenswrapper[4754]: I0105 20:27:18.187229 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eb89248-973b-47ce-b504-b9523a605a4a-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: 
\"3eb89248-973b-47ce-b504-b9523a605a4a\") " pod="openstack/mysqld-exporter-0" Jan 05 20:27:18 crc kubenswrapper[4754]: I0105 20:27:18.205777 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eb89248-973b-47ce-b504-b9523a605a4a-config-data\") pod \"mysqld-exporter-0\" (UID: \"3eb89248-973b-47ce-b504-b9523a605a4a\") " pod="openstack/mysqld-exporter-0" Jan 05 20:27:18 crc kubenswrapper[4754]: I0105 20:27:18.205966 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rlsc\" (UniqueName: \"kubernetes.io/projected/3eb89248-973b-47ce-b504-b9523a605a4a-kube-api-access-8rlsc\") pod \"mysqld-exporter-0\" (UID: \"3eb89248-973b-47ce-b504-b9523a605a4a\") " pod="openstack/mysqld-exporter-0" Jan 05 20:27:18 crc kubenswrapper[4754]: I0105 20:27:18.256106 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 05 20:27:18 crc kubenswrapper[4754]: I0105 20:27:18.669516 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-lq6gx"] Jan 05 20:27:18 crc kubenswrapper[4754]: I0105 20:27:18.672499 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-lq6gx" Jan 05 20:27:18 crc kubenswrapper[4754]: I0105 20:27:18.678142 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 05 20:27:18 crc kubenswrapper[4754]: I0105 20:27:18.689581 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-lq6gx"] Jan 05 20:27:18 crc kubenswrapper[4754]: I0105 20:27:18.775713 4754 generic.go:334] "Generic (PLEG): container finished" podID="1965f8cf-449e-4d30-add2-efc666b89ea3" containerID="a87dc4f9e8f81b266f0482049a791b59502669a8ce1385fb1716e709e1265e7f" exitCode=0 Jan 05 20:27:18 crc kubenswrapper[4754]: I0105 20:27:18.775988 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pbz4n-config-5c9s9" event={"ID":"1965f8cf-449e-4d30-add2-efc666b89ea3","Type":"ContainerDied","Data":"a87dc4f9e8f81b266f0482049a791b59502669a8ce1385fb1716e709e1265e7f"} Jan 05 20:27:18 crc kubenswrapper[4754]: I0105 20:27:18.779914 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"64fe8f1d-0c69-4fc8-aac8-c17660c2fed5","Type":"ContainerStarted","Data":"25681b94680be89c735c307ccdb78b3675c3ffa6070be97af0cc3d8975e49e62"} Jan 05 20:27:18 crc kubenswrapper[4754]: I0105 20:27:18.779962 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"64fe8f1d-0c69-4fc8-aac8-c17660c2fed5","Type":"ContainerStarted","Data":"2a82e5a4abbf914a94374218c003c75a2298f05c517f13bd92a60fd74d9300df"} Jan 05 20:27:18 crc kubenswrapper[4754]: I0105 20:27:18.779973 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"64fe8f1d-0c69-4fc8-aac8-c17660c2fed5","Type":"ContainerStarted","Data":"3c59f9f14ee891039f73a101cc6e76cdcf7ba422a78fff88dff967fc64bf9032"} Jan 05 20:27:18 crc kubenswrapper[4754]: I0105 20:27:18.814420 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fe79418-5702-4809-a084-4a14fa936263-operator-scripts\") pod \"root-account-create-update-lq6gx\" (UID: \"6fe79418-5702-4809-a084-4a14fa936263\") " pod="openstack/root-account-create-update-lq6gx" Jan 05 20:27:18 crc kubenswrapper[4754]: I0105 20:27:18.814478 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7k2f\" (UniqueName: \"kubernetes.io/projected/6fe79418-5702-4809-a084-4a14fa936263-kube-api-access-n7k2f\") pod \"root-account-create-update-lq6gx\" (UID: \"6fe79418-5702-4809-a084-4a14fa936263\") " pod="openstack/root-account-create-update-lq6gx" Jan 05 20:27:18 crc kubenswrapper[4754]: I0105 20:27:18.819421 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 05 20:27:18 crc kubenswrapper[4754]: I0105 20:27:18.847419 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 05 20:27:18 crc kubenswrapper[4754]: I0105 20:27:18.850022 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 05 20:27:18 crc kubenswrapper[4754]: I0105 20:27:18.916953 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fe79418-5702-4809-a084-4a14fa936263-operator-scripts\") pod \"root-account-create-update-lq6gx\" (UID: \"6fe79418-5702-4809-a084-4a14fa936263\") " pod="openstack/root-account-create-update-lq6gx" Jan 05 20:27:18 crc kubenswrapper[4754]: I0105 20:27:18.917044 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7k2f\" (UniqueName: \"kubernetes.io/projected/6fe79418-5702-4809-a084-4a14fa936263-kube-api-access-n7k2f\") pod \"root-account-create-update-lq6gx\" (UID: 
\"6fe79418-5702-4809-a084-4a14fa936263\") " pod="openstack/root-account-create-update-lq6gx" Jan 05 20:27:18 crc kubenswrapper[4754]: I0105 20:27:18.924599 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fe79418-5702-4809-a084-4a14fa936263-operator-scripts\") pod \"root-account-create-update-lq6gx\" (UID: \"6fe79418-5702-4809-a084-4a14fa936263\") " pod="openstack/root-account-create-update-lq6gx" Jan 05 20:27:18 crc kubenswrapper[4754]: I0105 20:27:18.937251 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7k2f\" (UniqueName: \"kubernetes.io/projected/6fe79418-5702-4809-a084-4a14fa936263-kube-api-access-n7k2f\") pod \"root-account-create-update-lq6gx\" (UID: \"6fe79418-5702-4809-a084-4a14fa936263\") " pod="openstack/root-account-create-update-lq6gx" Jan 05 20:27:19 crc kubenswrapper[4754]: I0105 20:27:19.003355 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-lq6gx" Jan 05 20:27:19 crc kubenswrapper[4754]: W0105 20:27:19.605040 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fe79418_5702_4809_a084_4a14fa936263.slice/crio-c74ae9f35946c0354d259b8609426b2feca907adf91c8f3914ac03361269a456 WatchSource:0}: Error finding container c74ae9f35946c0354d259b8609426b2feca907adf91c8f3914ac03361269a456: Status 404 returned error can't find the container with id c74ae9f35946c0354d259b8609426b2feca907adf91c8f3914ac03361269a456 Jan 05 20:27:19 crc kubenswrapper[4754]: I0105 20:27:19.619428 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-lq6gx"] Jan 05 20:27:19 crc kubenswrapper[4754]: I0105 20:27:19.791566 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lq6gx" event={"ID":"6fe79418-5702-4809-a084-4a14fa936263","Type":"ContainerStarted","Data":"c74ae9f35946c0354d259b8609426b2feca907adf91c8f3914ac03361269a456"} Jan 05 20:27:19 crc kubenswrapper[4754]: I0105 20:27:19.793901 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"3eb89248-973b-47ce-b504-b9523a605a4a","Type":"ContainerStarted","Data":"f6d9d5aebb205e1eee0d1bde7cd84984ac45d958032888a1f89f8d055de56233"} Jan 05 20:27:19 crc kubenswrapper[4754]: I0105 20:27:19.795249 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 05 20:27:20 crc kubenswrapper[4754]: I0105 20:27:20.358582 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-pbz4n-config-5c9s9" Jan 05 20:27:20 crc kubenswrapper[4754]: I0105 20:27:20.455452 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1965f8cf-449e-4d30-add2-efc666b89ea3-scripts\") pod \"1965f8cf-449e-4d30-add2-efc666b89ea3\" (UID: \"1965f8cf-449e-4d30-add2-efc666b89ea3\") " Jan 05 20:27:20 crc kubenswrapper[4754]: I0105 20:27:20.455612 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1965f8cf-449e-4d30-add2-efc666b89ea3-var-log-ovn\") pod \"1965f8cf-449e-4d30-add2-efc666b89ea3\" (UID: \"1965f8cf-449e-4d30-add2-efc666b89ea3\") " Jan 05 20:27:20 crc kubenswrapper[4754]: I0105 20:27:20.455714 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qj7jr\" (UniqueName: \"kubernetes.io/projected/1965f8cf-449e-4d30-add2-efc666b89ea3-kube-api-access-qj7jr\") pod \"1965f8cf-449e-4d30-add2-efc666b89ea3\" (UID: \"1965f8cf-449e-4d30-add2-efc666b89ea3\") " Jan 05 20:27:20 crc kubenswrapper[4754]: I0105 20:27:20.455799 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1965f8cf-449e-4d30-add2-efc666b89ea3-var-run-ovn\") pod \"1965f8cf-449e-4d30-add2-efc666b89ea3\" (UID: \"1965f8cf-449e-4d30-add2-efc666b89ea3\") " Jan 05 20:27:20 crc kubenswrapper[4754]: I0105 20:27:20.455833 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1965f8cf-449e-4d30-add2-efc666b89ea3-additional-scripts\") pod \"1965f8cf-449e-4d30-add2-efc666b89ea3\" (UID: \"1965f8cf-449e-4d30-add2-efc666b89ea3\") " Jan 05 20:27:20 crc kubenswrapper[4754]: I0105 20:27:20.456036 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" 
(UniqueName: \"kubernetes.io/host-path/1965f8cf-449e-4d30-add2-efc666b89ea3-var-run\") pod \"1965f8cf-449e-4d30-add2-efc666b89ea3\" (UID: \"1965f8cf-449e-4d30-add2-efc666b89ea3\") " Jan 05 20:27:20 crc kubenswrapper[4754]: I0105 20:27:20.456592 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1965f8cf-449e-4d30-add2-efc666b89ea3-var-run" (OuterVolumeSpecName: "var-run") pod "1965f8cf-449e-4d30-add2-efc666b89ea3" (UID: "1965f8cf-449e-4d30-add2-efc666b89ea3"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 20:27:20 crc kubenswrapper[4754]: I0105 20:27:20.457978 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1965f8cf-449e-4d30-add2-efc666b89ea3-scripts" (OuterVolumeSpecName: "scripts") pod "1965f8cf-449e-4d30-add2-efc666b89ea3" (UID: "1965f8cf-449e-4d30-add2-efc666b89ea3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:27:20 crc kubenswrapper[4754]: I0105 20:27:20.458020 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1965f8cf-449e-4d30-add2-efc666b89ea3-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "1965f8cf-449e-4d30-add2-efc666b89ea3" (UID: "1965f8cf-449e-4d30-add2-efc666b89ea3"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 20:27:20 crc kubenswrapper[4754]: I0105 20:27:20.458886 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1965f8cf-449e-4d30-add2-efc666b89ea3-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "1965f8cf-449e-4d30-add2-efc666b89ea3" (UID: "1965f8cf-449e-4d30-add2-efc666b89ea3"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 20:27:20 crc kubenswrapper[4754]: I0105 20:27:20.459488 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1965f8cf-449e-4d30-add2-efc666b89ea3-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "1965f8cf-449e-4d30-add2-efc666b89ea3" (UID: "1965f8cf-449e-4d30-add2-efc666b89ea3"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:27:20 crc kubenswrapper[4754]: I0105 20:27:20.464434 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1965f8cf-449e-4d30-add2-efc666b89ea3-kube-api-access-qj7jr" (OuterVolumeSpecName: "kube-api-access-qj7jr") pod "1965f8cf-449e-4d30-add2-efc666b89ea3" (UID: "1965f8cf-449e-4d30-add2-efc666b89ea3"). InnerVolumeSpecName "kube-api-access-qj7jr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:27:20 crc kubenswrapper[4754]: I0105 20:27:20.559098 4754 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1965f8cf-449e-4d30-add2-efc666b89ea3-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:20 crc kubenswrapper[4754]: I0105 20:27:20.559140 4754 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1965f8cf-449e-4d30-add2-efc666b89ea3-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:20 crc kubenswrapper[4754]: I0105 20:27:20.559159 4754 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1965f8cf-449e-4d30-add2-efc666b89ea3-var-run\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:20 crc kubenswrapper[4754]: I0105 20:27:20.559170 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1965f8cf-449e-4d30-add2-efc666b89ea3-scripts\") on node 
\"crc\" DevicePath \"\""
Jan 05 20:27:20 crc kubenswrapper[4754]: I0105 20:27:20.559181 4754 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1965f8cf-449e-4d30-add2-efc666b89ea3-var-log-ovn\") on node \"crc\" DevicePath \"\""
Jan 05 20:27:20 crc kubenswrapper[4754]: I0105 20:27:20.559192 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qj7jr\" (UniqueName: \"kubernetes.io/projected/1965f8cf-449e-4d30-add2-efc666b89ea3-kube-api-access-qj7jr\") on node \"crc\" DevicePath \"\""
Jan 05 20:27:20 crc kubenswrapper[4754]: I0105 20:27:20.655660 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Jan 05 20:27:20 crc kubenswrapper[4754]: I0105 20:27:20.809261 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"64fe8f1d-0c69-4fc8-aac8-c17660c2fed5","Type":"ContainerStarted","Data":"cf51e635d7fb1a91a0c9f5463b42368183976757c9154e2836a9fb88b19a8990"}
Jan 05 20:27:20 crc kubenswrapper[4754]: I0105 20:27:20.815200 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-pbz4n-config-5c9s9"
Jan 05 20:27:20 crc kubenswrapper[4754]: I0105 20:27:20.815499 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pbz4n-config-5c9s9" event={"ID":"1965f8cf-449e-4d30-add2-efc666b89ea3","Type":"ContainerDied","Data":"029b1d15919b0a69cbf427465410ad2d871d53138c939e2aa728ab3eebc6b090"}
Jan 05 20:27:20 crc kubenswrapper[4754]: I0105 20:27:20.815534 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="029b1d15919b0a69cbf427465410ad2d871d53138c939e2aa728ab3eebc6b090"
Jan 05 20:27:20 crc kubenswrapper[4754]: I0105 20:27:20.820734 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lq6gx" event={"ID":"6fe79418-5702-4809-a084-4a14fa936263","Type":"ContainerStarted","Data":"f04f8ec99d2d2635a27636120f8e453b6903b58832062427097677ac517d37df"}
Jan 05 20:27:21 crc kubenswrapper[4754]: I0105 20:27:21.467990 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-pbz4n-config-5c9s9"]
Jan 05 20:27:21 crc kubenswrapper[4754]: I0105 20:27:21.472403 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-pbz4n"
Jan 05 20:27:21 crc kubenswrapper[4754]: I0105 20:27:21.487478 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-pbz4n-config-5c9s9"]
Jan 05 20:27:21 crc kubenswrapper[4754]: I0105 20:27:21.546849 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-pbz4n-config-k2bbb"]
Jan 05 20:27:21 crc kubenswrapper[4754]: E0105 20:27:21.547334 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1965f8cf-449e-4d30-add2-efc666b89ea3" containerName="ovn-config"
Jan 05 20:27:21 crc kubenswrapper[4754]: I0105 20:27:21.547350 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="1965f8cf-449e-4d30-add2-efc666b89ea3" containerName="ovn-config"
Jan 05 20:27:21 crc kubenswrapper[4754]: I0105 20:27:21.547534 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="1965f8cf-449e-4d30-add2-efc666b89ea3" containerName="ovn-config"
Jan 05 20:27:21 crc kubenswrapper[4754]: I0105 20:27:21.548233 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-pbz4n-config-k2bbb"
Jan 05 20:27:21 crc kubenswrapper[4754]: I0105 20:27:21.553700 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Jan 05 20:27:21 crc kubenswrapper[4754]: I0105 20:27:21.562190 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-pbz4n-config-k2bbb"]
Jan 05 20:27:21 crc kubenswrapper[4754]: I0105 20:27:21.604640 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f5453422-c4c9-42e3-8355-03edc4a6dd98-additional-scripts\") pod \"ovn-controller-pbz4n-config-k2bbb\" (UID: \"f5453422-c4c9-42e3-8355-03edc4a6dd98\") " pod="openstack/ovn-controller-pbz4n-config-k2bbb"
Jan 05 20:27:21 crc kubenswrapper[4754]: I0105 20:27:21.604910 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f5453422-c4c9-42e3-8355-03edc4a6dd98-var-log-ovn\") pod \"ovn-controller-pbz4n-config-k2bbb\" (UID: \"f5453422-c4c9-42e3-8355-03edc4a6dd98\") " pod="openstack/ovn-controller-pbz4n-config-k2bbb"
Jan 05 20:27:21 crc kubenswrapper[4754]: I0105 20:27:21.605102 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f5453422-c4c9-42e3-8355-03edc4a6dd98-var-run\") pod \"ovn-controller-pbz4n-config-k2bbb\" (UID: \"f5453422-c4c9-42e3-8355-03edc4a6dd98\") " pod="openstack/ovn-controller-pbz4n-config-k2bbb"
Jan 05 20:27:21 crc kubenswrapper[4754]: I0105 20:27:21.605176 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ct9k\" (UniqueName: \"kubernetes.io/projected/f5453422-c4c9-42e3-8355-03edc4a6dd98-kube-api-access-8ct9k\") pod \"ovn-controller-pbz4n-config-k2bbb\" (UID: \"f5453422-c4c9-42e3-8355-03edc4a6dd98\") " pod="openstack/ovn-controller-pbz4n-config-k2bbb"
Jan 05 20:27:21 crc kubenswrapper[4754]: I0105 20:27:21.605574 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f5453422-c4c9-42e3-8355-03edc4a6dd98-scripts\") pod \"ovn-controller-pbz4n-config-k2bbb\" (UID: \"f5453422-c4c9-42e3-8355-03edc4a6dd98\") " pod="openstack/ovn-controller-pbz4n-config-k2bbb"
Jan 05 20:27:21 crc kubenswrapper[4754]: I0105 20:27:21.605627 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f5453422-c4c9-42e3-8355-03edc4a6dd98-var-run-ovn\") pod \"ovn-controller-pbz4n-config-k2bbb\" (UID: \"f5453422-c4c9-42e3-8355-03edc4a6dd98\") " pod="openstack/ovn-controller-pbz4n-config-k2bbb"
Jan 05 20:27:21 crc kubenswrapper[4754]: I0105 20:27:21.621386 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1965f8cf-449e-4d30-add2-efc666b89ea3" path="/var/lib/kubelet/pods/1965f8cf-449e-4d30-add2-efc666b89ea3/volumes"
Jan 05 20:27:21 crc kubenswrapper[4754]: I0105 20:27:21.709236 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f5453422-c4c9-42e3-8355-03edc4a6dd98-additional-scripts\") pod \"ovn-controller-pbz4n-config-k2bbb\" (UID: \"f5453422-c4c9-42e3-8355-03edc4a6dd98\") " pod="openstack/ovn-controller-pbz4n-config-k2bbb"
Jan 05 20:27:21 crc kubenswrapper[4754]: I0105 20:27:21.709899 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f5453422-c4c9-42e3-8355-03edc4a6dd98-var-log-ovn\") pod \"ovn-controller-pbz4n-config-k2bbb\" (UID: \"f5453422-c4c9-42e3-8355-03edc4a6dd98\") " pod="openstack/ovn-controller-pbz4n-config-k2bbb"
Jan 05 20:27:21 crc kubenswrapper[4754]: I0105 20:27:21.710367 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f5453422-c4c9-42e3-8355-03edc4a6dd98-var-log-ovn\") pod \"ovn-controller-pbz4n-config-k2bbb\" (UID: \"f5453422-c4c9-42e3-8355-03edc4a6dd98\") " pod="openstack/ovn-controller-pbz4n-config-k2bbb"
Jan 05 20:27:21 crc kubenswrapper[4754]: I0105 20:27:21.710392 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f5453422-c4c9-42e3-8355-03edc4a6dd98-var-run\") pod \"ovn-controller-pbz4n-config-k2bbb\" (UID: \"f5453422-c4c9-42e3-8355-03edc4a6dd98\") " pod="openstack/ovn-controller-pbz4n-config-k2bbb"
Jan 05 20:27:21 crc kubenswrapper[4754]: I0105 20:27:21.710959 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f5453422-c4c9-42e3-8355-03edc4a6dd98-additional-scripts\") pod \"ovn-controller-pbz4n-config-k2bbb\" (UID: \"f5453422-c4c9-42e3-8355-03edc4a6dd98\") " pod="openstack/ovn-controller-pbz4n-config-k2bbb"
Jan 05 20:27:21 crc kubenswrapper[4754]: I0105 20:27:21.711396 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f5453422-c4c9-42e3-8355-03edc4a6dd98-var-run\") pod \"ovn-controller-pbz4n-config-k2bbb\" (UID: \"f5453422-c4c9-42e3-8355-03edc4a6dd98\") " pod="openstack/ovn-controller-pbz4n-config-k2bbb"
Jan 05 20:27:21 crc kubenswrapper[4754]: I0105 20:27:21.711463 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ct9k\" (UniqueName: \"kubernetes.io/projected/f5453422-c4c9-42e3-8355-03edc4a6dd98-kube-api-access-8ct9k\") pod \"ovn-controller-pbz4n-config-k2bbb\" (UID: \"f5453422-c4c9-42e3-8355-03edc4a6dd98\") " pod="openstack/ovn-controller-pbz4n-config-k2bbb"
Jan 05 20:27:21 crc kubenswrapper[4754]: I0105 20:27:21.714549 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f5453422-c4c9-42e3-8355-03edc4a6dd98-scripts\") pod \"ovn-controller-pbz4n-config-k2bbb\" (UID: \"f5453422-c4c9-42e3-8355-03edc4a6dd98\") " pod="openstack/ovn-controller-pbz4n-config-k2bbb"
Jan 05 20:27:21 crc kubenswrapper[4754]: I0105 20:27:21.712066 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f5453422-c4c9-42e3-8355-03edc4a6dd98-scripts\") pod \"ovn-controller-pbz4n-config-k2bbb\" (UID: \"f5453422-c4c9-42e3-8355-03edc4a6dd98\") " pod="openstack/ovn-controller-pbz4n-config-k2bbb"
Jan 05 20:27:21 crc kubenswrapper[4754]: I0105 20:27:21.714651 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f5453422-c4c9-42e3-8355-03edc4a6dd98-var-run-ovn\") pod \"ovn-controller-pbz4n-config-k2bbb\" (UID: \"f5453422-c4c9-42e3-8355-03edc4a6dd98\") " pod="openstack/ovn-controller-pbz4n-config-k2bbb"
Jan 05 20:27:21 crc kubenswrapper[4754]: I0105 20:27:21.714770 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f5453422-c4c9-42e3-8355-03edc4a6dd98-var-run-ovn\") pod \"ovn-controller-pbz4n-config-k2bbb\" (UID: \"f5453422-c4c9-42e3-8355-03edc4a6dd98\") " pod="openstack/ovn-controller-pbz4n-config-k2bbb"
Jan 05 20:27:21 crc kubenswrapper[4754]: I0105 20:27:21.735132 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ct9k\" (UniqueName: \"kubernetes.io/projected/f5453422-c4c9-42e3-8355-03edc4a6dd98-kube-api-access-8ct9k\") pod \"ovn-controller-pbz4n-config-k2bbb\" (UID: \"f5453422-c4c9-42e3-8355-03edc4a6dd98\") " pod="openstack/ovn-controller-pbz4n-config-k2bbb"
Jan 05 20:27:21 crc kubenswrapper[4754]: I0105 20:27:21.832051 4754 generic.go:334] "Generic (PLEG): container finished" podID="6fe79418-5702-4809-a084-4a14fa936263" containerID="f04f8ec99d2d2635a27636120f8e453b6903b58832062427097677ac517d37df" exitCode=0
Jan 05 20:27:21 crc kubenswrapper[4754]: I0105 20:27:21.832142 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lq6gx" event={"ID":"6fe79418-5702-4809-a084-4a14fa936263","Type":"ContainerDied","Data":"f04f8ec99d2d2635a27636120f8e453b6903b58832062427097677ac517d37df"}
Jan 05 20:27:21 crc kubenswrapper[4754]: I0105 20:27:21.880756 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-pbz4n-config-k2bbb"
Jan 05 20:27:22 crc kubenswrapper[4754]: I0105 20:27:22.535767 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 05 20:27:22 crc kubenswrapper[4754]: I0105 20:27:22.536553 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="f7bfbab4-49c1-450e-a0e7-0f90a5fd342c" containerName="prometheus" containerID="cri-o://0dd28c2022baee6376a36bbbc781b1f1ae63b45c6a605c28b01f3a4d6112a539" gracePeriod=600
Jan 05 20:27:22 crc kubenswrapper[4754]: I0105 20:27:22.536728 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="f7bfbab4-49c1-450e-a0e7-0f90a5fd342c" containerName="thanos-sidecar" containerID="cri-o://a0f13466d14351031aa18b6e6fc8c4a409d534f5e6bb85da22136fe8cf61e29f" gracePeriod=600
Jan 05 20:27:22 crc kubenswrapper[4754]: I0105 20:27:22.536763 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="f7bfbab4-49c1-450e-a0e7-0f90a5fd342c" containerName="config-reloader" containerID="cri-o://77d9d0f14982754b30237f7f0277da2cd7bc39cd761d56e226f5475e5a348508" gracePeriod=600
Jan 05 20:27:22 crc kubenswrapper[4754]: I0105 20:27:22.721950 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-pbz4n-config-k2bbb"]
Jan 05 20:27:22 crc kubenswrapper[4754]: W0105 20:27:22.766276 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5453422_c4c9_42e3_8355_03edc4a6dd98.slice/crio-c4154c6c11279f9cd6148e396c1aaad3e0c1b1b4bbd869993ed493e427072189 WatchSource:0}: Error finding container c4154c6c11279f9cd6148e396c1aaad3e0c1b1b4bbd869993ed493e427072189: Status 404 returned error can't find the container with id c4154c6c11279f9cd6148e396c1aaad3e0c1b1b4bbd869993ed493e427072189
Jan 05 20:27:22 crc kubenswrapper[4754]: I0105 20:27:22.883036 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pbz4n-config-k2bbb" event={"ID":"f5453422-c4c9-42e3-8355-03edc4a6dd98","Type":"ContainerStarted","Data":"c4154c6c11279f9cd6148e396c1aaad3e0c1b1b4bbd869993ed493e427072189"}
Jan 05 20:27:22 crc kubenswrapper[4754]: I0105 20:27:22.890540 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"3eb89248-973b-47ce-b504-b9523a605a4a","Type":"ContainerStarted","Data":"91ff8bfb81937751315924f656413abd633b9db1c54d278661fbd3a25fcdf862"}
Jan 05 20:27:22 crc kubenswrapper[4754]: I0105 20:27:22.893502 4754 generic.go:334] "Generic (PLEG): container finished" podID="f7bfbab4-49c1-450e-a0e7-0f90a5fd342c" containerID="a0f13466d14351031aa18b6e6fc8c4a409d534f5e6bb85da22136fe8cf61e29f" exitCode=0
Jan 05 20:27:22 crc kubenswrapper[4754]: I0105 20:27:22.893525 4754 generic.go:334] "Generic (PLEG): container finished" podID="f7bfbab4-49c1-450e-a0e7-0f90a5fd342c" containerID="77d9d0f14982754b30237f7f0277da2cd7bc39cd761d56e226f5475e5a348508" exitCode=0
Jan 05 20:27:22 crc kubenswrapper[4754]: I0105 20:27:22.893534 4754 generic.go:334] "Generic (PLEG): container finished" podID="f7bfbab4-49c1-450e-a0e7-0f90a5fd342c" containerID="0dd28c2022baee6376a36bbbc781b1f1ae63b45c6a605c28b01f3a4d6112a539" exitCode=0
Jan 05 20:27:22 crc kubenswrapper[4754]: I0105 20:27:22.893595 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c","Type":"ContainerDied","Data":"a0f13466d14351031aa18b6e6fc8c4a409d534f5e6bb85da22136fe8cf61e29f"}
Jan 05 20:27:22 crc kubenswrapper[4754]: I0105 20:27:22.893615 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c","Type":"ContainerDied","Data":"77d9d0f14982754b30237f7f0277da2cd7bc39cd761d56e226f5475e5a348508"}
Jan 05 20:27:22 crc kubenswrapper[4754]: I0105 20:27:22.893652 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c","Type":"ContainerDied","Data":"0dd28c2022baee6376a36bbbc781b1f1ae63b45c6a605c28b01f3a4d6112a539"}
Jan 05 20:27:22 crc kubenswrapper[4754]: I0105 20:27:22.896247 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"64fe8f1d-0c69-4fc8-aac8-c17660c2fed5","Type":"ContainerStarted","Data":"f0e69c4adba0937251d9df560221040cae5b98c8025a5985f71e5aaf48dea2c8"}
Jan 05 20:27:22 crc kubenswrapper[4754]: I0105 20:27:22.912817 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.369710708 podStartE2EDuration="5.912796489s" podCreationTimestamp="2026-01-05 20:27:17 +0000 UTC" firstStartedPulling="2026-01-05 20:27:18.822270097 +0000 UTC m=+1325.531453971" lastFinishedPulling="2026-01-05 20:27:22.365355878 +0000 UTC m=+1329.074539752" observedRunningTime="2026-01-05 20:27:22.903589528 +0000 UTC m=+1329.612773402" watchObservedRunningTime="2026-01-05 20:27:22.912796489 +0000 UTC m=+1329.621980363"
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.382534 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lq6gx"
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.466107 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fe79418-5702-4809-a084-4a14fa936263-operator-scripts\") pod \"6fe79418-5702-4809-a084-4a14fa936263\" (UID: \"6fe79418-5702-4809-a084-4a14fa936263\") "
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.466333 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7k2f\" (UniqueName: \"kubernetes.io/projected/6fe79418-5702-4809-a084-4a14fa936263-kube-api-access-n7k2f\") pod \"6fe79418-5702-4809-a084-4a14fa936263\" (UID: \"6fe79418-5702-4809-a084-4a14fa936263\") "
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.468706 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fe79418-5702-4809-a084-4a14fa936263-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6fe79418-5702-4809-a084-4a14fa936263" (UID: "6fe79418-5702-4809-a084-4a14fa936263"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.469167 4754 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fe79418-5702-4809-a084-4a14fa936263-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.475632 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fe79418-5702-4809-a084-4a14fa936263-kube-api-access-n7k2f" (OuterVolumeSpecName: "kube-api-access-n7k2f") pod "6fe79418-5702-4809-a084-4a14fa936263" (UID: "6fe79418-5702-4809-a084-4a14fa936263"). InnerVolumeSpecName "kube-api-access-n7k2f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.557251 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.570860 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7k2f\" (UniqueName: \"kubernetes.io/projected/6fe79418-5702-4809-a084-4a14fa936263-kube-api-access-n7k2f\") on node \"crc\" DevicePath \"\""
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.673903 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f7bfbab4-49c1-450e-a0e7-0f90a5fd342c-config-out\") pod \"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c\" (UID: \"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c\") "
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.674053 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f7bfbab4-49c1-450e-a0e7-0f90a5fd342c-tls-assets\") pod \"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c\" (UID: \"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c\") "
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.674090 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/f7bfbab4-49c1-450e-a0e7-0f90a5fd342c-prometheus-metric-storage-rulefiles-2\") pod \"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c\" (UID: \"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c\") "
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.674124 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tv69\" (UniqueName: \"kubernetes.io/projected/f7bfbab4-49c1-450e-a0e7-0f90a5fd342c-kube-api-access-7tv69\") pod \"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c\" (UID: \"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c\") "
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.674159 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f7bfbab4-49c1-450e-a0e7-0f90a5fd342c-thanos-prometheus-http-client-file\") pod \"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c\" (UID: \"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c\") "
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.674215 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f7bfbab4-49c1-450e-a0e7-0f90a5fd342c-config\") pod \"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c\" (UID: \"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c\") "
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.674269 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/f7bfbab4-49c1-450e-a0e7-0f90a5fd342c-prometheus-metric-storage-rulefiles-1\") pod \"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c\" (UID: \"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c\") "
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.676098 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7bfbab4-49c1-450e-a0e7-0f90a5fd342c-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "f7bfbab4-49c1-450e-a0e7-0f90a5fd342c" (UID: "f7bfbab4-49c1-450e-a0e7-0f90a5fd342c"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.678424 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d7044f3d-5c83-4c10-a579-3b80348451fd\") pod \"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c\" (UID: \"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c\") "
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.678469 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f7bfbab4-49c1-450e-a0e7-0f90a5fd342c-web-config\") pod \"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c\" (UID: \"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c\") "
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.678511 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7bfbab4-49c1-450e-a0e7-0f90a5fd342c-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "f7bfbab4-49c1-450e-a0e7-0f90a5fd342c" (UID: "f7bfbab4-49c1-450e-a0e7-0f90a5fd342c"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.678533 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f7bfbab4-49c1-450e-a0e7-0f90a5fd342c-prometheus-metric-storage-rulefiles-0\") pod \"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c\" (UID: \"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c\") "
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.679194 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7bfbab4-49c1-450e-a0e7-0f90a5fd342c-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "f7bfbab4-49c1-450e-a0e7-0f90a5fd342c" (UID: "f7bfbab4-49c1-450e-a0e7-0f90a5fd342c"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.681063 4754 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f7bfbab4-49c1-450e-a0e7-0f90a5fd342c-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\""
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.681097 4754 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/f7bfbab4-49c1-450e-a0e7-0f90a5fd342c-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\""
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.681115 4754 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/f7bfbab4-49c1-450e-a0e7-0f90a5fd342c-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\""
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.689646 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7bfbab4-49c1-450e-a0e7-0f90a5fd342c-config-out" (OuterVolumeSpecName: "config-out") pod "f7bfbab4-49c1-450e-a0e7-0f90a5fd342c" (UID: "f7bfbab4-49c1-450e-a0e7-0f90a5fd342c"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.689748 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7bfbab4-49c1-450e-a0e7-0f90a5fd342c-config" (OuterVolumeSpecName: "config") pod "f7bfbab4-49c1-450e-a0e7-0f90a5fd342c" (UID: "f7bfbab4-49c1-450e-a0e7-0f90a5fd342c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.689802 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7bfbab4-49c1-450e-a0e7-0f90a5fd342c-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "f7bfbab4-49c1-450e-a0e7-0f90a5fd342c" (UID: "f7bfbab4-49c1-450e-a0e7-0f90a5fd342c"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.689949 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7bfbab4-49c1-450e-a0e7-0f90a5fd342c-kube-api-access-7tv69" (OuterVolumeSpecName: "kube-api-access-7tv69") pod "f7bfbab4-49c1-450e-a0e7-0f90a5fd342c" (UID: "f7bfbab4-49c1-450e-a0e7-0f90a5fd342c"). InnerVolumeSpecName "kube-api-access-7tv69". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.700798 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7bfbab4-49c1-450e-a0e7-0f90a5fd342c-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "f7bfbab4-49c1-450e-a0e7-0f90a5fd342c" (UID: "f7bfbab4-49c1-450e-a0e7-0f90a5fd342c"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.716059 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7bfbab4-49c1-450e-a0e7-0f90a5fd342c-web-config" (OuterVolumeSpecName: "web-config") pod "f7bfbab4-49c1-450e-a0e7-0f90a5fd342c" (UID: "f7bfbab4-49c1-450e-a0e7-0f90a5fd342c"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.741735 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d7044f3d-5c83-4c10-a579-3b80348451fd" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "f7bfbab4-49c1-450e-a0e7-0f90a5fd342c" (UID: "f7bfbab4-49c1-450e-a0e7-0f90a5fd342c"). InnerVolumeSpecName "pvc-d7044f3d-5c83-4c10-a579-3b80348451fd". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.782858 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/f7bfbab4-49c1-450e-a0e7-0f90a5fd342c-config\") on node \"crc\" DevicePath \"\""
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.783216 4754 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-d7044f3d-5c83-4c10-a579-3b80348451fd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d7044f3d-5c83-4c10-a579-3b80348451fd\") on node \"crc\" "
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.783232 4754 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f7bfbab4-49c1-450e-a0e7-0f90a5fd342c-web-config\") on node \"crc\" DevicePath \"\""
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.783244 4754 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f7bfbab4-49c1-450e-a0e7-0f90a5fd342c-config-out\") on node \"crc\" DevicePath \"\""
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.783252 4754 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f7bfbab4-49c1-450e-a0e7-0f90a5fd342c-tls-assets\") on node \"crc\" DevicePath \"\""
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.783262 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tv69\" (UniqueName: \"kubernetes.io/projected/f7bfbab4-49c1-450e-a0e7-0f90a5fd342c-kube-api-access-7tv69\") on node \"crc\" DevicePath \"\""
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.783273 4754 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f7bfbab4-49c1-450e-a0e7-0f90a5fd342c-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\""
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.820482 4754 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.820767 4754 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-d7044f3d-5c83-4c10-a579-3b80348451fd" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d7044f3d-5c83-4c10-a579-3b80348451fd") on node "crc"
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.884695 4754 reconciler_common.go:293] "Volume detached for volume \"pvc-d7044f3d-5c83-4c10-a579-3b80348451fd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d7044f3d-5c83-4c10-a579-3b80348451fd\") on node \"crc\" DevicePath \"\""
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.919959 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f7bfbab4-49c1-450e-a0e7-0f90a5fd342c","Type":"ContainerDied","Data":"6d29cbfa9e69b8d3bd2ffcddf884ba437fa29214d4d5a522b5fb902771930e8b"}
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.920050 4754 scope.go:117] "RemoveContainer" containerID="a0f13466d14351031aa18b6e6fc8c4a409d534f5e6bb85da22136fe8cf61e29f"
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.920065 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.934053 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lq6gx" event={"ID":"6fe79418-5702-4809-a084-4a14fa936263","Type":"ContainerDied","Data":"c74ae9f35946c0354d259b8609426b2feca907adf91c8f3914ac03361269a456"}
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.934076 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lq6gx"
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.934095 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c74ae9f35946c0354d259b8609426b2feca907adf91c8f3914ac03361269a456"
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.940127 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"64fe8f1d-0c69-4fc8-aac8-c17660c2fed5","Type":"ContainerStarted","Data":"ec9d4bf65b96892393593f9ee6aa9fd645edfdc991a2adef2dca5143099f31da"}
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.940164 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"64fe8f1d-0c69-4fc8-aac8-c17660c2fed5","Type":"ContainerStarted","Data":"f867c79764001da6a0ef13ff1990f0050fd5c74045101f4781540fbea866d699"}
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.940175 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"64fe8f1d-0c69-4fc8-aac8-c17660c2fed5","Type":"ContainerStarted","Data":"b6f5e832a4a3949a592670cb90f8da8711fda336abe0378e2d7aaa06498d0c96"}
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.944548 4754 generic.go:334] "Generic (PLEG): container finished" podID="f5453422-c4c9-42e3-8355-03edc4a6dd98" containerID="659c676cb6539d7b1f9c0bd1508b3423e26ff4a1e9b2746c68673bbf2d09a935" exitCode=0
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.944597 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pbz4n-config-k2bbb" event={"ID":"f5453422-c4c9-42e3-8355-03edc4a6dd98","Type":"ContainerDied","Data":"659c676cb6539d7b1f9c0bd1508b3423e26ff4a1e9b2746c68673bbf2d09a935"}
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.960407 4754 scope.go:117] "RemoveContainer" containerID="77d9d0f14982754b30237f7f0277da2cd7bc39cd761d56e226f5475e5a348508"
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.969202 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.978937 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 05 20:27:23 crc kubenswrapper[4754]: I0105 20:27:23.997392 4754 scope.go:117] "RemoveContainer" containerID="0dd28c2022baee6376a36bbbc781b1f1ae63b45c6a605c28b01f3a4d6112a539"
Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.008790 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 05 20:27:24 crc kubenswrapper[4754]: E0105 20:27:24.009419 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fe79418-5702-4809-a084-4a14fa936263" containerName="mariadb-account-create-update"
Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.009439 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fe79418-5702-4809-a084-4a14fa936263" containerName="mariadb-account-create-update"
Jan 05 20:27:24 crc kubenswrapper[4754]: E0105 20:27:24.009455 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7bfbab4-49c1-450e-a0e7-0f90a5fd342c" containerName="config-reloader"
Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.009462 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7bfbab4-49c1-450e-a0e7-0f90a5fd342c" containerName="config-reloader"
Jan 05 20:27:24 crc kubenswrapper[4754]: E0105 20:27:24.009479 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7bfbab4-49c1-450e-a0e7-0f90a5fd342c" containerName="prometheus"
Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.009485 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7bfbab4-49c1-450e-a0e7-0f90a5fd342c" containerName="prometheus"
Jan 05 20:27:24 crc kubenswrapper[4754]: E0105 20:27:24.009500 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7bfbab4-49c1-450e-a0e7-0f90a5fd342c" containerName="thanos-sidecar"
Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.009506 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7bfbab4-49c1-450e-a0e7-0f90a5fd342c" containerName="thanos-sidecar"
Jan 05 20:27:24 crc kubenswrapper[4754]: E0105 20:27:24.009516 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7bfbab4-49c1-450e-a0e7-0f90a5fd342c" containerName="init-config-reloader"
Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.009523 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7bfbab4-49c1-450e-a0e7-0f90a5fd342c" containerName="init-config-reloader"
Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.009734 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7bfbab4-49c1-450e-a0e7-0f90a5fd342c" containerName="prometheus"
Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.009746 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7bfbab4-49c1-450e-a0e7-0f90a5fd342c" containerName="config-reloader"
Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.009753 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7bfbab4-49c1-450e-a0e7-0f90a5fd342c" containerName="thanos-sidecar"
Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.009768 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fe79418-5702-4809-a084-4a14fa936263" containerName="mariadb-account-create-update"
Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.012393 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.017241 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.017608 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc"
Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.017803 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2"
Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.017952 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1"
Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.018104 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-w785j"
Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.021178 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.021386 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.022974 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.027332 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.031418 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.046169 4754 scope.go:117] "RemoveContainer"
containerID="9290eed6896593eb91d3c212bbfc132ea8a9a99563c4a1ef07f8f1cfa9f58266" Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.088435 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/652366fc-9032-455e-9e13-b71fd3ff76e3-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"652366fc-9032-455e-9e13-b71fd3ff76e3\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.088487 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/652366fc-9032-455e-9e13-b71fd3ff76e3-config\") pod \"prometheus-metric-storage-0\" (UID: \"652366fc-9032-455e-9e13-b71fd3ff76e3\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.088518 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d7044f3d-5c83-4c10-a579-3b80348451fd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d7044f3d-5c83-4c10-a579-3b80348451fd\") pod \"prometheus-metric-storage-0\" (UID: \"652366fc-9032-455e-9e13-b71fd3ff76e3\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.088548 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95kdz\" (UniqueName: \"kubernetes.io/projected/652366fc-9032-455e-9e13-b71fd3ff76e3-kube-api-access-95kdz\") pod \"prometheus-metric-storage-0\" (UID: \"652366fc-9032-455e-9e13-b71fd3ff76e3\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.088605 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-out\" (UniqueName: \"kubernetes.io/empty-dir/652366fc-9032-455e-9e13-b71fd3ff76e3-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"652366fc-9032-455e-9e13-b71fd3ff76e3\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.088671 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/652366fc-9032-455e-9e13-b71fd3ff76e3-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"652366fc-9032-455e-9e13-b71fd3ff76e3\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.088855 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/652366fc-9032-455e-9e13-b71fd3ff76e3-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"652366fc-9032-455e-9e13-b71fd3ff76e3\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.088914 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/652366fc-9032-455e-9e13-b71fd3ff76e3-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"652366fc-9032-455e-9e13-b71fd3ff76e3\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.088969 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/652366fc-9032-455e-9e13-b71fd3ff76e3-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"652366fc-9032-455e-9e13-b71fd3ff76e3\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.089006 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/652366fc-9032-455e-9e13-b71fd3ff76e3-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"652366fc-9032-455e-9e13-b71fd3ff76e3\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.089176 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/652366fc-9032-455e-9e13-b71fd3ff76e3-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"652366fc-9032-455e-9e13-b71fd3ff76e3\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.089307 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/652366fc-9032-455e-9e13-b71fd3ff76e3-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"652366fc-9032-455e-9e13-b71fd3ff76e3\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.089444 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/652366fc-9032-455e-9e13-b71fd3ff76e3-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"652366fc-9032-455e-9e13-b71fd3ff76e3\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:27:24 crc kubenswrapper[4754]: E0105 20:27:24.180178 4754 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7bfbab4_49c1_450e_a0e7_0f90a5fd342c.slice\": RecentStats: unable to find data in memory cache]" Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.191637 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/652366fc-9032-455e-9e13-b71fd3ff76e3-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"652366fc-9032-455e-9e13-b71fd3ff76e3\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.191713 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/652366fc-9032-455e-9e13-b71fd3ff76e3-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"652366fc-9032-455e-9e13-b71fd3ff76e3\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.191769 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/652366fc-9032-455e-9e13-b71fd3ff76e3-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"652366fc-9032-455e-9e13-b71fd3ff76e3\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.191832 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/652366fc-9032-455e-9e13-b71fd3ff76e3-config\") pod \"prometheus-metric-storage-0\" (UID: \"652366fc-9032-455e-9e13-b71fd3ff76e3\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.191890 4754 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d7044f3d-5c83-4c10-a579-3b80348451fd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d7044f3d-5c83-4c10-a579-3b80348451fd\") pod \"prometheus-metric-storage-0\" (UID: \"652366fc-9032-455e-9e13-b71fd3ff76e3\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.191913 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95kdz\" (UniqueName: \"kubernetes.io/projected/652366fc-9032-455e-9e13-b71fd3ff76e3-kube-api-access-95kdz\") pod \"prometheus-metric-storage-0\" (UID: \"652366fc-9032-455e-9e13-b71fd3ff76e3\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.191974 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/652366fc-9032-455e-9e13-b71fd3ff76e3-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"652366fc-9032-455e-9e13-b71fd3ff76e3\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.191999 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/652366fc-9032-455e-9e13-b71fd3ff76e3-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"652366fc-9032-455e-9e13-b71fd3ff76e3\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.192038 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/652366fc-9032-455e-9e13-b71fd3ff76e3-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"652366fc-9032-455e-9e13-b71fd3ff76e3\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:27:24 crc 
kubenswrapper[4754]: I0105 20:27:24.192062 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/652366fc-9032-455e-9e13-b71fd3ff76e3-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"652366fc-9032-455e-9e13-b71fd3ff76e3\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.192093 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/652366fc-9032-455e-9e13-b71fd3ff76e3-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"652366fc-9032-455e-9e13-b71fd3ff76e3\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.192125 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/652366fc-9032-455e-9e13-b71fd3ff76e3-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"652366fc-9032-455e-9e13-b71fd3ff76e3\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.192187 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/652366fc-9032-455e-9e13-b71fd3ff76e3-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"652366fc-9032-455e-9e13-b71fd3ff76e3\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.193603 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/652366fc-9032-455e-9e13-b71fd3ff76e3-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"652366fc-9032-455e-9e13-b71fd3ff76e3\") " 
pod="openstack/prometheus-metric-storage-0" Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.195129 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/652366fc-9032-455e-9e13-b71fd3ff76e3-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"652366fc-9032-455e-9e13-b71fd3ff76e3\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.195877 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/652366fc-9032-455e-9e13-b71fd3ff76e3-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"652366fc-9032-455e-9e13-b71fd3ff76e3\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.198990 4754 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.199235 4754 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d7044f3d-5c83-4c10-a579-3b80348451fd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d7044f3d-5c83-4c10-a579-3b80348451fd\") pod \"prometheus-metric-storage-0\" (UID: \"652366fc-9032-455e-9e13-b71fd3ff76e3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/22e5596610b592b729e0b507ea995333fbeb2dec5ee98d2efedb79ca7d9a4cc2/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.201861 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/652366fc-9032-455e-9e13-b71fd3ff76e3-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"652366fc-9032-455e-9e13-b71fd3ff76e3\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.202713 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/652366fc-9032-455e-9e13-b71fd3ff76e3-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"652366fc-9032-455e-9e13-b71fd3ff76e3\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.204376 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/652366fc-9032-455e-9e13-b71fd3ff76e3-config\") pod \"prometheus-metric-storage-0\" (UID: \"652366fc-9032-455e-9e13-b71fd3ff76e3\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.209230 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/652366fc-9032-455e-9e13-b71fd3ff76e3-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"652366fc-9032-455e-9e13-b71fd3ff76e3\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.209445 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/652366fc-9032-455e-9e13-b71fd3ff76e3-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"652366fc-9032-455e-9e13-b71fd3ff76e3\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.210131 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/652366fc-9032-455e-9e13-b71fd3ff76e3-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"652366fc-9032-455e-9e13-b71fd3ff76e3\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.211382 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/652366fc-9032-455e-9e13-b71fd3ff76e3-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"652366fc-9032-455e-9e13-b71fd3ff76e3\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.211756 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95kdz\" (UniqueName: \"kubernetes.io/projected/652366fc-9032-455e-9e13-b71fd3ff76e3-kube-api-access-95kdz\") pod \"prometheus-metric-storage-0\" (UID: \"652366fc-9032-455e-9e13-b71fd3ff76e3\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.211998 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" 
(UniqueName: \"kubernetes.io/secret/652366fc-9032-455e-9e13-b71fd3ff76e3-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"652366fc-9032-455e-9e13-b71fd3ff76e3\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.249694 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d7044f3d-5c83-4c10-a579-3b80348451fd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d7044f3d-5c83-4c10-a579-3b80348451fd\") pod \"prometheus-metric-storage-0\" (UID: \"652366fc-9032-455e-9e13-b71fd3ff76e3\") " pod="openstack/prometheus-metric-storage-0" Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.346510 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.869608 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 05 20:27:24 crc kubenswrapper[4754]: W0105 20:27:24.884028 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod652366fc_9032_455e_9e13_b71fd3ff76e3.slice/crio-d2bb872cf649e664b98d093871f03d3422c2bc30312911d1a4e3d9d54f2d9c0f WatchSource:0}: Error finding container d2bb872cf649e664b98d093871f03d3422c2bc30312911d1a4e3d9d54f2d9c0f: Status 404 returned error can't find the container with id d2bb872cf649e664b98d093871f03d3422c2bc30312911d1a4e3d9d54f2d9c0f Jan 05 20:27:24 crc kubenswrapper[4754]: I0105 20:27:24.978349 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"652366fc-9032-455e-9e13-b71fd3ff76e3","Type":"ContainerStarted","Data":"d2bb872cf649e664b98d093871f03d3422c2bc30312911d1a4e3d9d54f2d9c0f"} Jan 05 20:27:25 crc kubenswrapper[4754]: I0105 20:27:25.320367 4754 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/ovn-controller-pbz4n-config-k2bbb" Jan 05 20:27:25 crc kubenswrapper[4754]: I0105 20:27:25.341102 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f5453422-c4c9-42e3-8355-03edc4a6dd98-var-run-ovn\") pod \"f5453422-c4c9-42e3-8355-03edc4a6dd98\" (UID: \"f5453422-c4c9-42e3-8355-03edc4a6dd98\") " Jan 05 20:27:25 crc kubenswrapper[4754]: I0105 20:27:25.341254 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f5453422-c4c9-42e3-8355-03edc4a6dd98-var-run\") pod \"f5453422-c4c9-42e3-8355-03edc4a6dd98\" (UID: \"f5453422-c4c9-42e3-8355-03edc4a6dd98\") " Jan 05 20:27:25 crc kubenswrapper[4754]: I0105 20:27:25.341516 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ct9k\" (UniqueName: \"kubernetes.io/projected/f5453422-c4c9-42e3-8355-03edc4a6dd98-kube-api-access-8ct9k\") pod \"f5453422-c4c9-42e3-8355-03edc4a6dd98\" (UID: \"f5453422-c4c9-42e3-8355-03edc4a6dd98\") " Jan 05 20:27:25 crc kubenswrapper[4754]: I0105 20:27:25.341632 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f5453422-c4c9-42e3-8355-03edc4a6dd98-additional-scripts\") pod \"f5453422-c4c9-42e3-8355-03edc4a6dd98\" (UID: \"f5453422-c4c9-42e3-8355-03edc4a6dd98\") " Jan 05 20:27:25 crc kubenswrapper[4754]: I0105 20:27:25.341686 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f5453422-c4c9-42e3-8355-03edc4a6dd98-scripts\") pod \"f5453422-c4c9-42e3-8355-03edc4a6dd98\" (UID: \"f5453422-c4c9-42e3-8355-03edc4a6dd98\") " Jan 05 20:27:25 crc kubenswrapper[4754]: I0105 20:27:25.341718 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f5453422-c4c9-42e3-8355-03edc4a6dd98-var-log-ovn\") pod \"f5453422-c4c9-42e3-8355-03edc4a6dd98\" (UID: \"f5453422-c4c9-42e3-8355-03edc4a6dd98\") " Jan 05 20:27:25 crc kubenswrapper[4754]: I0105 20:27:25.342206 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5453422-c4c9-42e3-8355-03edc4a6dd98-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "f5453422-c4c9-42e3-8355-03edc4a6dd98" (UID: "f5453422-c4c9-42e3-8355-03edc4a6dd98"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 20:27:25 crc kubenswrapper[4754]: I0105 20:27:25.342237 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5453422-c4c9-42e3-8355-03edc4a6dd98-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "f5453422-c4c9-42e3-8355-03edc4a6dd98" (UID: "f5453422-c4c9-42e3-8355-03edc4a6dd98"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 20:27:25 crc kubenswrapper[4754]: I0105 20:27:25.342252 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5453422-c4c9-42e3-8355-03edc4a6dd98-var-run" (OuterVolumeSpecName: "var-run") pod "f5453422-c4c9-42e3-8355-03edc4a6dd98" (UID: "f5453422-c4c9-42e3-8355-03edc4a6dd98"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 20:27:25 crc kubenswrapper[4754]: I0105 20:27:25.344013 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5453422-c4c9-42e3-8355-03edc4a6dd98-scripts" (OuterVolumeSpecName: "scripts") pod "f5453422-c4c9-42e3-8355-03edc4a6dd98" (UID: "f5453422-c4c9-42e3-8355-03edc4a6dd98"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:27:25 crc kubenswrapper[4754]: I0105 20:27:25.344635 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5453422-c4c9-42e3-8355-03edc4a6dd98-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "f5453422-c4c9-42e3-8355-03edc4a6dd98" (UID: "f5453422-c4c9-42e3-8355-03edc4a6dd98"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:27:25 crc kubenswrapper[4754]: I0105 20:27:25.351948 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5453422-c4c9-42e3-8355-03edc4a6dd98-kube-api-access-8ct9k" (OuterVolumeSpecName: "kube-api-access-8ct9k") pod "f5453422-c4c9-42e3-8355-03edc4a6dd98" (UID: "f5453422-c4c9-42e3-8355-03edc4a6dd98"). InnerVolumeSpecName "kube-api-access-8ct9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:27:25 crc kubenswrapper[4754]: I0105 20:27:25.443183 4754 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f5453422-c4c9-42e3-8355-03edc4a6dd98-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:25 crc kubenswrapper[4754]: I0105 20:27:25.443223 4754 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f5453422-c4c9-42e3-8355-03edc4a6dd98-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:25 crc kubenswrapper[4754]: I0105 20:27:25.443233 4754 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f5453422-c4c9-42e3-8355-03edc4a6dd98-var-run\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:25 crc kubenswrapper[4754]: I0105 20:27:25.443242 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ct9k\" (UniqueName: \"kubernetes.io/projected/f5453422-c4c9-42e3-8355-03edc4a6dd98-kube-api-access-8ct9k\") 
on node \"crc\" DevicePath \"\"" Jan 05 20:27:25 crc kubenswrapper[4754]: I0105 20:27:25.443251 4754 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f5453422-c4c9-42e3-8355-03edc4a6dd98-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:25 crc kubenswrapper[4754]: I0105 20:27:25.443259 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f5453422-c4c9-42e3-8355-03edc4a6dd98-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:25 crc kubenswrapper[4754]: I0105 20:27:25.601406 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7bfbab4-49c1-450e-a0e7-0f90a5fd342c" path="/var/lib/kubelet/pods/f7bfbab4-49c1-450e-a0e7-0f90a5fd342c/volumes" Jan 05 20:27:25 crc kubenswrapper[4754]: I0105 20:27:25.996811 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"64fe8f1d-0c69-4fc8-aac8-c17660c2fed5","Type":"ContainerStarted","Data":"3a838c7592e6ae930ddb9af2ec23829c82edba5b9b923013cd56c76ba6220f02"} Jan 05 20:27:25 crc kubenswrapper[4754]: I0105 20:27:25.996880 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"64fe8f1d-0c69-4fc8-aac8-c17660c2fed5","Type":"ContainerStarted","Data":"60a5c3c33036abe4253d17741c9c447e5a9aa46e0c32322dede7abd8ed56c9e7"} Jan 05 20:27:25 crc kubenswrapper[4754]: I0105 20:27:25.996892 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"64fe8f1d-0c69-4fc8-aac8-c17660c2fed5","Type":"ContainerStarted","Data":"d7b0d7969c34d31918d4f536a17596f84d9ae1c7fb9ea66662cbdafc3c039f84"} Jan 05 20:27:26 crc kubenswrapper[4754]: I0105 20:27:26.003170 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pbz4n-config-k2bbb" 
event={"ID":"f5453422-c4c9-42e3-8355-03edc4a6dd98","Type":"ContainerDied","Data":"c4154c6c11279f9cd6148e396c1aaad3e0c1b1b4bbd869993ed493e427072189"} Jan 05 20:27:26 crc kubenswrapper[4754]: I0105 20:27:26.003557 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4154c6c11279f9cd6148e396c1aaad3e0c1b1b4bbd869993ed493e427072189" Jan 05 20:27:26 crc kubenswrapper[4754]: I0105 20:27:26.003224 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-pbz4n-config-k2bbb" Jan 05 20:27:26 crc kubenswrapper[4754]: I0105 20:27:26.392754 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-pbz4n-config-k2bbb"] Jan 05 20:27:26 crc kubenswrapper[4754]: I0105 20:27:26.403894 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-pbz4n-config-k2bbb"] Jan 05 20:27:27 crc kubenswrapper[4754]: I0105 20:27:27.517429 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 05 20:27:27 crc kubenswrapper[4754]: I0105 20:27:27.534557 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Jan 05 20:27:27 crc kubenswrapper[4754]: I0105 20:27:27.543476 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Jan 05 20:27:27 crc kubenswrapper[4754]: I0105 20:27:27.626512 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5453422-c4c9-42e3-8355-03edc4a6dd98" path="/var/lib/kubelet/pods/f5453422-c4c9-42e3-8355-03edc4a6dd98/volumes" Jan 05 20:27:28 crc kubenswrapper[4754]: I0105 20:27:28.025591 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"652366fc-9032-455e-9e13-b71fd3ff76e3","Type":"ContainerStarted","Data":"046bea54bae6bf1f3b51b91e18113cd9a190fa74ded79b5a02af1514fb4bcd17"} Jan 05 20:27:29 crc 
kubenswrapper[4754]: I0105 20:27:29.770156 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-whhvh"] Jan 05 20:27:29 crc kubenswrapper[4754]: E0105 20:27:29.772412 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5453422-c4c9-42e3-8355-03edc4a6dd98" containerName="ovn-config" Jan 05 20:27:29 crc kubenswrapper[4754]: I0105 20:27:29.772547 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5453422-c4c9-42e3-8355-03edc4a6dd98" containerName="ovn-config" Jan 05 20:27:29 crc kubenswrapper[4754]: I0105 20:27:29.772843 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5453422-c4c9-42e3-8355-03edc4a6dd98" containerName="ovn-config" Jan 05 20:27:29 crc kubenswrapper[4754]: I0105 20:27:29.773826 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-whhvh" Jan 05 20:27:29 crc kubenswrapper[4754]: I0105 20:27:29.785373 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-4530-account-create-update-6t94n"] Jan 05 20:27:29 crc kubenswrapper[4754]: I0105 20:27:29.787271 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-4530-account-create-update-6t94n" Jan 05 20:27:29 crc kubenswrapper[4754]: I0105 20:27:29.790261 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 05 20:27:29 crc kubenswrapper[4754]: I0105 20:27:29.803561 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-whhvh"] Jan 05 20:27:29 crc kubenswrapper[4754]: I0105 20:27:29.829509 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-4530-account-create-update-6t94n"] Jan 05 20:27:29 crc kubenswrapper[4754]: I0105 20:27:29.866695 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/495f6c79-65c2-4b10-8695-1d11ee63fa93-operator-scripts\") pod \"barbican-4530-account-create-update-6t94n\" (UID: \"495f6c79-65c2-4b10-8695-1d11ee63fa93\") " pod="openstack/barbican-4530-account-create-update-6t94n" Jan 05 20:27:29 crc kubenswrapper[4754]: I0105 20:27:29.866747 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjf76\" (UniqueName: \"kubernetes.io/projected/495f6c79-65c2-4b10-8695-1d11ee63fa93-kube-api-access-tjf76\") pod \"barbican-4530-account-create-update-6t94n\" (UID: \"495f6c79-65c2-4b10-8695-1d11ee63fa93\") " pod="openstack/barbican-4530-account-create-update-6t94n" Jan 05 20:27:29 crc kubenswrapper[4754]: I0105 20:27:29.866802 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e6372e5-4136-46a8-aee5-181b6277ad6b-operator-scripts\") pod \"cinder-db-create-whhvh\" (UID: \"5e6372e5-4136-46a8-aee5-181b6277ad6b\") " pod="openstack/cinder-db-create-whhvh" Jan 05 20:27:29 crc kubenswrapper[4754]: I0105 20:27:29.866839 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txp5m\" (UniqueName: \"kubernetes.io/projected/5e6372e5-4136-46a8-aee5-181b6277ad6b-kube-api-access-txp5m\") pod \"cinder-db-create-whhvh\" (UID: \"5e6372e5-4136-46a8-aee5-181b6277ad6b\") " pod="openstack/cinder-db-create-whhvh" Jan 05 20:27:29 crc kubenswrapper[4754]: I0105 20:27:29.921359 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-225d-account-create-update-9f5mg"] Jan 05 20:27:29 crc kubenswrapper[4754]: I0105 20:27:29.922978 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-225d-account-create-update-9f5mg" Jan 05 20:27:29 crc kubenswrapper[4754]: I0105 20:27:29.931697 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 05 20:27:29 crc kubenswrapper[4754]: I0105 20:27:29.975763 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be1d6939-e90a-46a4-87f0-05abc48224b9-operator-scripts\") pod \"cinder-225d-account-create-update-9f5mg\" (UID: \"be1d6939-e90a-46a4-87f0-05abc48224b9\") " pod="openstack/cinder-225d-account-create-update-9f5mg" Jan 05 20:27:29 crc kubenswrapper[4754]: I0105 20:27:29.976158 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/495f6c79-65c2-4b10-8695-1d11ee63fa93-operator-scripts\") pod \"barbican-4530-account-create-update-6t94n\" (UID: \"495f6c79-65c2-4b10-8695-1d11ee63fa93\") " pod="openstack/barbican-4530-account-create-update-6t94n" Jan 05 20:27:29 crc kubenswrapper[4754]: I0105 20:27:29.976284 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjf76\" (UniqueName: \"kubernetes.io/projected/495f6c79-65c2-4b10-8695-1d11ee63fa93-kube-api-access-tjf76\") pod 
\"barbican-4530-account-create-update-6t94n\" (UID: \"495f6c79-65c2-4b10-8695-1d11ee63fa93\") " pod="openstack/barbican-4530-account-create-update-6t94n" Jan 05 20:27:29 crc kubenswrapper[4754]: I0105 20:27:29.976479 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e6372e5-4136-46a8-aee5-181b6277ad6b-operator-scripts\") pod \"cinder-db-create-whhvh\" (UID: \"5e6372e5-4136-46a8-aee5-181b6277ad6b\") " pod="openstack/cinder-db-create-whhvh" Jan 05 20:27:29 crc kubenswrapper[4754]: I0105 20:27:29.976630 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txp5m\" (UniqueName: \"kubernetes.io/projected/5e6372e5-4136-46a8-aee5-181b6277ad6b-kube-api-access-txp5m\") pod \"cinder-db-create-whhvh\" (UID: \"5e6372e5-4136-46a8-aee5-181b6277ad6b\") " pod="openstack/cinder-db-create-whhvh" Jan 05 20:27:29 crc kubenswrapper[4754]: I0105 20:27:29.976809 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x869k\" (UniqueName: \"kubernetes.io/projected/be1d6939-e90a-46a4-87f0-05abc48224b9-kube-api-access-x869k\") pod \"cinder-225d-account-create-update-9f5mg\" (UID: \"be1d6939-e90a-46a4-87f0-05abc48224b9\") " pod="openstack/cinder-225d-account-create-update-9f5mg" Jan 05 20:27:29 crc kubenswrapper[4754]: I0105 20:27:29.978500 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e6372e5-4136-46a8-aee5-181b6277ad6b-operator-scripts\") pod \"cinder-db-create-whhvh\" (UID: \"5e6372e5-4136-46a8-aee5-181b6277ad6b\") " pod="openstack/cinder-db-create-whhvh" Jan 05 20:27:29 crc kubenswrapper[4754]: I0105 20:27:29.978932 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-225d-account-create-update-9f5mg"] Jan 05 20:27:29 crc kubenswrapper[4754]: I0105 20:27:29.982722 4754 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/495f6c79-65c2-4b10-8695-1d11ee63fa93-operator-scripts\") pod \"barbican-4530-account-create-update-6t94n\" (UID: \"495f6c79-65c2-4b10-8695-1d11ee63fa93\") " pod="openstack/barbican-4530-account-create-update-6t94n" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.014356 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-zhqf4"] Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.016352 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjf76\" (UniqueName: \"kubernetes.io/projected/495f6c79-65c2-4b10-8695-1d11ee63fa93-kube-api-access-tjf76\") pod \"barbican-4530-account-create-update-6t94n\" (UID: \"495f6c79-65c2-4b10-8695-1d11ee63fa93\") " pod="openstack/barbican-4530-account-create-update-6t94n" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.016808 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-zhqf4" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.020231 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.020423 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-dkn46" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.020534 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.020723 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.022096 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txp5m\" (UniqueName: \"kubernetes.io/projected/5e6372e5-4136-46a8-aee5-181b6277ad6b-kube-api-access-txp5m\") pod \"cinder-db-create-whhvh\" (UID: \"5e6372e5-4136-46a8-aee5-181b6277ad6b\") " pod="openstack/cinder-db-create-whhvh" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.048044 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-zhqf4"] Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.057743 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-gx6mr"] Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.059654 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-gx6mr" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.079562 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/687ba708-6c04-4435-9acd-76dfdb4311e2-config-data\") pod \"keystone-db-sync-zhqf4\" (UID: \"687ba708-6c04-4435-9acd-76dfdb4311e2\") " pod="openstack/keystone-db-sync-zhqf4" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.079609 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/687ba708-6c04-4435-9acd-76dfdb4311e2-combined-ca-bundle\") pod \"keystone-db-sync-zhqf4\" (UID: \"687ba708-6c04-4435-9acd-76dfdb4311e2\") " pod="openstack/keystone-db-sync-zhqf4" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.079662 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4p8l\" (UniqueName: \"kubernetes.io/projected/687ba708-6c04-4435-9acd-76dfdb4311e2-kube-api-access-p4p8l\") pod \"keystone-db-sync-zhqf4\" (UID: \"687ba708-6c04-4435-9acd-76dfdb4311e2\") " pod="openstack/keystone-db-sync-zhqf4" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.079786 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x869k\" (UniqueName: \"kubernetes.io/projected/be1d6939-e90a-46a4-87f0-05abc48224b9-kube-api-access-x869k\") pod \"cinder-225d-account-create-update-9f5mg\" (UID: \"be1d6939-e90a-46a4-87f0-05abc48224b9\") " pod="openstack/cinder-225d-account-create-update-9f5mg" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.079836 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be1d6939-e90a-46a4-87f0-05abc48224b9-operator-scripts\") pod \"cinder-225d-account-create-update-9f5mg\" 
(UID: \"be1d6939-e90a-46a4-87f0-05abc48224b9\") " pod="openstack/cinder-225d-account-create-update-9f5mg" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.080498 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be1d6939-e90a-46a4-87f0-05abc48224b9-operator-scripts\") pod \"cinder-225d-account-create-update-9f5mg\" (UID: \"be1d6939-e90a-46a4-87f0-05abc48224b9\") " pod="openstack/cinder-225d-account-create-update-9f5mg" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.091630 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-81e1-account-create-update-dqq7l"] Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.093372 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-81e1-account-create-update-dqq7l" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.095430 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.097072 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-whhvh" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.100090 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x869k\" (UniqueName: \"kubernetes.io/projected/be1d6939-e90a-46a4-87f0-05abc48224b9-kube-api-access-x869k\") pod \"cinder-225d-account-create-update-9f5mg\" (UID: \"be1d6939-e90a-46a4-87f0-05abc48224b9\") " pod="openstack/cinder-225d-account-create-update-9f5mg" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.102818 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-gx6mr"] Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.105203 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-4530-account-create-update-6t94n" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.115924 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-81e1-account-create-update-dqq7l"] Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.181836 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a1c1900-32c4-4245-ae03-f71435b2259f-operator-scripts\") pod \"barbican-db-create-gx6mr\" (UID: \"4a1c1900-32c4-4245-ae03-f71435b2259f\") " pod="openstack/barbican-db-create-gx6mr" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.182164 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bab2e4a-be19-46b8-8f52-f1b59b44c4f6-operator-scripts\") pod \"heat-81e1-account-create-update-dqq7l\" (UID: \"6bab2e4a-be19-46b8-8f52-f1b59b44c4f6\") " pod="openstack/heat-81e1-account-create-update-dqq7l" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.182281 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/687ba708-6c04-4435-9acd-76dfdb4311e2-config-data\") pod \"keystone-db-sync-zhqf4\" (UID: \"687ba708-6c04-4435-9acd-76dfdb4311e2\") " pod="openstack/keystone-db-sync-zhqf4" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.182324 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/687ba708-6c04-4435-9acd-76dfdb4311e2-combined-ca-bundle\") pod \"keystone-db-sync-zhqf4\" (UID: \"687ba708-6c04-4435-9acd-76dfdb4311e2\") " pod="openstack/keystone-db-sync-zhqf4" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.182364 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-jmv9h\" (UniqueName: \"kubernetes.io/projected/6bab2e4a-be19-46b8-8f52-f1b59b44c4f6-kube-api-access-jmv9h\") pod \"heat-81e1-account-create-update-dqq7l\" (UID: \"6bab2e4a-be19-46b8-8f52-f1b59b44c4f6\") " pod="openstack/heat-81e1-account-create-update-dqq7l" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.182388 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh99m\" (UniqueName: \"kubernetes.io/projected/4a1c1900-32c4-4245-ae03-f71435b2259f-kube-api-access-rh99m\") pod \"barbican-db-create-gx6mr\" (UID: \"4a1c1900-32c4-4245-ae03-f71435b2259f\") " pod="openstack/barbican-db-create-gx6mr" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.182409 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4p8l\" (UniqueName: \"kubernetes.io/projected/687ba708-6c04-4435-9acd-76dfdb4311e2-kube-api-access-p4p8l\") pod \"keystone-db-sync-zhqf4\" (UID: \"687ba708-6c04-4435-9acd-76dfdb4311e2\") " pod="openstack/keystone-db-sync-zhqf4" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.186553 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/687ba708-6c04-4435-9acd-76dfdb4311e2-combined-ca-bundle\") pod \"keystone-db-sync-zhqf4\" (UID: \"687ba708-6c04-4435-9acd-76dfdb4311e2\") " pod="openstack/keystone-db-sync-zhqf4" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.188837 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/687ba708-6c04-4435-9acd-76dfdb4311e2-config-data\") pod \"keystone-db-sync-zhqf4\" (UID: \"687ba708-6c04-4435-9acd-76dfdb4311e2\") " pod="openstack/keystone-db-sync-zhqf4" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.191032 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-6nvbl"] Jan 05 20:27:30 crc 
kubenswrapper[4754]: I0105 20:27:30.192641 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-6nvbl" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.197893 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4p8l\" (UniqueName: \"kubernetes.io/projected/687ba708-6c04-4435-9acd-76dfdb4311e2-kube-api-access-p4p8l\") pod \"keystone-db-sync-zhqf4\" (UID: \"687ba708-6c04-4435-9acd-76dfdb4311e2\") " pod="openstack/keystone-db-sync-zhqf4" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.207806 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-6nvbl"] Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.284963 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ac29b87-268c-4066-a428-db4d5b5b595f-operator-scripts\") pod \"heat-db-create-6nvbl\" (UID: \"1ac29b87-268c-4066-a428-db4d5b5b595f\") " pod="openstack/heat-db-create-6nvbl" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.285008 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a1c1900-32c4-4245-ae03-f71435b2259f-operator-scripts\") pod \"barbican-db-create-gx6mr\" (UID: \"4a1c1900-32c4-4245-ae03-f71435b2259f\") " pod="openstack/barbican-db-create-gx6mr" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.285069 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bab2e4a-be19-46b8-8f52-f1b59b44c4f6-operator-scripts\") pod \"heat-81e1-account-create-update-dqq7l\" (UID: \"6bab2e4a-be19-46b8-8f52-f1b59b44c4f6\") " pod="openstack/heat-81e1-account-create-update-dqq7l" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.285131 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gps79\" (UniqueName: \"kubernetes.io/projected/1ac29b87-268c-4066-a428-db4d5b5b595f-kube-api-access-gps79\") pod \"heat-db-create-6nvbl\" (UID: \"1ac29b87-268c-4066-a428-db4d5b5b595f\") " pod="openstack/heat-db-create-6nvbl" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.285277 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmv9h\" (UniqueName: \"kubernetes.io/projected/6bab2e4a-be19-46b8-8f52-f1b59b44c4f6-kube-api-access-jmv9h\") pod \"heat-81e1-account-create-update-dqq7l\" (UID: \"6bab2e4a-be19-46b8-8f52-f1b59b44c4f6\") " pod="openstack/heat-81e1-account-create-update-dqq7l" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.285319 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rh99m\" (UniqueName: \"kubernetes.io/projected/4a1c1900-32c4-4245-ae03-f71435b2259f-kube-api-access-rh99m\") pod \"barbican-db-create-gx6mr\" (UID: \"4a1c1900-32c4-4245-ae03-f71435b2259f\") " pod="openstack/barbican-db-create-gx6mr" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.286035 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a1c1900-32c4-4245-ae03-f71435b2259f-operator-scripts\") pod \"barbican-db-create-gx6mr\" (UID: \"4a1c1900-32c4-4245-ae03-f71435b2259f\") " pod="openstack/barbican-db-create-gx6mr" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.286717 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bab2e4a-be19-46b8-8f52-f1b59b44c4f6-operator-scripts\") pod \"heat-81e1-account-create-update-dqq7l\" (UID: \"6bab2e4a-be19-46b8-8f52-f1b59b44c4f6\") " pod="openstack/heat-81e1-account-create-update-dqq7l" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.292936 4754 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/cinder-225d-account-create-update-9f5mg" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.300251 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmv9h\" (UniqueName: \"kubernetes.io/projected/6bab2e4a-be19-46b8-8f52-f1b59b44c4f6-kube-api-access-jmv9h\") pod \"heat-81e1-account-create-update-dqq7l\" (UID: \"6bab2e4a-be19-46b8-8f52-f1b59b44c4f6\") " pod="openstack/heat-81e1-account-create-update-dqq7l" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.300471 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rh99m\" (UniqueName: \"kubernetes.io/projected/4a1c1900-32c4-4245-ae03-f71435b2259f-kube-api-access-rh99m\") pod \"barbican-db-create-gx6mr\" (UID: \"4a1c1900-32c4-4245-ae03-f71435b2259f\") " pod="openstack/barbican-db-create-gx6mr" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.358124 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-pbdm4"] Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.370344 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-pbdm4" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.389753 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gps79\" (UniqueName: \"kubernetes.io/projected/1ac29b87-268c-4066-a428-db4d5b5b595f-kube-api-access-gps79\") pod \"heat-db-create-6nvbl\" (UID: \"1ac29b87-268c-4066-a428-db4d5b5b595f\") " pod="openstack/heat-db-create-6nvbl" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.390114 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ac29b87-268c-4066-a428-db4d5b5b595f-operator-scripts\") pod \"heat-db-create-6nvbl\" (UID: \"1ac29b87-268c-4066-a428-db4d5b5b595f\") " pod="openstack/heat-db-create-6nvbl" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.391246 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ac29b87-268c-4066-a428-db4d5b5b595f-operator-scripts\") pod \"heat-db-create-6nvbl\" (UID: \"1ac29b87-268c-4066-a428-db4d5b5b595f\") " pod="openstack/heat-db-create-6nvbl" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.399558 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-59a7-account-create-update-t9cmc"] Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.401546 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-59a7-account-create-update-t9cmc" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.407762 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.411106 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-59a7-account-create-update-t9cmc"] Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.424815 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-pbdm4"] Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.425744 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gps79\" (UniqueName: \"kubernetes.io/projected/1ac29b87-268c-4066-a428-db4d5b5b595f-kube-api-access-gps79\") pod \"heat-db-create-6nvbl\" (UID: \"1ac29b87-268c-4066-a428-db4d5b5b595f\") " pod="openstack/heat-db-create-6nvbl" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.460965 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-zhqf4" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.478693 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-gx6mr" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.485493 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-81e1-account-create-update-dqq7l" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.491683 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm5q9\" (UniqueName: \"kubernetes.io/projected/656e1694-0fbd-4f0a-b1cd-50594514afd3-kube-api-access-vm5q9\") pod \"neutron-59a7-account-create-update-t9cmc\" (UID: \"656e1694-0fbd-4f0a-b1cd-50594514afd3\") " pod="openstack/neutron-59a7-account-create-update-t9cmc" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.491818 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjdc5\" (UniqueName: \"kubernetes.io/projected/85825127-1783-4a9b-8205-60891099a53e-kube-api-access-gjdc5\") pod \"neutron-db-create-pbdm4\" (UID: \"85825127-1783-4a9b-8205-60891099a53e\") " pod="openstack/neutron-db-create-pbdm4" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.492011 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85825127-1783-4a9b-8205-60891099a53e-operator-scripts\") pod \"neutron-db-create-pbdm4\" (UID: \"85825127-1783-4a9b-8205-60891099a53e\") " pod="openstack/neutron-db-create-pbdm4" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.492133 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/656e1694-0fbd-4f0a-b1cd-50594514afd3-operator-scripts\") pod \"neutron-59a7-account-create-update-t9cmc\" (UID: \"656e1694-0fbd-4f0a-b1cd-50594514afd3\") " pod="openstack/neutron-59a7-account-create-update-t9cmc" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.588992 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-6nvbl" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.597939 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm5q9\" (UniqueName: \"kubernetes.io/projected/656e1694-0fbd-4f0a-b1cd-50594514afd3-kube-api-access-vm5q9\") pod \"neutron-59a7-account-create-update-t9cmc\" (UID: \"656e1694-0fbd-4f0a-b1cd-50594514afd3\") " pod="openstack/neutron-59a7-account-create-update-t9cmc" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.598112 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjdc5\" (UniqueName: \"kubernetes.io/projected/85825127-1783-4a9b-8205-60891099a53e-kube-api-access-gjdc5\") pod \"neutron-db-create-pbdm4\" (UID: \"85825127-1783-4a9b-8205-60891099a53e\") " pod="openstack/neutron-db-create-pbdm4" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.598202 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85825127-1783-4a9b-8205-60891099a53e-operator-scripts\") pod \"neutron-db-create-pbdm4\" (UID: \"85825127-1783-4a9b-8205-60891099a53e\") " pod="openstack/neutron-db-create-pbdm4" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.598284 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/656e1694-0fbd-4f0a-b1cd-50594514afd3-operator-scripts\") pod \"neutron-59a7-account-create-update-t9cmc\" (UID: \"656e1694-0fbd-4f0a-b1cd-50594514afd3\") " pod="openstack/neutron-59a7-account-create-update-t9cmc" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.599027 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85825127-1783-4a9b-8205-60891099a53e-operator-scripts\") pod \"neutron-db-create-pbdm4\" (UID: 
\"85825127-1783-4a9b-8205-60891099a53e\") " pod="openstack/neutron-db-create-pbdm4" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.600466 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/656e1694-0fbd-4f0a-b1cd-50594514afd3-operator-scripts\") pod \"neutron-59a7-account-create-update-t9cmc\" (UID: \"656e1694-0fbd-4f0a-b1cd-50594514afd3\") " pod="openstack/neutron-59a7-account-create-update-t9cmc" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.620052 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm5q9\" (UniqueName: \"kubernetes.io/projected/656e1694-0fbd-4f0a-b1cd-50594514afd3-kube-api-access-vm5q9\") pod \"neutron-59a7-account-create-update-t9cmc\" (UID: \"656e1694-0fbd-4f0a-b1cd-50594514afd3\") " pod="openstack/neutron-59a7-account-create-update-t9cmc" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.620744 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjdc5\" (UniqueName: \"kubernetes.io/projected/85825127-1783-4a9b-8205-60891099a53e-kube-api-access-gjdc5\") pod \"neutron-db-create-pbdm4\" (UID: \"85825127-1783-4a9b-8205-60891099a53e\") " pod="openstack/neutron-db-create-pbdm4" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.692928 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-pbdm4" Jan 05 20:27:30 crc kubenswrapper[4754]: I0105 20:27:30.734532 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-59a7-account-create-update-t9cmc" Jan 05 20:27:34 crc kubenswrapper[4754]: I0105 20:27:34.325744 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-59a7-account-create-update-t9cmc"] Jan 05 20:27:34 crc kubenswrapper[4754]: W0105 20:27:34.330672 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod656e1694_0fbd_4f0a_b1cd_50594514afd3.slice/crio-ea27d680ced9b48cc05f70815b2c7631fb3aa99f1d0aeca9baf34976ebf811af WatchSource:0}: Error finding container ea27d680ced9b48cc05f70815b2c7631fb3aa99f1d0aeca9baf34976ebf811af: Status 404 returned error can't find the container with id ea27d680ced9b48cc05f70815b2c7631fb3aa99f1d0aeca9baf34976ebf811af Jan 05 20:27:34 crc kubenswrapper[4754]: I0105 20:27:34.524319 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-whhvh"] Jan 05 20:27:34 crc kubenswrapper[4754]: W0105 20:27:34.526947 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ac29b87_268c_4066_a428_db4d5b5b595f.slice/crio-e8737e6fc16f4d763551f8d90b776f13fdc6be44a5799100c20b918d13b87d95 WatchSource:0}: Error finding container e8737e6fc16f4d763551f8d90b776f13fdc6be44a5799100c20b918d13b87d95: Status 404 returned error can't find the container with id e8737e6fc16f4d763551f8d90b776f13fdc6be44a5799100c20b918d13b87d95 Jan 05 20:27:34 crc kubenswrapper[4754]: I0105 20:27:34.536864 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-6nvbl"] Jan 05 20:27:34 crc kubenswrapper[4754]: I0105 20:27:34.545192 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-pbdm4"] Jan 05 20:27:34 crc kubenswrapper[4754]: I0105 20:27:34.736903 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-81e1-account-create-update-dqq7l"] Jan 05 
20:27:34 crc kubenswrapper[4754]: W0105 20:27:34.920540 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a1c1900_32c4_4245_ae03_f71435b2259f.slice/crio-28acba13008bb270da63ed511c0e3a285a957af1042d37fdb0fad984ac5f4c99 WatchSource:0}: Error finding container 28acba13008bb270da63ed511c0e3a285a957af1042d37fdb0fad984ac5f4c99: Status 404 returned error can't find the container with id 28acba13008bb270da63ed511c0e3a285a957af1042d37fdb0fad984ac5f4c99 Jan 05 20:27:34 crc kubenswrapper[4754]: I0105 20:27:34.923006 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-gx6mr"] Jan 05 20:27:34 crc kubenswrapper[4754]: I0105 20:27:34.938717 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-4530-account-create-update-6t94n"] Jan 05 20:27:34 crc kubenswrapper[4754]: I0105 20:27:34.952190 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-zhqf4"] Jan 05 20:27:34 crc kubenswrapper[4754]: W0105 20:27:34.971410 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod495f6c79_65c2_4b10_8695_1d11ee63fa93.slice/crio-f5cd1021e9561118a07f8732cd431fba6fb991bc562014929aed57505e88cf1d WatchSource:0}: Error finding container f5cd1021e9561118a07f8732cd431fba6fb991bc562014929aed57505e88cf1d: Status 404 returned error can't find the container with id f5cd1021e9561118a07f8732cd431fba6fb991bc562014929aed57505e88cf1d Jan 05 20:27:35 crc kubenswrapper[4754]: I0105 20:27:35.027029 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-225d-account-create-update-9f5mg"] Jan 05 20:27:35 crc kubenswrapper[4754]: W0105 20:27:35.051805 4754 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe1d6939_e90a_46a4_87f0_05abc48224b9.slice/crio-b87a0f04c46ef38eb12f8a918aaf5e9f5547b70b36500f6cddf761d7a95e8ec8 WatchSource:0}: Error finding container b87a0f04c46ef38eb12f8a918aaf5e9f5547b70b36500f6cddf761d7a95e8ec8: Status 404 returned error can't find the container with id b87a0f04c46ef38eb12f8a918aaf5e9f5547b70b36500f6cddf761d7a95e8ec8 Jan 05 20:27:35 crc kubenswrapper[4754]: I0105 20:27:35.116516 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-6nvbl" event={"ID":"1ac29b87-268c-4066-a428-db4d5b5b595f","Type":"ContainerStarted","Data":"e8737e6fc16f4d763551f8d90b776f13fdc6be44a5799100c20b918d13b87d95"} Jan 05 20:27:35 crc kubenswrapper[4754]: I0105 20:27:35.118344 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-zhqf4" event={"ID":"687ba708-6c04-4435-9acd-76dfdb4311e2","Type":"ContainerStarted","Data":"93140c66ccd5f0f34e8248a3ea52bb36405a0ab1b59437a9f9d5db6c5dc86870"} Jan 05 20:27:35 crc kubenswrapper[4754]: I0105 20:27:35.119812 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-81e1-account-create-update-dqq7l" event={"ID":"6bab2e4a-be19-46b8-8f52-f1b59b44c4f6","Type":"ContainerStarted","Data":"db576639b91ca170483e0727411b00ac21bf685ad520cde9ed483a4eb3d4ddb5"} Jan 05 20:27:35 crc kubenswrapper[4754]: I0105 20:27:35.121153 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4530-account-create-update-6t94n" event={"ID":"495f6c79-65c2-4b10-8695-1d11ee63fa93","Type":"ContainerStarted","Data":"f5cd1021e9561118a07f8732cd431fba6fb991bc562014929aed57505e88cf1d"} Jan 05 20:27:35 crc kubenswrapper[4754]: I0105 20:27:35.122370 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-225d-account-create-update-9f5mg" 
event={"ID":"be1d6939-e90a-46a4-87f0-05abc48224b9","Type":"ContainerStarted","Data":"b87a0f04c46ef38eb12f8a918aaf5e9f5547b70b36500f6cddf761d7a95e8ec8"} Jan 05 20:27:35 crc kubenswrapper[4754]: I0105 20:27:35.123731 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-gx6mr" event={"ID":"4a1c1900-32c4-4245-ae03-f71435b2259f","Type":"ContainerStarted","Data":"28acba13008bb270da63ed511c0e3a285a957af1042d37fdb0fad984ac5f4c99"} Jan 05 20:27:35 crc kubenswrapper[4754]: I0105 20:27:35.124878 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-pbdm4" event={"ID":"85825127-1783-4a9b-8205-60891099a53e","Type":"ContainerStarted","Data":"f7995519822e4e1c91d86eedee8a6d665043df46d38bd30890a39e6f07466a8b"} Jan 05 20:27:35 crc kubenswrapper[4754]: I0105 20:27:35.128871 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59a7-account-create-update-t9cmc" event={"ID":"656e1694-0fbd-4f0a-b1cd-50594514afd3","Type":"ContainerStarted","Data":"ea27d680ced9b48cc05f70815b2c7631fb3aa99f1d0aeca9baf34976ebf811af"} Jan 05 20:27:35 crc kubenswrapper[4754]: I0105 20:27:35.130871 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-whhvh" event={"ID":"5e6372e5-4136-46a8-aee5-181b6277ad6b","Type":"ContainerStarted","Data":"bce5e874ea63bd9592cdeff406b6c7a30f79270ac631af8150ed0537e99c0eb9"} Jan 05 20:27:38 crc kubenswrapper[4754]: I0105 20:27:38.167039 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-whhvh" event={"ID":"5e6372e5-4136-46a8-aee5-181b6277ad6b","Type":"ContainerStarted","Data":"f2481a7e32308a3560cf4b6ed88971b004cd7c6ff276c9faf771528546734f5b"} Jan 05 20:27:38 crc kubenswrapper[4754]: I0105 20:27:38.168895 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-6nvbl" 
event={"ID":"1ac29b87-268c-4066-a428-db4d5b5b595f","Type":"ContainerStarted","Data":"d65acbcb780b9254a8dfc2d6215f478c822304f1158b4c400b1e0dfa3c9a76a5"} Jan 05 20:27:38 crc kubenswrapper[4754]: I0105 20:27:38.174264 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"64fe8f1d-0c69-4fc8-aac8-c17660c2fed5","Type":"ContainerStarted","Data":"0a92ac0e37e1d38d9454f30d1cd76e28b45edbbd756a706d11b40e1ddc4db1f1"} Jan 05 20:27:38 crc kubenswrapper[4754]: I0105 20:27:38.176681 4754 generic.go:334] "Generic (PLEG): container finished" podID="652366fc-9032-455e-9e13-b71fd3ff76e3" containerID="046bea54bae6bf1f3b51b91e18113cd9a190fa74ded79b5a02af1514fb4bcd17" exitCode=0 Jan 05 20:27:38 crc kubenswrapper[4754]: I0105 20:27:38.176754 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"652366fc-9032-455e-9e13-b71fd3ff76e3","Type":"ContainerDied","Data":"046bea54bae6bf1f3b51b91e18113cd9a190fa74ded79b5a02af1514fb4bcd17"} Jan 05 20:27:38 crc kubenswrapper[4754]: I0105 20:27:38.178541 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4530-account-create-update-6t94n" event={"ID":"495f6c79-65c2-4b10-8695-1d11ee63fa93","Type":"ContainerStarted","Data":"7f57e8c92d7d6c226361a6311cda27e9b68bdcf3938a1c9ade5a48ed14163f6f"} Jan 05 20:27:38 crc kubenswrapper[4754]: I0105 20:27:38.180120 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-gx6mr" event={"ID":"4a1c1900-32c4-4245-ae03-f71435b2259f","Type":"ContainerStarted","Data":"503dc31b309feaca547af6cbe95cae3c765190807d3f146653e482cdb320d7f4"} Jan 05 20:27:38 crc kubenswrapper[4754]: I0105 20:27:38.183451 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-pbdm4" event={"ID":"85825127-1783-4a9b-8205-60891099a53e","Type":"ContainerStarted","Data":"88402220f780722e07e98d69e4607de220f89b445d1a1b47e4f5916d1aa48b4c"} Jan 05 20:27:38 crc 
kubenswrapper[4754]: I0105 20:27:38.185326 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-81e1-account-create-update-dqq7l" event={"ID":"6bab2e4a-be19-46b8-8f52-f1b59b44c4f6","Type":"ContainerStarted","Data":"bddede9335a3863f5f6f3f1baa10d638ba1e78a009c4ea03c4d0fb2092b5902e"} Jan 05 20:27:38 crc kubenswrapper[4754]: I0105 20:27:38.186966 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-j8qkw" event={"ID":"08106021-9f77-4a54-8ec2-de2bfe4db63c","Type":"ContainerStarted","Data":"52fe1a81686ee298e7aaaff20bb3e70e2d827582442d4c382c1a2f6d57b860a0"} Jan 05 20:27:38 crc kubenswrapper[4754]: I0105 20:27:38.189151 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59a7-account-create-update-t9cmc" event={"ID":"656e1694-0fbd-4f0a-b1cd-50594514afd3","Type":"ContainerStarted","Data":"83f01fecb7e758b786af5a7077862210a02323ef61016ea06e2b3653a65b02d4"} Jan 05 20:27:39 crc kubenswrapper[4754]: I0105 20:27:39.198981 4754 generic.go:334] "Generic (PLEG): container finished" podID="85825127-1783-4a9b-8205-60891099a53e" containerID="88402220f780722e07e98d69e4607de220f89b445d1a1b47e4f5916d1aa48b4c" exitCode=0 Jan 05 20:27:39 crc kubenswrapper[4754]: I0105 20:27:39.199559 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-pbdm4" event={"ID":"85825127-1783-4a9b-8205-60891099a53e","Type":"ContainerDied","Data":"88402220f780722e07e98d69e4607de220f89b445d1a1b47e4f5916d1aa48b4c"} Jan 05 20:27:39 crc kubenswrapper[4754]: I0105 20:27:39.202206 4754 generic.go:334] "Generic (PLEG): container finished" podID="6bab2e4a-be19-46b8-8f52-f1b59b44c4f6" containerID="bddede9335a3863f5f6f3f1baa10d638ba1e78a009c4ea03c4d0fb2092b5902e" exitCode=0 Jan 05 20:27:39 crc kubenswrapper[4754]: I0105 20:27:39.202272 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-81e1-account-create-update-dqq7l" 
event={"ID":"6bab2e4a-be19-46b8-8f52-f1b59b44c4f6","Type":"ContainerDied","Data":"bddede9335a3863f5f6f3f1baa10d638ba1e78a009c4ea03c4d0fb2092b5902e"} Jan 05 20:27:39 crc kubenswrapper[4754]: I0105 20:27:39.206735 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"652366fc-9032-455e-9e13-b71fd3ff76e3","Type":"ContainerStarted","Data":"591c3512b6132e09f32ebb00ecd922d2a541ceb94a16e87c4bea761859e23419"} Jan 05 20:27:39 crc kubenswrapper[4754]: I0105 20:27:39.214665 4754 generic.go:334] "Generic (PLEG): container finished" podID="495f6c79-65c2-4b10-8695-1d11ee63fa93" containerID="7f57e8c92d7d6c226361a6311cda27e9b68bdcf3938a1c9ade5a48ed14163f6f" exitCode=0 Jan 05 20:27:39 crc kubenswrapper[4754]: I0105 20:27:39.214719 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4530-account-create-update-6t94n" event={"ID":"495f6c79-65c2-4b10-8695-1d11ee63fa93","Type":"ContainerDied","Data":"7f57e8c92d7d6c226361a6311cda27e9b68bdcf3938a1c9ade5a48ed14163f6f"} Jan 05 20:27:39 crc kubenswrapper[4754]: I0105 20:27:39.222810 4754 generic.go:334] "Generic (PLEG): container finished" podID="be1d6939-e90a-46a4-87f0-05abc48224b9" containerID="99d91e63a5ebad8b3a8214a7598cf8ef71778f0098ca29e1527ef777cd1af988" exitCode=0 Jan 05 20:27:39 crc kubenswrapper[4754]: I0105 20:27:39.222924 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-225d-account-create-update-9f5mg" event={"ID":"be1d6939-e90a-46a4-87f0-05abc48224b9","Type":"ContainerDied","Data":"99d91e63a5ebad8b3a8214a7598cf8ef71778f0098ca29e1527ef777cd1af988"} Jan 05 20:27:39 crc kubenswrapper[4754]: I0105 20:27:39.228020 4754 generic.go:334] "Generic (PLEG): container finished" podID="4a1c1900-32c4-4245-ae03-f71435b2259f" containerID="503dc31b309feaca547af6cbe95cae3c765190807d3f146653e482cdb320d7f4" exitCode=0 Jan 05 20:27:39 crc kubenswrapper[4754]: I0105 20:27:39.228082 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-db-create-gx6mr" event={"ID":"4a1c1900-32c4-4245-ae03-f71435b2259f","Type":"ContainerDied","Data":"503dc31b309feaca547af6cbe95cae3c765190807d3f146653e482cdb320d7f4"} Jan 05 20:27:39 crc kubenswrapper[4754]: I0105 20:27:39.230556 4754 generic.go:334] "Generic (PLEG): container finished" podID="1ac29b87-268c-4066-a428-db4d5b5b595f" containerID="d65acbcb780b9254a8dfc2d6215f478c822304f1158b4c400b1e0dfa3c9a76a5" exitCode=0 Jan 05 20:27:39 crc kubenswrapper[4754]: I0105 20:27:39.230660 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-6nvbl" event={"ID":"1ac29b87-268c-4066-a428-db4d5b5b595f","Type":"ContainerDied","Data":"d65acbcb780b9254a8dfc2d6215f478c822304f1158b4c400b1e0dfa3c9a76a5"} Jan 05 20:27:39 crc kubenswrapper[4754]: I0105 20:27:39.232441 4754 generic.go:334] "Generic (PLEG): container finished" podID="656e1694-0fbd-4f0a-b1cd-50594514afd3" containerID="83f01fecb7e758b786af5a7077862210a02323ef61016ea06e2b3653a65b02d4" exitCode=0 Jan 05 20:27:39 crc kubenswrapper[4754]: I0105 20:27:39.232507 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59a7-account-create-update-t9cmc" event={"ID":"656e1694-0fbd-4f0a-b1cd-50594514afd3","Type":"ContainerDied","Data":"83f01fecb7e758b786af5a7077862210a02323ef61016ea06e2b3653a65b02d4"} Jan 05 20:27:39 crc kubenswrapper[4754]: I0105 20:27:39.243073 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"64fe8f1d-0c69-4fc8-aac8-c17660c2fed5","Type":"ContainerStarted","Data":"fab41e56014c47ed1578e7130c39b4ab14e8edbd331bf2a4c19f4bcdc8408686"} Jan 05 20:27:39 crc kubenswrapper[4754]: I0105 20:27:39.246782 4754 generic.go:334] "Generic (PLEG): container finished" podID="5e6372e5-4136-46a8-aee5-181b6277ad6b" containerID="f2481a7e32308a3560cf4b6ed88971b004cd7c6ff276c9faf771528546734f5b" exitCode=0 Jan 05 20:27:39 crc kubenswrapper[4754]: I0105 20:27:39.248267 4754 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/cinder-db-create-whhvh" event={"ID":"5e6372e5-4136-46a8-aee5-181b6277ad6b","Type":"ContainerDied","Data":"f2481a7e32308a3560cf4b6ed88971b004cd7c6ff276c9faf771528546734f5b"} Jan 05 20:27:39 crc kubenswrapper[4754]: I0105 20:27:39.308126 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-j8qkw" podStartSLOduration=6.420850751 podStartE2EDuration="23.3081067s" podCreationTimestamp="2026-01-05 20:27:16 +0000 UTC" firstStartedPulling="2026-01-05 20:27:17.023062936 +0000 UTC m=+1323.732246800" lastFinishedPulling="2026-01-05 20:27:33.910318875 +0000 UTC m=+1340.619502749" observedRunningTime="2026-01-05 20:27:39.298740135 +0000 UTC m=+1346.007924019" watchObservedRunningTime="2026-01-05 20:27:39.3081067 +0000 UTC m=+1346.017290574" Jan 05 20:27:40 crc kubenswrapper[4754]: I0105 20:27:40.268434 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"64fe8f1d-0c69-4fc8-aac8-c17660c2fed5","Type":"ContainerStarted","Data":"a49e6f5b8ff02756a84433718f001099b267c01dcd51556879e0730ee6b5b57d"} Jan 05 20:27:40 crc kubenswrapper[4754]: I0105 20:27:40.268816 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"64fe8f1d-0c69-4fc8-aac8-c17660c2fed5","Type":"ContainerStarted","Data":"557b72d39051f177dd3eb6d80c399d731e0bcbf26390fc7b080ff43de0f30b8b"} Jan 05 20:27:40 crc kubenswrapper[4754]: I0105 20:27:40.331008 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=49.718292663 podStartE2EDuration="58.330990068s" podCreationTimestamp="2026-01-05 20:26:42 +0000 UTC" firstStartedPulling="2026-01-05 20:27:16.25653989 +0000 UTC m=+1322.965723764" lastFinishedPulling="2026-01-05 20:27:24.869237295 +0000 UTC m=+1331.578421169" observedRunningTime="2026-01-05 20:27:40.319482147 +0000 UTC m=+1347.028666031" watchObservedRunningTime="2026-01-05 
20:27:40.330990068 +0000 UTC m=+1347.040173942" Jan 05 20:27:40 crc kubenswrapper[4754]: I0105 20:27:40.623352 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-gjlm4"] Jan 05 20:27:40 crc kubenswrapper[4754]: I0105 20:27:40.625893 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-gjlm4" Jan 05 20:27:40 crc kubenswrapper[4754]: I0105 20:27:40.634505 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 05 20:27:40 crc kubenswrapper[4754]: I0105 20:27:40.638023 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-gjlm4"] Jan 05 20:27:40 crc kubenswrapper[4754]: I0105 20:27:40.770408 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvln4\" (UniqueName: \"kubernetes.io/projected/696009b0-0456-4c36-bc57-9c5ed0d81184-kube-api-access-dvln4\") pod \"dnsmasq-dns-6d5b6d6b67-gjlm4\" (UID: \"696009b0-0456-4c36-bc57-9c5ed0d81184\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-gjlm4" Jan 05 20:27:40 crc kubenswrapper[4754]: I0105 20:27:40.770500 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/696009b0-0456-4c36-bc57-9c5ed0d81184-config\") pod \"dnsmasq-dns-6d5b6d6b67-gjlm4\" (UID: \"696009b0-0456-4c36-bc57-9c5ed0d81184\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-gjlm4" Jan 05 20:27:40 crc kubenswrapper[4754]: I0105 20:27:40.770576 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/696009b0-0456-4c36-bc57-9c5ed0d81184-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-gjlm4\" (UID: \"696009b0-0456-4c36-bc57-9c5ed0d81184\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-gjlm4" Jan 05 20:27:40 crc kubenswrapper[4754]: I0105 20:27:40.770621 4754 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/696009b0-0456-4c36-bc57-9c5ed0d81184-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-gjlm4\" (UID: \"696009b0-0456-4c36-bc57-9c5ed0d81184\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-gjlm4" Jan 05 20:27:40 crc kubenswrapper[4754]: I0105 20:27:40.770677 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/696009b0-0456-4c36-bc57-9c5ed0d81184-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-gjlm4\" (UID: \"696009b0-0456-4c36-bc57-9c5ed0d81184\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-gjlm4" Jan 05 20:27:40 crc kubenswrapper[4754]: I0105 20:27:40.770708 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/696009b0-0456-4c36-bc57-9c5ed0d81184-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-gjlm4\" (UID: \"696009b0-0456-4c36-bc57-9c5ed0d81184\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-gjlm4" Jan 05 20:27:40 crc kubenswrapper[4754]: I0105 20:27:40.873270 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvln4\" (UniqueName: \"kubernetes.io/projected/696009b0-0456-4c36-bc57-9c5ed0d81184-kube-api-access-dvln4\") pod \"dnsmasq-dns-6d5b6d6b67-gjlm4\" (UID: \"696009b0-0456-4c36-bc57-9c5ed0d81184\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-gjlm4" Jan 05 20:27:40 crc kubenswrapper[4754]: I0105 20:27:40.873359 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/696009b0-0456-4c36-bc57-9c5ed0d81184-config\") pod \"dnsmasq-dns-6d5b6d6b67-gjlm4\" (UID: \"696009b0-0456-4c36-bc57-9c5ed0d81184\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-gjlm4" Jan 05 20:27:40 crc kubenswrapper[4754]: I0105 
20:27:40.873436 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/696009b0-0456-4c36-bc57-9c5ed0d81184-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-gjlm4\" (UID: \"696009b0-0456-4c36-bc57-9c5ed0d81184\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-gjlm4" Jan 05 20:27:40 crc kubenswrapper[4754]: I0105 20:27:40.873527 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/696009b0-0456-4c36-bc57-9c5ed0d81184-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-gjlm4\" (UID: \"696009b0-0456-4c36-bc57-9c5ed0d81184\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-gjlm4" Jan 05 20:27:40 crc kubenswrapper[4754]: I0105 20:27:40.873583 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/696009b0-0456-4c36-bc57-9c5ed0d81184-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-gjlm4\" (UID: \"696009b0-0456-4c36-bc57-9c5ed0d81184\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-gjlm4" Jan 05 20:27:40 crc kubenswrapper[4754]: I0105 20:27:40.873611 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/696009b0-0456-4c36-bc57-9c5ed0d81184-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-gjlm4\" (UID: \"696009b0-0456-4c36-bc57-9c5ed0d81184\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-gjlm4" Jan 05 20:27:40 crc kubenswrapper[4754]: I0105 20:27:40.875032 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/696009b0-0456-4c36-bc57-9c5ed0d81184-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-gjlm4\" (UID: \"696009b0-0456-4c36-bc57-9c5ed0d81184\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-gjlm4" Jan 05 20:27:40 crc kubenswrapper[4754]: I0105 20:27:40.875035 4754 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/696009b0-0456-4c36-bc57-9c5ed0d81184-config\") pod \"dnsmasq-dns-6d5b6d6b67-gjlm4\" (UID: \"696009b0-0456-4c36-bc57-9c5ed0d81184\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-gjlm4" Jan 05 20:27:40 crc kubenswrapper[4754]: I0105 20:27:40.875866 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/696009b0-0456-4c36-bc57-9c5ed0d81184-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-gjlm4\" (UID: \"696009b0-0456-4c36-bc57-9c5ed0d81184\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-gjlm4" Jan 05 20:27:40 crc kubenswrapper[4754]: I0105 20:27:40.876353 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/696009b0-0456-4c36-bc57-9c5ed0d81184-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-gjlm4\" (UID: \"696009b0-0456-4c36-bc57-9c5ed0d81184\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-gjlm4" Jan 05 20:27:40 crc kubenswrapper[4754]: I0105 20:27:40.876279 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/696009b0-0456-4c36-bc57-9c5ed0d81184-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-gjlm4\" (UID: \"696009b0-0456-4c36-bc57-9c5ed0d81184\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-gjlm4" Jan 05 20:27:40 crc kubenswrapper[4754]: I0105 20:27:40.906932 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvln4\" (UniqueName: \"kubernetes.io/projected/696009b0-0456-4c36-bc57-9c5ed0d81184-kube-api-access-dvln4\") pod \"dnsmasq-dns-6d5b6d6b67-gjlm4\" (UID: \"696009b0-0456-4c36-bc57-9c5ed0d81184\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-gjlm4" Jan 05 20:27:40 crc kubenswrapper[4754]: I0105 20:27:40.981096 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-gjlm4" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.041631 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4530-account-create-update-6t94n" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.096789 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-gx6mr" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.110017 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-pbdm4" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.114780 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-6nvbl" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.128177 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a1c1900-32c4-4245-ae03-f71435b2259f-operator-scripts\") pod \"4a1c1900-32c4-4245-ae03-f71435b2259f\" (UID: \"4a1c1900-32c4-4245-ae03-f71435b2259f\") " Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.129208 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a1c1900-32c4-4245-ae03-f71435b2259f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4a1c1900-32c4-4245-ae03-f71435b2259f" (UID: "4a1c1900-32c4-4245-ae03-f71435b2259f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.146185 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-59a7-account-create-update-t9cmc" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.234556 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gps79\" (UniqueName: \"kubernetes.io/projected/1ac29b87-268c-4066-a428-db4d5b5b595f-kube-api-access-gps79\") pod \"1ac29b87-268c-4066-a428-db4d5b5b595f\" (UID: \"1ac29b87-268c-4066-a428-db4d5b5b595f\") " Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.234854 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjdc5\" (UniqueName: \"kubernetes.io/projected/85825127-1783-4a9b-8205-60891099a53e-kube-api-access-gjdc5\") pod \"85825127-1783-4a9b-8205-60891099a53e\" (UID: \"85825127-1783-4a9b-8205-60891099a53e\") " Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.234933 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjf76\" (UniqueName: \"kubernetes.io/projected/495f6c79-65c2-4b10-8695-1d11ee63fa93-kube-api-access-tjf76\") pod \"495f6c79-65c2-4b10-8695-1d11ee63fa93\" (UID: \"495f6c79-65c2-4b10-8695-1d11ee63fa93\") " Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.234978 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rh99m\" (UniqueName: \"kubernetes.io/projected/4a1c1900-32c4-4245-ae03-f71435b2259f-kube-api-access-rh99m\") pod \"4a1c1900-32c4-4245-ae03-f71435b2259f\" (UID: \"4a1c1900-32c4-4245-ae03-f71435b2259f\") " Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.235024 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85825127-1783-4a9b-8205-60891099a53e-operator-scripts\") pod \"85825127-1783-4a9b-8205-60891099a53e\" (UID: \"85825127-1783-4a9b-8205-60891099a53e\") " Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.235124 4754 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ac29b87-268c-4066-a428-db4d5b5b595f-operator-scripts\") pod \"1ac29b87-268c-4066-a428-db4d5b5b595f\" (UID: \"1ac29b87-268c-4066-a428-db4d5b5b595f\") " Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.235142 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/495f6c79-65c2-4b10-8695-1d11ee63fa93-operator-scripts\") pod \"495f6c79-65c2-4b10-8695-1d11ee63fa93\" (UID: \"495f6c79-65c2-4b10-8695-1d11ee63fa93\") " Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.235710 4754 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a1c1900-32c4-4245-ae03-f71435b2259f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.236333 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/495f6c79-65c2-4b10-8695-1d11ee63fa93-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "495f6c79-65c2-4b10-8695-1d11ee63fa93" (UID: "495f6c79-65c2-4b10-8695-1d11ee63fa93"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.237118 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85825127-1783-4a9b-8205-60891099a53e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "85825127-1783-4a9b-8205-60891099a53e" (UID: "85825127-1783-4a9b-8205-60891099a53e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.237521 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ac29b87-268c-4066-a428-db4d5b5b595f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1ac29b87-268c-4066-a428-db4d5b5b595f" (UID: "1ac29b87-268c-4066-a428-db4d5b5b595f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.244897 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85825127-1783-4a9b-8205-60891099a53e-kube-api-access-gjdc5" (OuterVolumeSpecName: "kube-api-access-gjdc5") pod "85825127-1783-4a9b-8205-60891099a53e" (UID: "85825127-1783-4a9b-8205-60891099a53e"). InnerVolumeSpecName "kube-api-access-gjdc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.245187 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a1c1900-32c4-4245-ae03-f71435b2259f-kube-api-access-rh99m" (OuterVolumeSpecName: "kube-api-access-rh99m") pod "4a1c1900-32c4-4245-ae03-f71435b2259f" (UID: "4a1c1900-32c4-4245-ae03-f71435b2259f"). InnerVolumeSpecName "kube-api-access-rh99m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.245984 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/495f6c79-65c2-4b10-8695-1d11ee63fa93-kube-api-access-tjf76" (OuterVolumeSpecName: "kube-api-access-tjf76") pod "495f6c79-65c2-4b10-8695-1d11ee63fa93" (UID: "495f6c79-65c2-4b10-8695-1d11ee63fa93"). InnerVolumeSpecName "kube-api-access-tjf76". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.247682 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ac29b87-268c-4066-a428-db4d5b5b595f-kube-api-access-gps79" (OuterVolumeSpecName: "kube-api-access-gps79") pod "1ac29b87-268c-4066-a428-db4d5b5b595f" (UID: "1ac29b87-268c-4066-a428-db4d5b5b595f"). InnerVolumeSpecName "kube-api-access-gps79". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.255219 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-whhvh" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.260159 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-225d-account-create-update-9f5mg" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.289920 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-81e1-account-create-update-dqq7l" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.310237 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-81e1-account-create-update-dqq7l" event={"ID":"6bab2e4a-be19-46b8-8f52-f1b59b44c4f6","Type":"ContainerDied","Data":"db576639b91ca170483e0727411b00ac21bf685ad520cde9ed483a4eb3d4ddb5"} Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.310277 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db576639b91ca170483e0727411b00ac21bf685ad520cde9ed483a4eb3d4ddb5" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.310358 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-81e1-account-create-update-dqq7l" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.313033 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59a7-account-create-update-t9cmc" event={"ID":"656e1694-0fbd-4f0a-b1cd-50594514afd3","Type":"ContainerDied","Data":"ea27d680ced9b48cc05f70815b2c7631fb3aa99f1d0aeca9baf34976ebf811af"} Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.313074 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea27d680ced9b48cc05f70815b2c7631fb3aa99f1d0aeca9baf34976ebf811af" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.313155 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-59a7-account-create-update-t9cmc" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.316474 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"652366fc-9032-455e-9e13-b71fd3ff76e3","Type":"ContainerStarted","Data":"8b15f1ee4e8230ce3975b5f5eaf8c98ca69183e164609519ed0c9afe753a6d17"} Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.318939 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-pbdm4" event={"ID":"85825127-1783-4a9b-8205-60891099a53e","Type":"ContainerDied","Data":"f7995519822e4e1c91d86eedee8a6d665043df46d38bd30890a39e6f07466a8b"} Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.318981 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7995519822e4e1c91d86eedee8a6d665043df46d38bd30890a39e6f07466a8b" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.318991 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-pbdm4" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.325148 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-whhvh" event={"ID":"5e6372e5-4136-46a8-aee5-181b6277ad6b","Type":"ContainerDied","Data":"bce5e874ea63bd9592cdeff406b6c7a30f79270ac631af8150ed0537e99c0eb9"} Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.325180 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bce5e874ea63bd9592cdeff406b6c7a30f79270ac631af8150ed0537e99c0eb9" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.325250 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-whhvh" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.334749 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4530-account-create-update-6t94n" event={"ID":"495f6c79-65c2-4b10-8695-1d11ee63fa93","Type":"ContainerDied","Data":"f5cd1021e9561118a07f8732cd431fba6fb991bc562014929aed57505e88cf1d"} Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.334791 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5cd1021e9561118a07f8732cd431fba6fb991bc562014929aed57505e88cf1d" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.334851 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-4530-account-create-update-6t94n" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.339264 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vm5q9\" (UniqueName: \"kubernetes.io/projected/656e1694-0fbd-4f0a-b1cd-50594514afd3-kube-api-access-vm5q9\") pod \"656e1694-0fbd-4f0a-b1cd-50594514afd3\" (UID: \"656e1694-0fbd-4f0a-b1cd-50594514afd3\") " Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.341409 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/656e1694-0fbd-4f0a-b1cd-50594514afd3-operator-scripts\") pod \"656e1694-0fbd-4f0a-b1cd-50594514afd3\" (UID: \"656e1694-0fbd-4f0a-b1cd-50594514afd3\") " Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.342206 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-225d-account-create-update-9f5mg" event={"ID":"be1d6939-e90a-46a4-87f0-05abc48224b9","Type":"ContainerDied","Data":"b87a0f04c46ef38eb12f8a918aaf5e9f5547b70b36500f6cddf761d7a95e8ec8"} Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.342254 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b87a0f04c46ef38eb12f8a918aaf5e9f5547b70b36500f6cddf761d7a95e8ec8" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.342199 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/656e1694-0fbd-4f0a-b1cd-50594514afd3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "656e1694-0fbd-4f0a-b1cd-50594514afd3" (UID: "656e1694-0fbd-4f0a-b1cd-50594514afd3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.342482 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-225d-account-create-update-9f5mg" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.348474 4754 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85825127-1783-4a9b-8205-60891099a53e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.348508 4754 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ac29b87-268c-4066-a428-db4d5b5b595f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.348522 4754 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/495f6c79-65c2-4b10-8695-1d11ee63fa93-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.348543 4754 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/656e1694-0fbd-4f0a-b1cd-50594514afd3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.348559 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gps79\" (UniqueName: \"kubernetes.io/projected/1ac29b87-268c-4066-a428-db4d5b5b595f-kube-api-access-gps79\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.348575 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjdc5\" (UniqueName: \"kubernetes.io/projected/85825127-1783-4a9b-8205-60891099a53e-kube-api-access-gjdc5\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.348590 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjf76\" (UniqueName: \"kubernetes.io/projected/495f6c79-65c2-4b10-8695-1d11ee63fa93-kube-api-access-tjf76\") on node \"crc\" 
DevicePath \"\"" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.348610 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rh99m\" (UniqueName: \"kubernetes.io/projected/4a1c1900-32c4-4245-ae03-f71435b2259f-kube-api-access-rh99m\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.349987 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-gx6mr" event={"ID":"4a1c1900-32c4-4245-ae03-f71435b2259f","Type":"ContainerDied","Data":"28acba13008bb270da63ed511c0e3a285a957af1042d37fdb0fad984ac5f4c99"} Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.350031 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28acba13008bb270da63ed511c0e3a285a957af1042d37fdb0fad984ac5f4c99" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.350157 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-gx6mr" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.351489 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/656e1694-0fbd-4f0a-b1cd-50594514afd3-kube-api-access-vm5q9" (OuterVolumeSpecName: "kube-api-access-vm5q9") pod "656e1694-0fbd-4f0a-b1cd-50594514afd3" (UID: "656e1694-0fbd-4f0a-b1cd-50594514afd3"). InnerVolumeSpecName "kube-api-access-vm5q9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.352419 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-6nvbl" event={"ID":"1ac29b87-268c-4066-a428-db4d5b5b595f","Type":"ContainerDied","Data":"e8737e6fc16f4d763551f8d90b776f13fdc6be44a5799100c20b918d13b87d95"} Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.352462 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8737e6fc16f4d763551f8d90b776f13fdc6be44a5799100c20b918d13b87d95" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.352477 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-6nvbl" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.449734 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmv9h\" (UniqueName: \"kubernetes.io/projected/6bab2e4a-be19-46b8-8f52-f1b59b44c4f6-kube-api-access-jmv9h\") pod \"6bab2e4a-be19-46b8-8f52-f1b59b44c4f6\" (UID: \"6bab2e4a-be19-46b8-8f52-f1b59b44c4f6\") " Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.450161 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bab2e4a-be19-46b8-8f52-f1b59b44c4f6-operator-scripts\") pod \"6bab2e4a-be19-46b8-8f52-f1b59b44c4f6\" (UID: \"6bab2e4a-be19-46b8-8f52-f1b59b44c4f6\") " Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.450217 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txp5m\" (UniqueName: \"kubernetes.io/projected/5e6372e5-4136-46a8-aee5-181b6277ad6b-kube-api-access-txp5m\") pod \"5e6372e5-4136-46a8-aee5-181b6277ad6b\" (UID: \"5e6372e5-4136-46a8-aee5-181b6277ad6b\") " Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.450319 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e6372e5-4136-46a8-aee5-181b6277ad6b-operator-scripts\") pod \"5e6372e5-4136-46a8-aee5-181b6277ad6b\" (UID: \"5e6372e5-4136-46a8-aee5-181b6277ad6b\") " Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.450351 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be1d6939-e90a-46a4-87f0-05abc48224b9-operator-scripts\") pod \"be1d6939-e90a-46a4-87f0-05abc48224b9\" (UID: \"be1d6939-e90a-46a4-87f0-05abc48224b9\") " Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.450392 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x869k\" (UniqueName: \"kubernetes.io/projected/be1d6939-e90a-46a4-87f0-05abc48224b9-kube-api-access-x869k\") pod \"be1d6939-e90a-46a4-87f0-05abc48224b9\" (UID: \"be1d6939-e90a-46a4-87f0-05abc48224b9\") " Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.450576 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bab2e4a-be19-46b8-8f52-f1b59b44c4f6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6bab2e4a-be19-46b8-8f52-f1b59b44c4f6" (UID: "6bab2e4a-be19-46b8-8f52-f1b59b44c4f6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.450787 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be1d6939-e90a-46a4-87f0-05abc48224b9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "be1d6939-e90a-46a4-87f0-05abc48224b9" (UID: "be1d6939-e90a-46a4-87f0-05abc48224b9"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.450810 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e6372e5-4136-46a8-aee5-181b6277ad6b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5e6372e5-4136-46a8-aee5-181b6277ad6b" (UID: "5e6372e5-4136-46a8-aee5-181b6277ad6b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.451013 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vm5q9\" (UniqueName: \"kubernetes.io/projected/656e1694-0fbd-4f0a-b1cd-50594514afd3-kube-api-access-vm5q9\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.451036 4754 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e6372e5-4136-46a8-aee5-181b6277ad6b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.451045 4754 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be1d6939-e90a-46a4-87f0-05abc48224b9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.451055 4754 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bab2e4a-be19-46b8-8f52-f1b59b44c4f6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.454223 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e6372e5-4136-46a8-aee5-181b6277ad6b-kube-api-access-txp5m" (OuterVolumeSpecName: "kube-api-access-txp5m") pod "5e6372e5-4136-46a8-aee5-181b6277ad6b" (UID: "5e6372e5-4136-46a8-aee5-181b6277ad6b"). 
InnerVolumeSpecName "kube-api-access-txp5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.454414 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bab2e4a-be19-46b8-8f52-f1b59b44c4f6-kube-api-access-jmv9h" (OuterVolumeSpecName: "kube-api-access-jmv9h") pod "6bab2e4a-be19-46b8-8f52-f1b59b44c4f6" (UID: "6bab2e4a-be19-46b8-8f52-f1b59b44c4f6"). InnerVolumeSpecName "kube-api-access-jmv9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.457492 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be1d6939-e90a-46a4-87f0-05abc48224b9-kube-api-access-x869k" (OuterVolumeSpecName: "kube-api-access-x869k") pod "be1d6939-e90a-46a4-87f0-05abc48224b9" (UID: "be1d6939-e90a-46a4-87f0-05abc48224b9"). InnerVolumeSpecName "kube-api-access-x869k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.552967 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txp5m\" (UniqueName: \"kubernetes.io/projected/5e6372e5-4136-46a8-aee5-181b6277ad6b-kube-api-access-txp5m\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.553020 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x869k\" (UniqueName: \"kubernetes.io/projected/be1d6939-e90a-46a4-87f0-05abc48224b9-kube-api-access-x869k\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.553039 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmv9h\" (UniqueName: \"kubernetes.io/projected/6bab2e4a-be19-46b8-8f52-f1b59b44c4f6-kube-api-access-jmv9h\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:43 crc kubenswrapper[4754]: I0105 20:27:43.627655 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-6d5b6d6b67-gjlm4"] Jan 05 20:27:43 crc kubenswrapper[4754]: W0105 20:27:43.656404 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod696009b0_0456_4c36_bc57_9c5ed0d81184.slice/crio-a4080f674513a06ab4922e6794eb4090e524e61781a88eb65da60375ca4c45bd WatchSource:0}: Error finding container a4080f674513a06ab4922e6794eb4090e524e61781a88eb65da60375ca4c45bd: Status 404 returned error can't find the container with id a4080f674513a06ab4922e6794eb4090e524e61781a88eb65da60375ca4c45bd Jan 05 20:27:44 crc kubenswrapper[4754]: I0105 20:27:44.373893 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"652366fc-9032-455e-9e13-b71fd3ff76e3","Type":"ContainerStarted","Data":"ae4b15dfe3b4406fa61cd47db87d8bd53b8736ddd72e34f10e8f54d37a7183ef"} Jan 05 20:27:44 crc kubenswrapper[4754]: I0105 20:27:44.379627 4754 generic.go:334] "Generic (PLEG): container finished" podID="696009b0-0456-4c36-bc57-9c5ed0d81184" containerID="4a49a1ce2215b4b42517c48b3b2f14c617dc624019d8df4ca8197b7a297bc47e" exitCode=0 Jan 05 20:27:44 crc kubenswrapper[4754]: I0105 20:27:44.379698 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-gjlm4" event={"ID":"696009b0-0456-4c36-bc57-9c5ed0d81184","Type":"ContainerDied","Data":"4a49a1ce2215b4b42517c48b3b2f14c617dc624019d8df4ca8197b7a297bc47e"} Jan 05 20:27:44 crc kubenswrapper[4754]: I0105 20:27:44.379728 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-gjlm4" event={"ID":"696009b0-0456-4c36-bc57-9c5ed0d81184","Type":"ContainerStarted","Data":"a4080f674513a06ab4922e6794eb4090e524e61781a88eb65da60375ca4c45bd"} Jan 05 20:27:44 crc kubenswrapper[4754]: I0105 20:27:44.389855 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-zhqf4" 
event={"ID":"687ba708-6c04-4435-9acd-76dfdb4311e2","Type":"ContainerStarted","Data":"704b872819837b61c2ef52843447960b6f9c49f83f1718166f553ba4e1602e04"} Jan 05 20:27:44 crc kubenswrapper[4754]: I0105 20:27:44.412198 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=21.412180406 podStartE2EDuration="21.412180406s" podCreationTimestamp="2026-01-05 20:27:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:27:44.39934659 +0000 UTC m=+1351.108530454" watchObservedRunningTime="2026-01-05 20:27:44.412180406 +0000 UTC m=+1351.121364280" Jan 05 20:27:44 crc kubenswrapper[4754]: I0105 20:27:44.438520 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-zhqf4" podStartSLOduration=7.266034823 podStartE2EDuration="15.438501715s" podCreationTimestamp="2026-01-05 20:27:29 +0000 UTC" firstStartedPulling="2026-01-05 20:27:34.965096607 +0000 UTC m=+1341.674280471" lastFinishedPulling="2026-01-05 20:27:43.137563489 +0000 UTC m=+1349.846747363" observedRunningTime="2026-01-05 20:27:44.421214812 +0000 UTC m=+1351.130398686" watchObservedRunningTime="2026-01-05 20:27:44.438501715 +0000 UTC m=+1351.147685589" Jan 05 20:27:45 crc kubenswrapper[4754]: I0105 20:27:45.403585 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-gjlm4" event={"ID":"696009b0-0456-4c36-bc57-9c5ed0d81184","Type":"ContainerStarted","Data":"1a5fe86202b5f228970f16f8a69574c04c19c57c1ba932d6380d4760d82170e4"} Jan 05 20:27:45 crc kubenswrapper[4754]: I0105 20:27:45.405251 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d5b6d6b67-gjlm4" Jan 05 20:27:45 crc kubenswrapper[4754]: I0105 20:27:45.432819 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-6d5b6d6b67-gjlm4" podStartSLOduration=5.432800414 podStartE2EDuration="5.432800414s" podCreationTimestamp="2026-01-05 20:27:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:27:45.428282026 +0000 UTC m=+1352.137465920" watchObservedRunningTime="2026-01-05 20:27:45.432800414 +0000 UTC m=+1352.141984288" Jan 05 20:27:48 crc kubenswrapper[4754]: I0105 20:27:48.109244 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 20:27:48 crc kubenswrapper[4754]: I0105 20:27:48.109580 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 20:27:48 crc kubenswrapper[4754]: I0105 20:27:48.109623 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" Jan 05 20:27:48 crc kubenswrapper[4754]: I0105 20:27:48.110458 4754 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"be55096ff3dda2956a1dfef42279f31ee70ee0a455c9cf669941a07e6ba339b6"} pod="openshift-machine-config-operator/machine-config-daemon-pkzls" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 05 20:27:48 crc kubenswrapper[4754]: I0105 20:27:48.110504 4754 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" containerID="cri-o://be55096ff3dda2956a1dfef42279f31ee70ee0a455c9cf669941a07e6ba339b6" gracePeriod=600 Jan 05 20:27:48 crc kubenswrapper[4754]: I0105 20:27:48.440012 4754 generic.go:334] "Generic (PLEG): container finished" podID="1ce145f2-f010-4086-963c-23e68ff9e280" containerID="be55096ff3dda2956a1dfef42279f31ee70ee0a455c9cf669941a07e6ba339b6" exitCode=0 Jan 05 20:27:48 crc kubenswrapper[4754]: I0105 20:27:48.440163 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" event={"ID":"1ce145f2-f010-4086-963c-23e68ff9e280","Type":"ContainerDied","Data":"be55096ff3dda2956a1dfef42279f31ee70ee0a455c9cf669941a07e6ba339b6"} Jan 05 20:27:48 crc kubenswrapper[4754]: I0105 20:27:48.440387 4754 scope.go:117] "RemoveContainer" containerID="856c52ba9bf6dc0e2c3da3888c3ef87d25ee026b1354f7636e400dbe3c2d5919" Jan 05 20:27:49 crc kubenswrapper[4754]: I0105 20:27:49.347515 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 05 20:27:49 crc kubenswrapper[4754]: I0105 20:27:49.461662 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" event={"ID":"1ce145f2-f010-4086-963c-23e68ff9e280","Type":"ContainerStarted","Data":"1ca93b3e456962380f07853ce3cbf0c6bfbaabb69bd3827ae3e68631c1bd1572"} Jan 05 20:27:50 crc kubenswrapper[4754]: I0105 20:27:50.982693 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d5b6d6b67-gjlm4" Jan 05 20:27:51 crc kubenswrapper[4754]: I0105 20:27:51.081279 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-8t74b"] Jan 05 20:27:51 crc kubenswrapper[4754]: I0105 20:27:51.081655 4754 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-8t74b" podUID="4983104d-604b-476c-94bf-b92cf0a887b4" containerName="dnsmasq-dns" containerID="cri-o://c0595bbd15750527785046a814e9616ce55865a4b6d1ef0785ff18150e721542" gracePeriod=10 Jan 05 20:27:52 crc kubenswrapper[4754]: I0105 20:27:52.636905 4754 generic.go:334] "Generic (PLEG): container finished" podID="4983104d-604b-476c-94bf-b92cf0a887b4" containerID="c0595bbd15750527785046a814e9616ce55865a4b6d1ef0785ff18150e721542" exitCode=0 Jan 05 20:27:52 crc kubenswrapper[4754]: I0105 20:27:52.636974 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-8t74b" event={"ID":"4983104d-604b-476c-94bf-b92cf0a887b4","Type":"ContainerDied","Data":"c0595bbd15750527785046a814e9616ce55865a4b6d1ef0785ff18150e721542"} Jan 05 20:27:52 crc kubenswrapper[4754]: I0105 20:27:52.637441 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-8t74b" event={"ID":"4983104d-604b-476c-94bf-b92cf0a887b4","Type":"ContainerDied","Data":"ffa741472627265fc8771cc25a1de2a8bf1f9a3d23702c6c8bd819facf13a112"} Jan 05 20:27:52 crc kubenswrapper[4754]: I0105 20:27:52.637466 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffa741472627265fc8771cc25a1de2a8bf1f9a3d23702c6c8bd819facf13a112" Jan 05 20:27:52 crc kubenswrapper[4754]: I0105 20:27:52.643332 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-8t74b" Jan 05 20:27:52 crc kubenswrapper[4754]: I0105 20:27:52.728142 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wcps\" (UniqueName: \"kubernetes.io/projected/4983104d-604b-476c-94bf-b92cf0a887b4-kube-api-access-6wcps\") pod \"4983104d-604b-476c-94bf-b92cf0a887b4\" (UID: \"4983104d-604b-476c-94bf-b92cf0a887b4\") " Jan 05 20:27:52 crc kubenswrapper[4754]: I0105 20:27:52.728770 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4983104d-604b-476c-94bf-b92cf0a887b4-config\") pod \"4983104d-604b-476c-94bf-b92cf0a887b4\" (UID: \"4983104d-604b-476c-94bf-b92cf0a887b4\") " Jan 05 20:27:52 crc kubenswrapper[4754]: I0105 20:27:52.728843 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4983104d-604b-476c-94bf-b92cf0a887b4-ovsdbserver-sb\") pod \"4983104d-604b-476c-94bf-b92cf0a887b4\" (UID: \"4983104d-604b-476c-94bf-b92cf0a887b4\") " Jan 05 20:27:52 crc kubenswrapper[4754]: I0105 20:27:52.728958 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4983104d-604b-476c-94bf-b92cf0a887b4-ovsdbserver-nb\") pod \"4983104d-604b-476c-94bf-b92cf0a887b4\" (UID: \"4983104d-604b-476c-94bf-b92cf0a887b4\") " Jan 05 20:27:52 crc kubenswrapper[4754]: I0105 20:27:52.729023 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4983104d-604b-476c-94bf-b92cf0a887b4-dns-svc\") pod \"4983104d-604b-476c-94bf-b92cf0a887b4\" (UID: \"4983104d-604b-476c-94bf-b92cf0a887b4\") " Jan 05 20:27:52 crc kubenswrapper[4754]: I0105 20:27:52.735846 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/4983104d-604b-476c-94bf-b92cf0a887b4-kube-api-access-6wcps" (OuterVolumeSpecName: "kube-api-access-6wcps") pod "4983104d-604b-476c-94bf-b92cf0a887b4" (UID: "4983104d-604b-476c-94bf-b92cf0a887b4"). InnerVolumeSpecName "kube-api-access-6wcps". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:27:52 crc kubenswrapper[4754]: I0105 20:27:52.785084 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4983104d-604b-476c-94bf-b92cf0a887b4-config" (OuterVolumeSpecName: "config") pod "4983104d-604b-476c-94bf-b92cf0a887b4" (UID: "4983104d-604b-476c-94bf-b92cf0a887b4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:27:52 crc kubenswrapper[4754]: I0105 20:27:52.790138 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4983104d-604b-476c-94bf-b92cf0a887b4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4983104d-604b-476c-94bf-b92cf0a887b4" (UID: "4983104d-604b-476c-94bf-b92cf0a887b4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:27:52 crc kubenswrapper[4754]: I0105 20:27:52.803271 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4983104d-604b-476c-94bf-b92cf0a887b4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4983104d-604b-476c-94bf-b92cf0a887b4" (UID: "4983104d-604b-476c-94bf-b92cf0a887b4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:27:52 crc kubenswrapper[4754]: I0105 20:27:52.803860 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4983104d-604b-476c-94bf-b92cf0a887b4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4983104d-604b-476c-94bf-b92cf0a887b4" (UID: "4983104d-604b-476c-94bf-b92cf0a887b4"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:27:52 crc kubenswrapper[4754]: I0105 20:27:52.831470 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4983104d-604b-476c-94bf-b92cf0a887b4-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:52 crc kubenswrapper[4754]: I0105 20:27:52.831516 4754 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4983104d-604b-476c-94bf-b92cf0a887b4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:52 crc kubenswrapper[4754]: I0105 20:27:52.831534 4754 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4983104d-604b-476c-94bf-b92cf0a887b4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:52 crc kubenswrapper[4754]: I0105 20:27:52.831547 4754 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4983104d-604b-476c-94bf-b92cf0a887b4-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:52 crc kubenswrapper[4754]: I0105 20:27:52.831560 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wcps\" (UniqueName: \"kubernetes.io/projected/4983104d-604b-476c-94bf-b92cf0a887b4-kube-api-access-6wcps\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:53 crc kubenswrapper[4754]: I0105 20:27:53.662220 4754 generic.go:334] "Generic (PLEG): container finished" podID="687ba708-6c04-4435-9acd-76dfdb4311e2" containerID="704b872819837b61c2ef52843447960b6f9c49f83f1718166f553ba4e1602e04" exitCode=0 Jan 05 20:27:53 crc kubenswrapper[4754]: I0105 20:27:53.662279 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-zhqf4" event={"ID":"687ba708-6c04-4435-9acd-76dfdb4311e2","Type":"ContainerDied","Data":"704b872819837b61c2ef52843447960b6f9c49f83f1718166f553ba4e1602e04"} Jan 05 20:27:53 crc kubenswrapper[4754]: I0105 
20:27:53.662923 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-8t74b" Jan 05 20:27:53 crc kubenswrapper[4754]: I0105 20:27:53.734379 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-8t74b"] Jan 05 20:27:53 crc kubenswrapper[4754]: I0105 20:27:53.745873 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-8t74b"] Jan 05 20:27:54 crc kubenswrapper[4754]: I0105 20:27:54.347373 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 05 20:27:54 crc kubenswrapper[4754]: I0105 20:27:54.355495 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 05 20:27:54 crc kubenswrapper[4754]: I0105 20:27:54.681569 4754 generic.go:334] "Generic (PLEG): container finished" podID="08106021-9f77-4a54-8ec2-de2bfe4db63c" containerID="52fe1a81686ee298e7aaaff20bb3e70e2d827582442d4c382c1a2f6d57b860a0" exitCode=0 Jan 05 20:27:54 crc kubenswrapper[4754]: I0105 20:27:54.681609 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-j8qkw" event={"ID":"08106021-9f77-4a54-8ec2-de2bfe4db63c","Type":"ContainerDied","Data":"52fe1a81686ee298e7aaaff20bb3e70e2d827582442d4c382c1a2f6d57b860a0"} Jan 05 20:27:54 crc kubenswrapper[4754]: I0105 20:27:54.692095 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 05 20:27:55 crc kubenswrapper[4754]: I0105 20:27:55.232973 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-zhqf4" Jan 05 20:27:55 crc kubenswrapper[4754]: I0105 20:27:55.404951 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/687ba708-6c04-4435-9acd-76dfdb4311e2-combined-ca-bundle\") pod \"687ba708-6c04-4435-9acd-76dfdb4311e2\" (UID: \"687ba708-6c04-4435-9acd-76dfdb4311e2\") " Jan 05 20:27:55 crc kubenswrapper[4754]: I0105 20:27:55.405165 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/687ba708-6c04-4435-9acd-76dfdb4311e2-config-data\") pod \"687ba708-6c04-4435-9acd-76dfdb4311e2\" (UID: \"687ba708-6c04-4435-9acd-76dfdb4311e2\") " Jan 05 20:27:55 crc kubenswrapper[4754]: I0105 20:27:55.405217 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4p8l\" (UniqueName: \"kubernetes.io/projected/687ba708-6c04-4435-9acd-76dfdb4311e2-kube-api-access-p4p8l\") pod \"687ba708-6c04-4435-9acd-76dfdb4311e2\" (UID: \"687ba708-6c04-4435-9acd-76dfdb4311e2\") " Jan 05 20:27:55 crc kubenswrapper[4754]: I0105 20:27:55.412677 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/687ba708-6c04-4435-9acd-76dfdb4311e2-kube-api-access-p4p8l" (OuterVolumeSpecName: "kube-api-access-p4p8l") pod "687ba708-6c04-4435-9acd-76dfdb4311e2" (UID: "687ba708-6c04-4435-9acd-76dfdb4311e2"). InnerVolumeSpecName "kube-api-access-p4p8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:27:55 crc kubenswrapper[4754]: I0105 20:27:55.440068 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/687ba708-6c04-4435-9acd-76dfdb4311e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "687ba708-6c04-4435-9acd-76dfdb4311e2" (UID: "687ba708-6c04-4435-9acd-76dfdb4311e2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:27:55 crc kubenswrapper[4754]: I0105 20:27:55.475188 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/687ba708-6c04-4435-9acd-76dfdb4311e2-config-data" (OuterVolumeSpecName: "config-data") pod "687ba708-6c04-4435-9acd-76dfdb4311e2" (UID: "687ba708-6c04-4435-9acd-76dfdb4311e2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:27:55 crc kubenswrapper[4754]: I0105 20:27:55.517845 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/687ba708-6c04-4435-9acd-76dfdb4311e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:55 crc kubenswrapper[4754]: I0105 20:27:55.517905 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/687ba708-6c04-4435-9acd-76dfdb4311e2-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:55 crc kubenswrapper[4754]: I0105 20:27:55.517920 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4p8l\" (UniqueName: \"kubernetes.io/projected/687ba708-6c04-4435-9acd-76dfdb4311e2-kube-api-access-p4p8l\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:55 crc kubenswrapper[4754]: I0105 20:27:55.616725 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4983104d-604b-476c-94bf-b92cf0a887b4" path="/var/lib/kubelet/pods/4983104d-604b-476c-94bf-b92cf0a887b4/volumes" Jan 05 20:27:55 crc kubenswrapper[4754]: I0105 20:27:55.692057 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-zhqf4" event={"ID":"687ba708-6c04-4435-9acd-76dfdb4311e2","Type":"ContainerDied","Data":"93140c66ccd5f0f34e8248a3ea52bb36405a0ab1b59437a9f9d5db6c5dc86870"} Jan 05 20:27:55 crc kubenswrapper[4754]: I0105 20:27:55.692141 4754 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="93140c66ccd5f0f34e8248a3ea52bb36405a0ab1b59437a9f9d5db6c5dc86870" Jan 05 20:27:55 crc kubenswrapper[4754]: I0105 20:27:55.692095 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-zhqf4" Jan 05 20:27:55 crc kubenswrapper[4754]: I0105 20:27:55.909943 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-qtvlr"] Jan 05 20:27:55 crc kubenswrapper[4754]: E0105 20:27:55.910611 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bab2e4a-be19-46b8-8f52-f1b59b44c4f6" containerName="mariadb-account-create-update" Jan 05 20:27:55 crc kubenswrapper[4754]: I0105 20:27:55.910630 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bab2e4a-be19-46b8-8f52-f1b59b44c4f6" containerName="mariadb-account-create-update" Jan 05 20:27:55 crc kubenswrapper[4754]: E0105 20:27:55.910641 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="687ba708-6c04-4435-9acd-76dfdb4311e2" containerName="keystone-db-sync" Jan 05 20:27:55 crc kubenswrapper[4754]: I0105 20:27:55.910648 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="687ba708-6c04-4435-9acd-76dfdb4311e2" containerName="keystone-db-sync" Jan 05 20:27:55 crc kubenswrapper[4754]: E0105 20:27:55.910659 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4983104d-604b-476c-94bf-b92cf0a887b4" containerName="init" Jan 05 20:27:55 crc kubenswrapper[4754]: I0105 20:27:55.910665 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="4983104d-604b-476c-94bf-b92cf0a887b4" containerName="init" Jan 05 20:27:55 crc kubenswrapper[4754]: E0105 20:27:55.910677 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="656e1694-0fbd-4f0a-b1cd-50594514afd3" containerName="mariadb-account-create-update" Jan 05 20:27:55 crc kubenswrapper[4754]: I0105 20:27:55.910682 4754 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="656e1694-0fbd-4f0a-b1cd-50594514afd3" containerName="mariadb-account-create-update" Jan 05 20:27:55 crc kubenswrapper[4754]: E0105 20:27:55.910709 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ac29b87-268c-4066-a428-db4d5b5b595f" containerName="mariadb-database-create" Jan 05 20:27:55 crc kubenswrapper[4754]: I0105 20:27:55.910715 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ac29b87-268c-4066-a428-db4d5b5b595f" containerName="mariadb-database-create" Jan 05 20:27:55 crc kubenswrapper[4754]: E0105 20:27:55.910735 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a1c1900-32c4-4245-ae03-f71435b2259f" containerName="mariadb-database-create" Jan 05 20:27:55 crc kubenswrapper[4754]: I0105 20:27:55.910742 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a1c1900-32c4-4245-ae03-f71435b2259f" containerName="mariadb-database-create" Jan 05 20:27:55 crc kubenswrapper[4754]: E0105 20:27:55.910754 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4983104d-604b-476c-94bf-b92cf0a887b4" containerName="dnsmasq-dns" Jan 05 20:27:55 crc kubenswrapper[4754]: I0105 20:27:55.910761 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="4983104d-604b-476c-94bf-b92cf0a887b4" containerName="dnsmasq-dns" Jan 05 20:27:55 crc kubenswrapper[4754]: E0105 20:27:55.910778 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e6372e5-4136-46a8-aee5-181b6277ad6b" containerName="mariadb-database-create" Jan 05 20:27:55 crc kubenswrapper[4754]: I0105 20:27:55.910787 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e6372e5-4136-46a8-aee5-181b6277ad6b" containerName="mariadb-database-create" Jan 05 20:27:55 crc kubenswrapper[4754]: E0105 20:27:55.910802 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85825127-1783-4a9b-8205-60891099a53e" containerName="mariadb-database-create" Jan 05 20:27:55 crc kubenswrapper[4754]: I0105 20:27:55.910808 
4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="85825127-1783-4a9b-8205-60891099a53e" containerName="mariadb-database-create" Jan 05 20:27:55 crc kubenswrapper[4754]: E0105 20:27:55.910816 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be1d6939-e90a-46a4-87f0-05abc48224b9" containerName="mariadb-account-create-update" Jan 05 20:27:55 crc kubenswrapper[4754]: I0105 20:27:55.910822 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="be1d6939-e90a-46a4-87f0-05abc48224b9" containerName="mariadb-account-create-update" Jan 05 20:27:55 crc kubenswrapper[4754]: E0105 20:27:55.910834 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="495f6c79-65c2-4b10-8695-1d11ee63fa93" containerName="mariadb-account-create-update" Jan 05 20:27:55 crc kubenswrapper[4754]: I0105 20:27:55.910841 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="495f6c79-65c2-4b10-8695-1d11ee63fa93" containerName="mariadb-account-create-update" Jan 05 20:27:55 crc kubenswrapper[4754]: I0105 20:27:55.911004 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ac29b87-268c-4066-a428-db4d5b5b595f" containerName="mariadb-database-create" Jan 05 20:27:55 crc kubenswrapper[4754]: I0105 20:27:55.911015 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="be1d6939-e90a-46a4-87f0-05abc48224b9" containerName="mariadb-account-create-update" Jan 05 20:27:55 crc kubenswrapper[4754]: I0105 20:27:55.911033 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e6372e5-4136-46a8-aee5-181b6277ad6b" containerName="mariadb-database-create" Jan 05 20:27:55 crc kubenswrapper[4754]: I0105 20:27:55.911041 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="85825127-1783-4a9b-8205-60891099a53e" containerName="mariadb-database-create" Jan 05 20:27:55 crc kubenswrapper[4754]: I0105 20:27:55.911052 4754 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4a1c1900-32c4-4245-ae03-f71435b2259f" containerName="mariadb-database-create" Jan 05 20:27:55 crc kubenswrapper[4754]: I0105 20:27:55.911062 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bab2e4a-be19-46b8-8f52-f1b59b44c4f6" containerName="mariadb-account-create-update" Jan 05 20:27:55 crc kubenswrapper[4754]: I0105 20:27:55.911070 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="495f6c79-65c2-4b10-8695-1d11ee63fa93" containerName="mariadb-account-create-update" Jan 05 20:27:55 crc kubenswrapper[4754]: I0105 20:27:55.911081 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="4983104d-604b-476c-94bf-b92cf0a887b4" containerName="dnsmasq-dns" Jan 05 20:27:55 crc kubenswrapper[4754]: I0105 20:27:55.911092 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="656e1694-0fbd-4f0a-b1cd-50594514afd3" containerName="mariadb-account-create-update" Jan 05 20:27:55 crc kubenswrapper[4754]: I0105 20:27:55.911100 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="687ba708-6c04-4435-9acd-76dfdb4311e2" containerName="keystone-db-sync" Jan 05 20:27:55 crc kubenswrapper[4754]: I0105 20:27:55.912190 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-qtvlr" Jan 05 20:27:55 crc kubenswrapper[4754]: I0105 20:27:55.946349 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-cjz5v"] Jan 05 20:27:55 crc kubenswrapper[4754]: I0105 20:27:55.947722 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-cjz5v" Jan 05 20:27:55 crc kubenswrapper[4754]: I0105 20:27:55.951268 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 05 20:27:55 crc kubenswrapper[4754]: I0105 20:27:55.951526 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-dkn46" Jan 05 20:27:55 crc kubenswrapper[4754]: I0105 20:27:55.951643 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 05 20:27:55 crc kubenswrapper[4754]: I0105 20:27:55.951810 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 05 20:27:55 crc kubenswrapper[4754]: I0105 20:27:55.952004 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 05 20:27:55 crc kubenswrapper[4754]: I0105 20:27:55.964137 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-qtvlr"] Jan 05 20:27:55 crc kubenswrapper[4754]: I0105 20:27:55.996700 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-cjz5v"] Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.029077 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fae22c95-306e-48f4-905e-d701b30f6768-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8c45789f-qtvlr\" (UID: \"fae22c95-306e-48f4-905e-d701b30f6768\") " pod="openstack/dnsmasq-dns-6f8c45789f-qtvlr" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.029325 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bbb4b9d3-4443-4f5d-bcde-5bcafeed2218-fernet-keys\") pod \"keystone-bootstrap-cjz5v\" (UID: \"bbb4b9d3-4443-4f5d-bcde-5bcafeed2218\") " 
pod="openstack/keystone-bootstrap-cjz5v" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.029446 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fae22c95-306e-48f4-905e-d701b30f6768-dns-swift-storage-0\") pod \"dnsmasq-dns-6f8c45789f-qtvlr\" (UID: \"fae22c95-306e-48f4-905e-d701b30f6768\") " pod="openstack/dnsmasq-dns-6f8c45789f-qtvlr" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.029472 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fae22c95-306e-48f4-905e-d701b30f6768-dns-svc\") pod \"dnsmasq-dns-6f8c45789f-qtvlr\" (UID: \"fae22c95-306e-48f4-905e-d701b30f6768\") " pod="openstack/dnsmasq-dns-6f8c45789f-qtvlr" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.029496 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb4b9d3-4443-4f5d-bcde-5bcafeed2218-combined-ca-bundle\") pod \"keystone-bootstrap-cjz5v\" (UID: \"bbb4b9d3-4443-4f5d-bcde-5bcafeed2218\") " pod="openstack/keystone-bootstrap-cjz5v" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.029510 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fae22c95-306e-48f4-905e-d701b30f6768-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8c45789f-qtvlr\" (UID: \"fae22c95-306e-48f4-905e-d701b30f6768\") " pod="openstack/dnsmasq-dns-6f8c45789f-qtvlr" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.029526 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bbb4b9d3-4443-4f5d-bcde-5bcafeed2218-credential-keys\") pod \"keystone-bootstrap-cjz5v\" (UID: 
\"bbb4b9d3-4443-4f5d-bcde-5bcafeed2218\") " pod="openstack/keystone-bootstrap-cjz5v" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.029546 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwcll\" (UniqueName: \"kubernetes.io/projected/bbb4b9d3-4443-4f5d-bcde-5bcafeed2218-kube-api-access-lwcll\") pod \"keystone-bootstrap-cjz5v\" (UID: \"bbb4b9d3-4443-4f5d-bcde-5bcafeed2218\") " pod="openstack/keystone-bootstrap-cjz5v" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.029565 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fae22c95-306e-48f4-905e-d701b30f6768-config\") pod \"dnsmasq-dns-6f8c45789f-qtvlr\" (UID: \"fae22c95-306e-48f4-905e-d701b30f6768\") " pod="openstack/dnsmasq-dns-6f8c45789f-qtvlr" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.029591 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbb4b9d3-4443-4f5d-bcde-5bcafeed2218-scripts\") pod \"keystone-bootstrap-cjz5v\" (UID: \"bbb4b9d3-4443-4f5d-bcde-5bcafeed2218\") " pod="openstack/keystone-bootstrap-cjz5v" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.029608 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptddj\" (UniqueName: \"kubernetes.io/projected/fae22c95-306e-48f4-905e-d701b30f6768-kube-api-access-ptddj\") pod \"dnsmasq-dns-6f8c45789f-qtvlr\" (UID: \"fae22c95-306e-48f4-905e-d701b30f6768\") " pod="openstack/dnsmasq-dns-6f8c45789f-qtvlr" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.029644 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbb4b9d3-4443-4f5d-bcde-5bcafeed2218-config-data\") pod \"keystone-bootstrap-cjz5v\" 
(UID: \"bbb4b9d3-4443-4f5d-bcde-5bcafeed2218\") " pod="openstack/keystone-bootstrap-cjz5v" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.045322 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-vpqrq"] Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.046727 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-vpqrq" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.055410 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.055632 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-58g9m" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.066806 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-vpqrq"] Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.141465 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fae22c95-306e-48f4-905e-d701b30f6768-dns-swift-storage-0\") pod \"dnsmasq-dns-6f8c45789f-qtvlr\" (UID: \"fae22c95-306e-48f4-905e-d701b30f6768\") " pod="openstack/dnsmasq-dns-6f8c45789f-qtvlr" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.141896 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fae22c95-306e-48f4-905e-d701b30f6768-dns-svc\") pod \"dnsmasq-dns-6f8c45789f-qtvlr\" (UID: \"fae22c95-306e-48f4-905e-d701b30f6768\") " pod="openstack/dnsmasq-dns-6f8c45789f-qtvlr" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.141938 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fae22c95-306e-48f4-905e-d701b30f6768-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8c45789f-qtvlr\" (UID: 
\"fae22c95-306e-48f4-905e-d701b30f6768\") " pod="openstack/dnsmasq-dns-6f8c45789f-qtvlr" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.141958 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb4b9d3-4443-4f5d-bcde-5bcafeed2218-combined-ca-bundle\") pod \"keystone-bootstrap-cjz5v\" (UID: \"bbb4b9d3-4443-4f5d-bcde-5bcafeed2218\") " pod="openstack/keystone-bootstrap-cjz5v" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.141975 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bbb4b9d3-4443-4f5d-bcde-5bcafeed2218-credential-keys\") pod \"keystone-bootstrap-cjz5v\" (UID: \"bbb4b9d3-4443-4f5d-bcde-5bcafeed2218\") " pod="openstack/keystone-bootstrap-cjz5v" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.141994 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwcll\" (UniqueName: \"kubernetes.io/projected/bbb4b9d3-4443-4f5d-bcde-5bcafeed2218-kube-api-access-lwcll\") pod \"keystone-bootstrap-cjz5v\" (UID: \"bbb4b9d3-4443-4f5d-bcde-5bcafeed2218\") " pod="openstack/keystone-bootstrap-cjz5v" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.142012 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fae22c95-306e-48f4-905e-d701b30f6768-config\") pod \"dnsmasq-dns-6f8c45789f-qtvlr\" (UID: \"fae22c95-306e-48f4-905e-d701b30f6768\") " pod="openstack/dnsmasq-dns-6f8c45789f-qtvlr" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.142044 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbb4b9d3-4443-4f5d-bcde-5bcafeed2218-scripts\") pod \"keystone-bootstrap-cjz5v\" (UID: \"bbb4b9d3-4443-4f5d-bcde-5bcafeed2218\") " pod="openstack/keystone-bootstrap-cjz5v" Jan 05 
20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.142064 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptddj\" (UniqueName: \"kubernetes.io/projected/fae22c95-306e-48f4-905e-d701b30f6768-kube-api-access-ptddj\") pod \"dnsmasq-dns-6f8c45789f-qtvlr\" (UID: \"fae22c95-306e-48f4-905e-d701b30f6768\") " pod="openstack/dnsmasq-dns-6f8c45789f-qtvlr" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.142088 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3521f408-9c1b-440b-b7c1-fdc7058f9eb3-combined-ca-bundle\") pod \"heat-db-sync-vpqrq\" (UID: \"3521f408-9c1b-440b-b7c1-fdc7058f9eb3\") " pod="openstack/heat-db-sync-vpqrq" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.142132 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbb4b9d3-4443-4f5d-bcde-5bcafeed2218-config-data\") pod \"keystone-bootstrap-cjz5v\" (UID: \"bbb4b9d3-4443-4f5d-bcde-5bcafeed2218\") " pod="openstack/keystone-bootstrap-cjz5v" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.142171 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fae22c95-306e-48f4-905e-d701b30f6768-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8c45789f-qtvlr\" (UID: \"fae22c95-306e-48f4-905e-d701b30f6768\") " pod="openstack/dnsmasq-dns-6f8c45789f-qtvlr" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.142186 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bbb4b9d3-4443-4f5d-bcde-5bcafeed2218-fernet-keys\") pod \"keystone-bootstrap-cjz5v\" (UID: \"bbb4b9d3-4443-4f5d-bcde-5bcafeed2218\") " pod="openstack/keystone-bootstrap-cjz5v" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.142233 4754 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hp24s\" (UniqueName: \"kubernetes.io/projected/3521f408-9c1b-440b-b7c1-fdc7058f9eb3-kube-api-access-hp24s\") pod \"heat-db-sync-vpqrq\" (UID: \"3521f408-9c1b-440b-b7c1-fdc7058f9eb3\") " pod="openstack/heat-db-sync-vpqrq" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.142264 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3521f408-9c1b-440b-b7c1-fdc7058f9eb3-config-data\") pod \"heat-db-sync-vpqrq\" (UID: \"3521f408-9c1b-440b-b7c1-fdc7058f9eb3\") " pod="openstack/heat-db-sync-vpqrq" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.142399 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-qz8zq"] Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.143159 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fae22c95-306e-48f4-905e-d701b30f6768-dns-swift-storage-0\") pod \"dnsmasq-dns-6f8c45789f-qtvlr\" (UID: \"fae22c95-306e-48f4-905e-d701b30f6768\") " pod="openstack/dnsmasq-dns-6f8c45789f-qtvlr" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.143798 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-qz8zq" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.144051 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fae22c95-306e-48f4-905e-d701b30f6768-dns-svc\") pod \"dnsmasq-dns-6f8c45789f-qtvlr\" (UID: \"fae22c95-306e-48f4-905e-d701b30f6768\") " pod="openstack/dnsmasq-dns-6f8c45789f-qtvlr" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.149943 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fae22c95-306e-48f4-905e-d701b30f6768-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8c45789f-qtvlr\" (UID: \"fae22c95-306e-48f4-905e-d701b30f6768\") " pod="openstack/dnsmasq-dns-6f8c45789f-qtvlr" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.155102 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fae22c95-306e-48f4-905e-d701b30f6768-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8c45789f-qtvlr\" (UID: \"fae22c95-306e-48f4-905e-d701b30f6768\") " pod="openstack/dnsmasq-dns-6f8c45789f-qtvlr" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.159463 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.159630 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-j7kml" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.160104 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.161178 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fae22c95-306e-48f4-905e-d701b30f6768-config\") pod \"dnsmasq-dns-6f8c45789f-qtvlr\" (UID: 
\"fae22c95-306e-48f4-905e-d701b30f6768\") " pod="openstack/dnsmasq-dns-6f8c45789f-qtvlr" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.177071 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb4b9d3-4443-4f5d-bcde-5bcafeed2218-combined-ca-bundle\") pod \"keystone-bootstrap-cjz5v\" (UID: \"bbb4b9d3-4443-4f5d-bcde-5bcafeed2218\") " pod="openstack/keystone-bootstrap-cjz5v" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.187579 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptddj\" (UniqueName: \"kubernetes.io/projected/fae22c95-306e-48f4-905e-d701b30f6768-kube-api-access-ptddj\") pod \"dnsmasq-dns-6f8c45789f-qtvlr\" (UID: \"fae22c95-306e-48f4-905e-d701b30f6768\") " pod="openstack/dnsmasq-dns-6f8c45789f-qtvlr" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.188899 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-p84n5"] Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.190249 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-p84n5" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.191093 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bbb4b9d3-4443-4f5d-bcde-5bcafeed2218-credential-keys\") pod \"keystone-bootstrap-cjz5v\" (UID: \"bbb4b9d3-4443-4f5d-bcde-5bcafeed2218\") " pod="openstack/keystone-bootstrap-cjz5v" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.205059 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbb4b9d3-4443-4f5d-bcde-5bcafeed2218-config-data\") pod \"keystone-bootstrap-cjz5v\" (UID: \"bbb4b9d3-4443-4f5d-bcde-5bcafeed2218\") " pod="openstack/keystone-bootstrap-cjz5v" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.207793 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.208135 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-x49xt" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.209229 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.218248 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbb4b9d3-4443-4f5d-bcde-5bcafeed2218-scripts\") pod \"keystone-bootstrap-cjz5v\" (UID: \"bbb4b9d3-4443-4f5d-bcde-5bcafeed2218\") " pod="openstack/keystone-bootstrap-cjz5v" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.227039 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-qz8zq"] Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.227711 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwcll\" (UniqueName: 
\"kubernetes.io/projected/bbb4b9d3-4443-4f5d-bcde-5bcafeed2218-kube-api-access-lwcll\") pod \"keystone-bootstrap-cjz5v\" (UID: \"bbb4b9d3-4443-4f5d-bcde-5bcafeed2218\") " pod="openstack/keystone-bootstrap-cjz5v" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.245037 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-qtvlr" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.248182 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ccb69ed-7f2d-4341-a474-90ac1ed5a0fd-combined-ca-bundle\") pod \"neutron-db-sync-qz8zq\" (UID: \"0ccb69ed-7f2d-4341-a474-90ac1ed5a0fd\") " pod="openstack/neutron-db-sync-qz8zq" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.248313 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0ccb69ed-7f2d-4341-a474-90ac1ed5a0fd-config\") pod \"neutron-db-sync-qz8zq\" (UID: \"0ccb69ed-7f2d-4341-a474-90ac1ed5a0fd\") " pod="openstack/neutron-db-sync-qz8zq" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.248413 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hp24s\" (UniqueName: \"kubernetes.io/projected/3521f408-9c1b-440b-b7c1-fdc7058f9eb3-kube-api-access-hp24s\") pod \"heat-db-sync-vpqrq\" (UID: \"3521f408-9c1b-440b-b7c1-fdc7058f9eb3\") " pod="openstack/heat-db-sync-vpqrq" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.248499 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3521f408-9c1b-440b-b7c1-fdc7058f9eb3-config-data\") pod \"heat-db-sync-vpqrq\" (UID: \"3521f408-9c1b-440b-b7c1-fdc7058f9eb3\") " pod="openstack/heat-db-sync-vpqrq" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.248612 4754 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx2vn\" (UniqueName: \"kubernetes.io/projected/0ccb69ed-7f2d-4341-a474-90ac1ed5a0fd-kube-api-access-hx2vn\") pod \"neutron-db-sync-qz8zq\" (UID: \"0ccb69ed-7f2d-4341-a474-90ac1ed5a0fd\") " pod="openstack/neutron-db-sync-qz8zq" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.248884 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3521f408-9c1b-440b-b7c1-fdc7058f9eb3-combined-ca-bundle\") pod \"heat-db-sync-vpqrq\" (UID: \"3521f408-9c1b-440b-b7c1-fdc7058f9eb3\") " pod="openstack/heat-db-sync-vpqrq" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.267855 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3521f408-9c1b-440b-b7c1-fdc7058f9eb3-combined-ca-bundle\") pod \"heat-db-sync-vpqrq\" (UID: \"3521f408-9c1b-440b-b7c1-fdc7058f9eb3\") " pod="openstack/heat-db-sync-vpqrq" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.269447 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3521f408-9c1b-440b-b7c1-fdc7058f9eb3-config-data\") pod \"heat-db-sync-vpqrq\" (UID: \"3521f408-9c1b-440b-b7c1-fdc7058f9eb3\") " pod="openstack/heat-db-sync-vpqrq" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.288855 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hp24s\" (UniqueName: \"kubernetes.io/projected/3521f408-9c1b-440b-b7c1-fdc7058f9eb3-kube-api-access-hp24s\") pod \"heat-db-sync-vpqrq\" (UID: \"3521f408-9c1b-440b-b7c1-fdc7058f9eb3\") " pod="openstack/heat-db-sync-vpqrq" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.300374 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-p84n5"] Jan 05 20:27:56 crc 
kubenswrapper[4754]: I0105 20:27:56.316948 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bbb4b9d3-4443-4f5d-bcde-5bcafeed2218-fernet-keys\") pod \"keystone-bootstrap-cjz5v\" (UID: \"bbb4b9d3-4443-4f5d-bcde-5bcafeed2218\") " pod="openstack/keystone-bootstrap-cjz5v" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.353054 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stlb6\" (UniqueName: \"kubernetes.io/projected/c19555bb-f08e-4ff7-a6d3-26615858d3f3-kube-api-access-stlb6\") pod \"cinder-db-sync-p84n5\" (UID: \"c19555bb-f08e-4ff7-a6d3-26615858d3f3\") " pod="openstack/cinder-db-sync-p84n5" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.353118 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hx2vn\" (UniqueName: \"kubernetes.io/projected/0ccb69ed-7f2d-4341-a474-90ac1ed5a0fd-kube-api-access-hx2vn\") pod \"neutron-db-sync-qz8zq\" (UID: \"0ccb69ed-7f2d-4341-a474-90ac1ed5a0fd\") " pod="openstack/neutron-db-sync-qz8zq" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.353163 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c19555bb-f08e-4ff7-a6d3-26615858d3f3-db-sync-config-data\") pod \"cinder-db-sync-p84n5\" (UID: \"c19555bb-f08e-4ff7-a6d3-26615858d3f3\") " pod="openstack/cinder-db-sync-p84n5" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.353189 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c19555bb-f08e-4ff7-a6d3-26615858d3f3-etc-machine-id\") pod \"cinder-db-sync-p84n5\" (UID: \"c19555bb-f08e-4ff7-a6d3-26615858d3f3\") " pod="openstack/cinder-db-sync-p84n5" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.353263 4754 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c19555bb-f08e-4ff7-a6d3-26615858d3f3-config-data\") pod \"cinder-db-sync-p84n5\" (UID: \"c19555bb-f08e-4ff7-a6d3-26615858d3f3\") " pod="openstack/cinder-db-sync-p84n5" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.353279 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c19555bb-f08e-4ff7-a6d3-26615858d3f3-combined-ca-bundle\") pod \"cinder-db-sync-p84n5\" (UID: \"c19555bb-f08e-4ff7-a6d3-26615858d3f3\") " pod="openstack/cinder-db-sync-p84n5" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.353312 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c19555bb-f08e-4ff7-a6d3-26615858d3f3-scripts\") pod \"cinder-db-sync-p84n5\" (UID: \"c19555bb-f08e-4ff7-a6d3-26615858d3f3\") " pod="openstack/cinder-db-sync-p84n5" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.353369 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ccb69ed-7f2d-4341-a474-90ac1ed5a0fd-combined-ca-bundle\") pod \"neutron-db-sync-qz8zq\" (UID: \"0ccb69ed-7f2d-4341-a474-90ac1ed5a0fd\") " pod="openstack/neutron-db-sync-qz8zq" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.353391 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0ccb69ed-7f2d-4341-a474-90ac1ed5a0fd-config\") pod \"neutron-db-sync-qz8zq\" (UID: \"0ccb69ed-7f2d-4341-a474-90ac1ed5a0fd\") " pod="openstack/neutron-db-sync-qz8zq" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.380752 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0ccb69ed-7f2d-4341-a474-90ac1ed5a0fd-combined-ca-bundle\") pod \"neutron-db-sync-qz8zq\" (UID: \"0ccb69ed-7f2d-4341-a474-90ac1ed5a0fd\") " pod="openstack/neutron-db-sync-qz8zq" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.383108 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0ccb69ed-7f2d-4341-a474-90ac1ed5a0fd-config\") pod \"neutron-db-sync-qz8zq\" (UID: \"0ccb69ed-7f2d-4341-a474-90ac1ed5a0fd\") " pod="openstack/neutron-db-sync-qz8zq" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.385411 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-vpqrq" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.458182 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stlb6\" (UniqueName: \"kubernetes.io/projected/c19555bb-f08e-4ff7-a6d3-26615858d3f3-kube-api-access-stlb6\") pod \"cinder-db-sync-p84n5\" (UID: \"c19555bb-f08e-4ff7-a6d3-26615858d3f3\") " pod="openstack/cinder-db-sync-p84n5" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.458266 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c19555bb-f08e-4ff7-a6d3-26615858d3f3-db-sync-config-data\") pod \"cinder-db-sync-p84n5\" (UID: \"c19555bb-f08e-4ff7-a6d3-26615858d3f3\") " pod="openstack/cinder-db-sync-p84n5" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.458301 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c19555bb-f08e-4ff7-a6d3-26615858d3f3-etc-machine-id\") pod \"cinder-db-sync-p84n5\" (UID: \"c19555bb-f08e-4ff7-a6d3-26615858d3f3\") " pod="openstack/cinder-db-sync-p84n5" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.458371 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c19555bb-f08e-4ff7-a6d3-26615858d3f3-config-data\") pod \"cinder-db-sync-p84n5\" (UID: \"c19555bb-f08e-4ff7-a6d3-26615858d3f3\") " pod="openstack/cinder-db-sync-p84n5" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.458388 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c19555bb-f08e-4ff7-a6d3-26615858d3f3-combined-ca-bundle\") pod \"cinder-db-sync-p84n5\" (UID: \"c19555bb-f08e-4ff7-a6d3-26615858d3f3\") " pod="openstack/cinder-db-sync-p84n5" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.458404 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c19555bb-f08e-4ff7-a6d3-26615858d3f3-scripts\") pod \"cinder-db-sync-p84n5\" (UID: \"c19555bb-f08e-4ff7-a6d3-26615858d3f3\") " pod="openstack/cinder-db-sync-p84n5" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.460277 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c19555bb-f08e-4ff7-a6d3-26615858d3f3-etc-machine-id\") pod \"cinder-db-sync-p84n5\" (UID: \"c19555bb-f08e-4ff7-a6d3-26615858d3f3\") " pod="openstack/cinder-db-sync-p84n5" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.476653 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-4c48n"] Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.478376 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-4c48n" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.479489 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c19555bb-f08e-4ff7-a6d3-26615858d3f3-scripts\") pod \"cinder-db-sync-p84n5\" (UID: \"c19555bb-f08e-4ff7-a6d3-26615858d3f3\") " pod="openstack/cinder-db-sync-p84n5" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.493410 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-tf8x6" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.493602 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c19555bb-f08e-4ff7-a6d3-26615858d3f3-config-data\") pod \"cinder-db-sync-p84n5\" (UID: \"c19555bb-f08e-4ff7-a6d3-26615858d3f3\") " pod="openstack/cinder-db-sync-p84n5" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.493699 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.501725 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c19555bb-f08e-4ff7-a6d3-26615858d3f3-db-sync-config-data\") pod \"cinder-db-sync-p84n5\" (UID: \"c19555bb-f08e-4ff7-a6d3-26615858d3f3\") " pod="openstack/cinder-db-sync-p84n5" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.502960 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c19555bb-f08e-4ff7-a6d3-26615858d3f3-combined-ca-bundle\") pod \"cinder-db-sync-p84n5\" (UID: \"c19555bb-f08e-4ff7-a6d3-26615858d3f3\") " pod="openstack/cinder-db-sync-p84n5" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.518769 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-6f8c45789f-qtvlr"] Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.523404 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stlb6\" (UniqueName: \"kubernetes.io/projected/c19555bb-f08e-4ff7-a6d3-26615858d3f3-kube-api-access-stlb6\") pod \"cinder-db-sync-p84n5\" (UID: \"c19555bb-f08e-4ff7-a6d3-26615858d3f3\") " pod="openstack/cinder-db-sync-p84n5" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.547879 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hx2vn\" (UniqueName: \"kubernetes.io/projected/0ccb69ed-7f2d-4341-a474-90ac1ed5a0fd-kube-api-access-hx2vn\") pod \"neutron-db-sync-qz8zq\" (UID: \"0ccb69ed-7f2d-4341-a474-90ac1ed5a0fd\") " pod="openstack/neutron-db-sync-qz8zq" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.555032 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-4c48n"] Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.562414 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9642fa4-a57d-443c-be01-f46706cc0368-combined-ca-bundle\") pod \"barbican-db-sync-4c48n\" (UID: \"e9642fa4-a57d-443c-be01-f46706cc0368\") " pod="openstack/barbican-db-sync-4c48n" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.562470 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e9642fa4-a57d-443c-be01-f46706cc0368-db-sync-config-data\") pod \"barbican-db-sync-4c48n\" (UID: \"e9642fa4-a57d-443c-be01-f46706cc0368\") " pod="openstack/barbican-db-sync-4c48n" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.562539 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvbrd\" (UniqueName: 
\"kubernetes.io/projected/e9642fa4-a57d-443c-be01-f46706cc0368-kube-api-access-kvbrd\") pod \"barbican-db-sync-4c48n\" (UID: \"e9642fa4-a57d-443c-be01-f46706cc0368\") " pod="openstack/barbican-db-sync-4c48n" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.595205 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-cjz5v" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.608988 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-p84n5" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.619954 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-29jjl"] Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.622264 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-29jjl" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.631235 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-29jjl"] Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.640463 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-xbm2l"] Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.642875 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-xbm2l" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.658163 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.658226 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.658392 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-xbm2l"] Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.658179 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-4fsqs" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.665722 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9642fa4-a57d-443c-be01-f46706cc0368-combined-ca-bundle\") pod \"barbican-db-sync-4c48n\" (UID: \"e9642fa4-a57d-443c-be01-f46706cc0368\") " pod="openstack/barbican-db-sync-4c48n" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.665788 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e9642fa4-a57d-443c-be01-f46706cc0368-db-sync-config-data\") pod \"barbican-db-sync-4c48n\" (UID: \"e9642fa4-a57d-443c-be01-f46706cc0368\") " pod="openstack/barbican-db-sync-4c48n" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.665896 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvbrd\" (UniqueName: \"kubernetes.io/projected/e9642fa4-a57d-443c-be01-f46706cc0368-kube-api-access-kvbrd\") pod \"barbican-db-sync-4c48n\" (UID: \"e9642fa4-a57d-443c-be01-f46706cc0368\") " pod="openstack/barbican-db-sync-4c48n" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.680595 4754 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e9642fa4-a57d-443c-be01-f46706cc0368-db-sync-config-data\") pod \"barbican-db-sync-4c48n\" (UID: \"e9642fa4-a57d-443c-be01-f46706cc0368\") " pod="openstack/barbican-db-sync-4c48n" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.713095 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-j8qkw" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.723923 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvbrd\" (UniqueName: \"kubernetes.io/projected/e9642fa4-a57d-443c-be01-f46706cc0368-kube-api-access-kvbrd\") pod \"barbican-db-sync-4c48n\" (UID: \"e9642fa4-a57d-443c-be01-f46706cc0368\") " pod="openstack/barbican-db-sync-4c48n" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.734077 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9642fa4-a57d-443c-be01-f46706cc0368-combined-ca-bundle\") pod \"barbican-db-sync-4c48n\" (UID: \"e9642fa4-a57d-443c-be01-f46706cc0368\") " pod="openstack/barbican-db-sync-4c48n" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.786750 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4b1184d-230f-42bf-b872-6e09e2b876b1-config\") pod \"dnsmasq-dns-fcfdd6f9f-29jjl\" (UID: \"e4b1184d-230f-42bf-b872-6e09e2b876b1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-29jjl" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.786910 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l26jc\" (UniqueName: \"kubernetes.io/projected/e4b1184d-230f-42bf-b872-6e09e2b876b1-kube-api-access-l26jc\") pod \"dnsmasq-dns-fcfdd6f9f-29jjl\" (UID: \"e4b1184d-230f-42bf-b872-6e09e2b876b1\") " 
pod="openstack/dnsmasq-dns-fcfdd6f9f-29jjl" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.786960 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e4b1184d-230f-42bf-b872-6e09e2b876b1-ovsdbserver-nb\") pod \"dnsmasq-dns-fcfdd6f9f-29jjl\" (UID: \"e4b1184d-230f-42bf-b872-6e09e2b876b1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-29jjl" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.787088 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e4b1184d-230f-42bf-b872-6e09e2b876b1-ovsdbserver-sb\") pod \"dnsmasq-dns-fcfdd6f9f-29jjl\" (UID: \"e4b1184d-230f-42bf-b872-6e09e2b876b1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-29jjl" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.787583 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e4b1184d-230f-42bf-b872-6e09e2b876b1-dns-svc\") pod \"dnsmasq-dns-fcfdd6f9f-29jjl\" (UID: \"e4b1184d-230f-42bf-b872-6e09e2b876b1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-29jjl" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.787610 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e4b1184d-230f-42bf-b872-6e09e2b876b1-dns-swift-storage-0\") pod \"dnsmasq-dns-fcfdd6f9f-29jjl\" (UID: \"e4b1184d-230f-42bf-b872-6e09e2b876b1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-29jjl" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.825193 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-j8qkw" event={"ID":"08106021-9f77-4a54-8ec2-de2bfe4db63c","Type":"ContainerDied","Data":"ee7d7a09109fb0078469e99f107e96f463af960c2f77d5d8d12f03d30094e606"} Jan 05 20:27:56 crc 
kubenswrapper[4754]: I0105 20:27:56.825242 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee7d7a09109fb0078469e99f107e96f463af960c2f77d5d8d12f03d30094e606" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.825374 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-j8qkw" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.848766 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-qz8zq" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.882801 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:27:56 crc kubenswrapper[4754]: E0105 20:27:56.888035 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08106021-9f77-4a54-8ec2-de2bfe4db63c" containerName="glance-db-sync" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.888061 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="08106021-9f77-4a54-8ec2-de2bfe4db63c" containerName="glance-db-sync" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.888438 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="08106021-9f77-4a54-8ec2-de2bfe4db63c" containerName="glance-db-sync" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.896670 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/08106021-9f77-4a54-8ec2-de2bfe4db63c-db-sync-config-data\") pod \"08106021-9f77-4a54-8ec2-de2bfe4db63c\" (UID: \"08106021-9f77-4a54-8ec2-de2bfe4db63c\") " Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.896716 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08106021-9f77-4a54-8ec2-de2bfe4db63c-config-data\") pod \"08106021-9f77-4a54-8ec2-de2bfe4db63c\" (UID: 
\"08106021-9f77-4a54-8ec2-de2bfe4db63c\") " Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.896840 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08106021-9f77-4a54-8ec2-de2bfe4db63c-combined-ca-bundle\") pod \"08106021-9f77-4a54-8ec2-de2bfe4db63c\" (UID: \"08106021-9f77-4a54-8ec2-de2bfe4db63c\") " Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.896879 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlcsz\" (UniqueName: \"kubernetes.io/projected/08106021-9f77-4a54-8ec2-de2bfe4db63c-kube-api-access-qlcsz\") pod \"08106021-9f77-4a54-8ec2-de2bfe4db63c\" (UID: \"08106021-9f77-4a54-8ec2-de2bfe4db63c\") " Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.900162 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.901205 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32c0a4c6-d974-402e-bdf8-13c8b2e91b3f-scripts\") pod \"placement-db-sync-xbm2l\" (UID: \"32c0a4c6-d974-402e-bdf8-13c8b2e91b3f\") " pod="openstack/placement-db-sync-xbm2l" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.901283 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4b1184d-230f-42bf-b872-6e09e2b876b1-config\") pod \"dnsmasq-dns-fcfdd6f9f-29jjl\" (UID: \"e4b1184d-230f-42bf-b872-6e09e2b876b1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-29jjl" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.901349 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32c0a4c6-d974-402e-bdf8-13c8b2e91b3f-combined-ca-bundle\") pod 
\"placement-db-sync-xbm2l\" (UID: \"32c0a4c6-d974-402e-bdf8-13c8b2e91b3f\") " pod="openstack/placement-db-sync-xbm2l" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.901423 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l26jc\" (UniqueName: \"kubernetes.io/projected/e4b1184d-230f-42bf-b872-6e09e2b876b1-kube-api-access-l26jc\") pod \"dnsmasq-dns-fcfdd6f9f-29jjl\" (UID: \"e4b1184d-230f-42bf-b872-6e09e2b876b1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-29jjl" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.901558 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e4b1184d-230f-42bf-b872-6e09e2b876b1-ovsdbserver-nb\") pod \"dnsmasq-dns-fcfdd6f9f-29jjl\" (UID: \"e4b1184d-230f-42bf-b872-6e09e2b876b1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-29jjl" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.904015 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e4b1184d-230f-42bf-b872-6e09e2b876b1-ovsdbserver-sb\") pod \"dnsmasq-dns-fcfdd6f9f-29jjl\" (UID: \"e4b1184d-230f-42bf-b872-6e09e2b876b1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-29jjl" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.904120 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32c0a4c6-d974-402e-bdf8-13c8b2e91b3f-logs\") pod \"placement-db-sync-xbm2l\" (UID: \"32c0a4c6-d974-402e-bdf8-13c8b2e91b3f\") " pod="openstack/placement-db-sync-xbm2l" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.904175 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clh2r\" (UniqueName: \"kubernetes.io/projected/32c0a4c6-d974-402e-bdf8-13c8b2e91b3f-kube-api-access-clh2r\") pod \"placement-db-sync-xbm2l\" (UID: 
\"32c0a4c6-d974-402e-bdf8-13c8b2e91b3f\") " pod="openstack/placement-db-sync-xbm2l" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.904219 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32c0a4c6-d974-402e-bdf8-13c8b2e91b3f-config-data\") pod \"placement-db-sync-xbm2l\" (UID: \"32c0a4c6-d974-402e-bdf8-13c8b2e91b3f\") " pod="openstack/placement-db-sync-xbm2l" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.904447 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e4b1184d-230f-42bf-b872-6e09e2b876b1-dns-svc\") pod \"dnsmasq-dns-fcfdd6f9f-29jjl\" (UID: \"e4b1184d-230f-42bf-b872-6e09e2b876b1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-29jjl" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.904488 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e4b1184d-230f-42bf-b872-6e09e2b876b1-dns-swift-storage-0\") pod \"dnsmasq-dns-fcfdd6f9f-29jjl\" (UID: \"e4b1184d-230f-42bf-b872-6e09e2b876b1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-29jjl" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.905511 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e4b1184d-230f-42bf-b872-6e09e2b876b1-dns-swift-storage-0\") pod \"dnsmasq-dns-fcfdd6f9f-29jjl\" (UID: \"e4b1184d-230f-42bf-b872-6e09e2b876b1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-29jjl" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.905510 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e4b1184d-230f-42bf-b872-6e09e2b876b1-ovsdbserver-nb\") pod \"dnsmasq-dns-fcfdd6f9f-29jjl\" (UID: \"e4b1184d-230f-42bf-b872-6e09e2b876b1\") " 
pod="openstack/dnsmasq-dns-fcfdd6f9f-29jjl" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.906338 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e4b1184d-230f-42bf-b872-6e09e2b876b1-ovsdbserver-sb\") pod \"dnsmasq-dns-fcfdd6f9f-29jjl\" (UID: \"e4b1184d-230f-42bf-b872-6e09e2b876b1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-29jjl" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.907228 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e4b1184d-230f-42bf-b872-6e09e2b876b1-dns-svc\") pod \"dnsmasq-dns-fcfdd6f9f-29jjl\" (UID: \"e4b1184d-230f-42bf-b872-6e09e2b876b1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-29jjl" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.915254 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.915560 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.916894 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4b1184d-230f-42bf-b872-6e09e2b876b1-config\") pod \"dnsmasq-dns-fcfdd6f9f-29jjl\" (UID: \"e4b1184d-230f-42bf-b872-6e09e2b876b1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-29jjl" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.925902 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08106021-9f77-4a54-8ec2-de2bfe4db63c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "08106021-9f77-4a54-8ec2-de2bfe4db63c" (UID: "08106021-9f77-4a54-8ec2-de2bfe4db63c"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.947331 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08106021-9f77-4a54-8ec2-de2bfe4db63c-kube-api-access-qlcsz" (OuterVolumeSpecName: "kube-api-access-qlcsz") pod "08106021-9f77-4a54-8ec2-de2bfe4db63c" (UID: "08106021-9f77-4a54-8ec2-de2bfe4db63c"). InnerVolumeSpecName "kube-api-access-qlcsz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.954350 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l26jc\" (UniqueName: \"kubernetes.io/projected/e4b1184d-230f-42bf-b872-6e09e2b876b1-kube-api-access-l26jc\") pod \"dnsmasq-dns-fcfdd6f9f-29jjl\" (UID: \"e4b1184d-230f-42bf-b872-6e09e2b876b1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-29jjl" Jan 05 20:27:56 crc kubenswrapper[4754]: I0105 20:27:56.974383 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:27:57 crc kubenswrapper[4754]: I0105 20:27:57.009857 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgt4t\" (UniqueName: \"kubernetes.io/projected/e4349961-956a-4a53-a1a5-11fcffcbd0f7-kube-api-access-hgt4t\") pod \"ceilometer-0\" (UID: \"e4349961-956a-4a53-a1a5-11fcffcbd0f7\") " pod="openstack/ceilometer-0" Jan 05 20:27:57 crc kubenswrapper[4754]: I0105 20:27:57.010225 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e4349961-956a-4a53-a1a5-11fcffcbd0f7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e4349961-956a-4a53-a1a5-11fcffcbd0f7\") " pod="openstack/ceilometer-0" Jan 05 20:27:57 crc kubenswrapper[4754]: I0105 20:27:57.010267 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/32c0a4c6-d974-402e-bdf8-13c8b2e91b3f-scripts\") pod \"placement-db-sync-xbm2l\" (UID: \"32c0a4c6-d974-402e-bdf8-13c8b2e91b3f\") " pod="openstack/placement-db-sync-xbm2l" Jan 05 20:27:57 crc kubenswrapper[4754]: I0105 20:27:57.010676 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32c0a4c6-d974-402e-bdf8-13c8b2e91b3f-combined-ca-bundle\") pod \"placement-db-sync-xbm2l\" (UID: \"32c0a4c6-d974-402e-bdf8-13c8b2e91b3f\") " pod="openstack/placement-db-sync-xbm2l" Jan 05 20:27:57 crc kubenswrapper[4754]: I0105 20:27:57.010721 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4349961-956a-4a53-a1a5-11fcffcbd0f7-run-httpd\") pod \"ceilometer-0\" (UID: \"e4349961-956a-4a53-a1a5-11fcffcbd0f7\") " pod="openstack/ceilometer-0" Jan 05 20:27:57 crc kubenswrapper[4754]: I0105 20:27:57.010855 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4349961-956a-4a53-a1a5-11fcffcbd0f7-log-httpd\") pod \"ceilometer-0\" (UID: \"e4349961-956a-4a53-a1a5-11fcffcbd0f7\") " pod="openstack/ceilometer-0" Jan 05 20:27:57 crc kubenswrapper[4754]: I0105 20:27:57.010894 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32c0a4c6-d974-402e-bdf8-13c8b2e91b3f-logs\") pod \"placement-db-sync-xbm2l\" (UID: \"32c0a4c6-d974-402e-bdf8-13c8b2e91b3f\") " pod="openstack/placement-db-sync-xbm2l" Jan 05 20:27:57 crc kubenswrapper[4754]: I0105 20:27:57.010963 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clh2r\" (UniqueName: \"kubernetes.io/projected/32c0a4c6-d974-402e-bdf8-13c8b2e91b3f-kube-api-access-clh2r\") pod \"placement-db-sync-xbm2l\" (UID: 
\"32c0a4c6-d974-402e-bdf8-13c8b2e91b3f\") " pod="openstack/placement-db-sync-xbm2l" Jan 05 20:27:57 crc kubenswrapper[4754]: I0105 20:27:57.010982 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4349961-956a-4a53-a1a5-11fcffcbd0f7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e4349961-956a-4a53-a1a5-11fcffcbd0f7\") " pod="openstack/ceilometer-0" Jan 05 20:27:57 crc kubenswrapper[4754]: I0105 20:27:57.011006 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32c0a4c6-d974-402e-bdf8-13c8b2e91b3f-config-data\") pod \"placement-db-sync-xbm2l\" (UID: \"32c0a4c6-d974-402e-bdf8-13c8b2e91b3f\") " pod="openstack/placement-db-sync-xbm2l" Jan 05 20:27:57 crc kubenswrapper[4754]: I0105 20:27:57.011032 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4349961-956a-4a53-a1a5-11fcffcbd0f7-scripts\") pod \"ceilometer-0\" (UID: \"e4349961-956a-4a53-a1a5-11fcffcbd0f7\") " pod="openstack/ceilometer-0" Jan 05 20:27:57 crc kubenswrapper[4754]: I0105 20:27:57.011052 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4349961-956a-4a53-a1a5-11fcffcbd0f7-config-data\") pod \"ceilometer-0\" (UID: \"e4349961-956a-4a53-a1a5-11fcffcbd0f7\") " pod="openstack/ceilometer-0" Jan 05 20:27:57 crc kubenswrapper[4754]: I0105 20:27:57.011225 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlcsz\" (UniqueName: \"kubernetes.io/projected/08106021-9f77-4a54-8ec2-de2bfe4db63c-kube-api-access-qlcsz\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:57 crc kubenswrapper[4754]: I0105 20:27:57.011240 4754 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" 
(UniqueName: \"kubernetes.io/secret/08106021-9f77-4a54-8ec2-de2bfe4db63c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:57 crc kubenswrapper[4754]: I0105 20:27:57.012314 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32c0a4c6-d974-402e-bdf8-13c8b2e91b3f-logs\") pod \"placement-db-sync-xbm2l\" (UID: \"32c0a4c6-d974-402e-bdf8-13c8b2e91b3f\") " pod="openstack/placement-db-sync-xbm2l" Jan 05 20:27:57 crc kubenswrapper[4754]: I0105 20:27:57.017016 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32c0a4c6-d974-402e-bdf8-13c8b2e91b3f-config-data\") pod \"placement-db-sync-xbm2l\" (UID: \"32c0a4c6-d974-402e-bdf8-13c8b2e91b3f\") " pod="openstack/placement-db-sync-xbm2l" Jan 05 20:27:57 crc kubenswrapper[4754]: I0105 20:27:57.029424 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32c0a4c6-d974-402e-bdf8-13c8b2e91b3f-scripts\") pod \"placement-db-sync-xbm2l\" (UID: \"32c0a4c6-d974-402e-bdf8-13c8b2e91b3f\") " pod="openstack/placement-db-sync-xbm2l" Jan 05 20:27:57 crc kubenswrapper[4754]: I0105 20:27:57.035492 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-4c48n" Jan 05 20:27:57 crc kubenswrapper[4754]: I0105 20:27:57.035599 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32c0a4c6-d974-402e-bdf8-13c8b2e91b3f-combined-ca-bundle\") pod \"placement-db-sync-xbm2l\" (UID: \"32c0a4c6-d974-402e-bdf8-13c8b2e91b3f\") " pod="openstack/placement-db-sync-xbm2l" Jan 05 20:27:57 crc kubenswrapper[4754]: I0105 20:27:57.036572 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clh2r\" (UniqueName: \"kubernetes.io/projected/32c0a4c6-d974-402e-bdf8-13c8b2e91b3f-kube-api-access-clh2r\") pod \"placement-db-sync-xbm2l\" (UID: \"32c0a4c6-d974-402e-bdf8-13c8b2e91b3f\") " pod="openstack/placement-db-sync-xbm2l" Jan 05 20:27:57 crc kubenswrapper[4754]: I0105 20:27:57.062044 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08106021-9f77-4a54-8ec2-de2bfe4db63c-config-data" (OuterVolumeSpecName: "config-data") pod "08106021-9f77-4a54-8ec2-de2bfe4db63c" (UID: "08106021-9f77-4a54-8ec2-de2bfe4db63c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:27:57 crc kubenswrapper[4754]: I0105 20:27:57.077606 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08106021-9f77-4a54-8ec2-de2bfe4db63c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08106021-9f77-4a54-8ec2-de2bfe4db63c" (UID: "08106021-9f77-4a54-8ec2-de2bfe4db63c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:27:57 crc kubenswrapper[4754]: I0105 20:27:57.114044 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgt4t\" (UniqueName: \"kubernetes.io/projected/e4349961-956a-4a53-a1a5-11fcffcbd0f7-kube-api-access-hgt4t\") pod \"ceilometer-0\" (UID: \"e4349961-956a-4a53-a1a5-11fcffcbd0f7\") " pod="openstack/ceilometer-0" Jan 05 20:27:57 crc kubenswrapper[4754]: I0105 20:27:57.114082 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e4349961-956a-4a53-a1a5-11fcffcbd0f7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e4349961-956a-4a53-a1a5-11fcffcbd0f7\") " pod="openstack/ceilometer-0" Jan 05 20:27:57 crc kubenswrapper[4754]: I0105 20:27:57.114149 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4349961-956a-4a53-a1a5-11fcffcbd0f7-run-httpd\") pod \"ceilometer-0\" (UID: \"e4349961-956a-4a53-a1a5-11fcffcbd0f7\") " pod="openstack/ceilometer-0" Jan 05 20:27:57 crc kubenswrapper[4754]: I0105 20:27:57.114199 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4349961-956a-4a53-a1a5-11fcffcbd0f7-log-httpd\") pod \"ceilometer-0\" (UID: \"e4349961-956a-4a53-a1a5-11fcffcbd0f7\") " pod="openstack/ceilometer-0" Jan 05 20:27:57 crc kubenswrapper[4754]: I0105 20:27:57.114235 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4349961-956a-4a53-a1a5-11fcffcbd0f7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e4349961-956a-4a53-a1a5-11fcffcbd0f7\") " pod="openstack/ceilometer-0" Jan 05 20:27:57 crc kubenswrapper[4754]: I0105 20:27:57.114264 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/e4349961-956a-4a53-a1a5-11fcffcbd0f7-scripts\") pod \"ceilometer-0\" (UID: \"e4349961-956a-4a53-a1a5-11fcffcbd0f7\") " pod="openstack/ceilometer-0" Jan 05 20:27:57 crc kubenswrapper[4754]: I0105 20:27:57.114284 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4349961-956a-4a53-a1a5-11fcffcbd0f7-config-data\") pod \"ceilometer-0\" (UID: \"e4349961-956a-4a53-a1a5-11fcffcbd0f7\") " pod="openstack/ceilometer-0" Jan 05 20:27:57 crc kubenswrapper[4754]: I0105 20:27:57.114354 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08106021-9f77-4a54-8ec2-de2bfe4db63c-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:57 crc kubenswrapper[4754]: I0105 20:27:57.114365 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08106021-9f77-4a54-8ec2-de2bfe4db63c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:27:57 crc kubenswrapper[4754]: I0105 20:27:57.115160 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4349961-956a-4a53-a1a5-11fcffcbd0f7-log-httpd\") pod \"ceilometer-0\" (UID: \"e4349961-956a-4a53-a1a5-11fcffcbd0f7\") " pod="openstack/ceilometer-0" Jan 05 20:27:57 crc kubenswrapper[4754]: I0105 20:27:57.115206 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4349961-956a-4a53-a1a5-11fcffcbd0f7-run-httpd\") pod \"ceilometer-0\" (UID: \"e4349961-956a-4a53-a1a5-11fcffcbd0f7\") " pod="openstack/ceilometer-0" Jan 05 20:27:57 crc kubenswrapper[4754]: I0105 20:27:57.120270 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4349961-956a-4a53-a1a5-11fcffcbd0f7-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"e4349961-956a-4a53-a1a5-11fcffcbd0f7\") " pod="openstack/ceilometer-0" Jan 05 20:27:57 crc kubenswrapper[4754]: I0105 20:27:57.121485 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e4349961-956a-4a53-a1a5-11fcffcbd0f7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e4349961-956a-4a53-a1a5-11fcffcbd0f7\") " pod="openstack/ceilometer-0" Jan 05 20:27:57 crc kubenswrapper[4754]: I0105 20:27:57.123885 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4349961-956a-4a53-a1a5-11fcffcbd0f7-config-data\") pod \"ceilometer-0\" (UID: \"e4349961-956a-4a53-a1a5-11fcffcbd0f7\") " pod="openstack/ceilometer-0" Jan 05 20:27:57 crc kubenswrapper[4754]: I0105 20:27:57.135401 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgt4t\" (UniqueName: \"kubernetes.io/projected/e4349961-956a-4a53-a1a5-11fcffcbd0f7-kube-api-access-hgt4t\") pod \"ceilometer-0\" (UID: \"e4349961-956a-4a53-a1a5-11fcffcbd0f7\") " pod="openstack/ceilometer-0" Jan 05 20:27:57 crc kubenswrapper[4754]: I0105 20:27:57.144390 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4349961-956a-4a53-a1a5-11fcffcbd0f7-scripts\") pod \"ceilometer-0\" (UID: \"e4349961-956a-4a53-a1a5-11fcffcbd0f7\") " pod="openstack/ceilometer-0" Jan 05 20:27:57 crc kubenswrapper[4754]: I0105 20:27:57.183782 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-29jjl" Jan 05 20:27:57 crc kubenswrapper[4754]: I0105 20:27:57.200814 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-xbm2l" Jan 05 20:27:57 crc kubenswrapper[4754]: I0105 20:27:57.249011 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 05 20:27:57 crc kubenswrapper[4754]: I0105 20:27:57.638244 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-8t74b" podUID="4983104d-604b-476c-94bf-b92cf0a887b4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.146:5353: i/o timeout" Jan 05 20:27:57 crc kubenswrapper[4754]: I0105 20:27:57.645185 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-qtvlr"] Jan 05 20:27:57 crc kubenswrapper[4754]: W0105 20:27:57.648854 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfae22c95_306e_48f4_905e_d701b30f6768.slice/crio-a099afd553d5ce3360483f035f173732e89ae89e3b715f6a03b37ba2d2f65d92 WatchSource:0}: Error finding container a099afd553d5ce3360483f035f173732e89ae89e3b715f6a03b37ba2d2f65d92: Status 404 returned error can't find the container with id a099afd553d5ce3360483f035f173732e89ae89e3b715f6a03b37ba2d2f65d92 Jan 05 20:27:57 crc kubenswrapper[4754]: I0105 20:27:57.653157 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-vpqrq"] Jan 05 20:27:57 crc kubenswrapper[4754]: W0105 20:27:57.657716 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3521f408_9c1b_440b_b7c1_fdc7058f9eb3.slice/crio-55ec97cef70a75619b4f69ebccd1a66cf38981d41d5d7119e5bf09cd4f27f7d1 WatchSource:0}: Error finding container 55ec97cef70a75619b4f69ebccd1a66cf38981d41d5d7119e5bf09cd4f27f7d1: Status 404 returned error can't find the container with id 55ec97cef70a75619b4f69ebccd1a66cf38981d41d5d7119e5bf09cd4f27f7d1 Jan 05 20:27:57 crc kubenswrapper[4754]: I0105 20:27:57.860440 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-vpqrq" 
event={"ID":"3521f408-9c1b-440b-b7c1-fdc7058f9eb3","Type":"ContainerStarted","Data":"55ec97cef70a75619b4f69ebccd1a66cf38981d41d5d7119e5bf09cd4f27f7d1"} Jan 05 20:27:57 crc kubenswrapper[4754]: I0105 20:27:57.868441 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8c45789f-qtvlr" event={"ID":"fae22c95-306e-48f4-905e-d701b30f6768","Type":"ContainerStarted","Data":"a099afd553d5ce3360483f035f173732e89ae89e3b715f6a03b37ba2d2f65d92"} Jan 05 20:27:57 crc kubenswrapper[4754]: I0105 20:27:57.959971 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-p84n5"] Jan 05 20:27:57 crc kubenswrapper[4754]: I0105 20:27:57.992400 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-cjz5v"] Jan 05 20:27:58 crc kubenswrapper[4754]: I0105 20:27:58.032966 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-qz8zq"] Jan 05 20:27:58 crc kubenswrapper[4754]: I0105 20:27:58.120457 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-4c48n"] Jan 05 20:27:58 crc kubenswrapper[4754]: I0105 20:27:58.158611 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-29jjl"] Jan 05 20:27:58 crc kubenswrapper[4754]: I0105 20:27:58.241030 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:27:58 crc kubenswrapper[4754]: I0105 20:27:58.329402 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-xbm2l"] Jan 05 20:27:58 crc kubenswrapper[4754]: I0105 20:27:58.403222 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-29jjl"] Jan 05 20:27:58 crc kubenswrapper[4754]: I0105 20:27:58.445315 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-vhwxz"] Jan 05 20:27:58 crc kubenswrapper[4754]: I0105 20:27:58.447098 4754 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-vhwxz" Jan 05 20:27:58 crc kubenswrapper[4754]: I0105 20:27:58.505357 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-vhwxz"] Jan 05 20:27:58 crc kubenswrapper[4754]: I0105 20:27:58.583803 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-vhwxz\" (UID: \"6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8\") " pod="openstack/dnsmasq-dns-57c957c4ff-vhwxz" Jan 05 20:27:58 crc kubenswrapper[4754]: I0105 20:27:58.583880 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8-config\") pod \"dnsmasq-dns-57c957c4ff-vhwxz\" (UID: \"6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8\") " pod="openstack/dnsmasq-dns-57c957c4ff-vhwxz" Jan 05 20:27:58 crc kubenswrapper[4754]: I0105 20:27:58.583915 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn8gr\" (UniqueName: \"kubernetes.io/projected/6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8-kube-api-access-dn8gr\") pod \"dnsmasq-dns-57c957c4ff-vhwxz\" (UID: \"6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8\") " pod="openstack/dnsmasq-dns-57c957c4ff-vhwxz" Jan 05 20:27:58 crc kubenswrapper[4754]: I0105 20:27:58.583950 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-vhwxz\" (UID: \"6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8\") " pod="openstack/dnsmasq-dns-57c957c4ff-vhwxz" Jan 05 20:27:58 crc kubenswrapper[4754]: I0105 20:27:58.584007 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-vhwxz\" (UID: \"6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8\") " pod="openstack/dnsmasq-dns-57c957c4ff-vhwxz" Jan 05 20:27:58 crc kubenswrapper[4754]: I0105 20:27:58.584044 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-vhwxz\" (UID: \"6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8\") " pod="openstack/dnsmasq-dns-57c957c4ff-vhwxz" Jan 05 20:27:58 crc kubenswrapper[4754]: I0105 20:27:58.685413 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8-config\") pod \"dnsmasq-dns-57c957c4ff-vhwxz\" (UID: \"6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8\") " pod="openstack/dnsmasq-dns-57c957c4ff-vhwxz" Jan 05 20:27:58 crc kubenswrapper[4754]: I0105 20:27:58.685478 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dn8gr\" (UniqueName: \"kubernetes.io/projected/6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8-kube-api-access-dn8gr\") pod \"dnsmasq-dns-57c957c4ff-vhwxz\" (UID: \"6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8\") " pod="openstack/dnsmasq-dns-57c957c4ff-vhwxz" Jan 05 20:27:58 crc kubenswrapper[4754]: I0105 20:27:58.685511 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-vhwxz\" (UID: \"6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8\") " pod="openstack/dnsmasq-dns-57c957c4ff-vhwxz" Jan 05 20:27:58 crc kubenswrapper[4754]: I0105 20:27:58.685567 4754 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-vhwxz\" (UID: \"6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8\") " pod="openstack/dnsmasq-dns-57c957c4ff-vhwxz" Jan 05 20:27:58 crc kubenswrapper[4754]: I0105 20:27:58.685608 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-vhwxz\" (UID: \"6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8\") " pod="openstack/dnsmasq-dns-57c957c4ff-vhwxz" Jan 05 20:27:58 crc kubenswrapper[4754]: I0105 20:27:58.685665 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-vhwxz\" (UID: \"6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8\") " pod="openstack/dnsmasq-dns-57c957c4ff-vhwxz" Jan 05 20:27:58 crc kubenswrapper[4754]: I0105 20:27:58.686636 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-vhwxz\" (UID: \"6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8\") " pod="openstack/dnsmasq-dns-57c957c4ff-vhwxz" Jan 05 20:27:58 crc kubenswrapper[4754]: I0105 20:27:58.688572 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8-config\") pod \"dnsmasq-dns-57c957c4ff-vhwxz\" (UID: \"6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8\") " pod="openstack/dnsmasq-dns-57c957c4ff-vhwxz" Jan 05 20:27:58 crc kubenswrapper[4754]: I0105 20:27:58.691122 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-vhwxz\" (UID: \"6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8\") " pod="openstack/dnsmasq-dns-57c957c4ff-vhwxz" Jan 05 20:27:58 crc kubenswrapper[4754]: I0105 20:27:58.691263 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-vhwxz\" (UID: \"6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8\") " pod="openstack/dnsmasq-dns-57c957c4ff-vhwxz" Jan 05 20:27:58 crc kubenswrapper[4754]: I0105 20:27:58.694548 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 20:27:58 crc kubenswrapper[4754]: I0105 20:27:58.695715 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-vhwxz\" (UID: \"6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8\") " pod="openstack/dnsmasq-dns-57c957c4ff-vhwxz" Jan 05 20:27:58 crc kubenswrapper[4754]: I0105 20:27:58.697616 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 05 20:27:58 crc kubenswrapper[4754]: I0105 20:27:58.702388 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 05 20:27:58 crc kubenswrapper[4754]: I0105 20:27:58.702644 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 05 20:27:58 crc kubenswrapper[4754]: I0105 20:27:58.702757 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-vjz5d" Jan 05 20:27:58 crc kubenswrapper[4754]: I0105 20:27:58.729923 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 20:27:58 crc kubenswrapper[4754]: I0105 20:27:58.771756 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dn8gr\" (UniqueName: \"kubernetes.io/projected/6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8-kube-api-access-dn8gr\") pod \"dnsmasq-dns-57c957c4ff-vhwxz\" (UID: \"6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8\") " pod="openstack/dnsmasq-dns-57c957c4ff-vhwxz" Jan 05 20:27:58 crc kubenswrapper[4754]: I0105 20:27:58.919004 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5e52a243-20bd-4eec-aa78-a75239804279\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e52a243-20bd-4eec-aa78-a75239804279\") pod \"glance-default-external-api-0\" (UID: \"e8904d45-6441-4443-8ddf-a645164f88d1\") " pod="openstack/glance-default-external-api-0" Jan 05 20:27:58 crc kubenswrapper[4754]: I0105 20:27:58.919175 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8904d45-6441-4443-8ddf-a645164f88d1-logs\") pod \"glance-default-external-api-0\" (UID: \"e8904d45-6441-4443-8ddf-a645164f88d1\") " pod="openstack/glance-default-external-api-0" Jan 05 
20:27:59 crc kubenswrapper[4754]: I0105 20:27:58.981200 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e8904d45-6441-4443-8ddf-a645164f88d1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e8904d45-6441-4443-8ddf-a645164f88d1\") " pod="openstack/glance-default-external-api-0" Jan 05 20:27:59 crc kubenswrapper[4754]: I0105 20:27:59.005746 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5cvm\" (UniqueName: \"kubernetes.io/projected/e8904d45-6441-4443-8ddf-a645164f88d1-kube-api-access-q5cvm\") pod \"glance-default-external-api-0\" (UID: \"e8904d45-6441-4443-8ddf-a645164f88d1\") " pod="openstack/glance-default-external-api-0" Jan 05 20:27:59 crc kubenswrapper[4754]: I0105 20:27:59.005818 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8904d45-6441-4443-8ddf-a645164f88d1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e8904d45-6441-4443-8ddf-a645164f88d1\") " pod="openstack/glance-default-external-api-0" Jan 05 20:27:59 crc kubenswrapper[4754]: I0105 20:27:59.005926 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8904d45-6441-4443-8ddf-a645164f88d1-config-data\") pod \"glance-default-external-api-0\" (UID: \"e8904d45-6441-4443-8ddf-a645164f88d1\") " pod="openstack/glance-default-external-api-0" Jan 05 20:27:59 crc kubenswrapper[4754]: I0105 20:27:59.006002 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8904d45-6441-4443-8ddf-a645164f88d1-scripts\") pod \"glance-default-external-api-0\" (UID: \"e8904d45-6441-4443-8ddf-a645164f88d1\") " 
pod="openstack/glance-default-external-api-0" Jan 05 20:27:59 crc kubenswrapper[4754]: I0105 20:27:58.982914 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-vhwxz" Jan 05 20:27:59 crc kubenswrapper[4754]: I0105 20:27:59.083005 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qz8zq" event={"ID":"0ccb69ed-7f2d-4341-a474-90ac1ed5a0fd","Type":"ContainerStarted","Data":"1d096c5057573bca223f8022a7d7cb9e8c1e9b65b37c2fcdb7ab426016f60426"} Jan 05 20:27:59 crc kubenswrapper[4754]: I0105 20:27:59.111145 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e8904d45-6441-4443-8ddf-a645164f88d1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e8904d45-6441-4443-8ddf-a645164f88d1\") " pod="openstack/glance-default-external-api-0" Jan 05 20:27:59 crc kubenswrapper[4754]: I0105 20:27:59.111235 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5cvm\" (UniqueName: \"kubernetes.io/projected/e8904d45-6441-4443-8ddf-a645164f88d1-kube-api-access-q5cvm\") pod \"glance-default-external-api-0\" (UID: \"e8904d45-6441-4443-8ddf-a645164f88d1\") " pod="openstack/glance-default-external-api-0" Jan 05 20:27:59 crc kubenswrapper[4754]: I0105 20:27:59.111270 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8904d45-6441-4443-8ddf-a645164f88d1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e8904d45-6441-4443-8ddf-a645164f88d1\") " pod="openstack/glance-default-external-api-0" Jan 05 20:27:59 crc kubenswrapper[4754]: I0105 20:27:59.111330 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8904d45-6441-4443-8ddf-a645164f88d1-config-data\") pod 
\"glance-default-external-api-0\" (UID: \"e8904d45-6441-4443-8ddf-a645164f88d1\") " pod="openstack/glance-default-external-api-0" Jan 05 20:27:59 crc kubenswrapper[4754]: I0105 20:27:59.111362 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8904d45-6441-4443-8ddf-a645164f88d1-scripts\") pod \"glance-default-external-api-0\" (UID: \"e8904d45-6441-4443-8ddf-a645164f88d1\") " pod="openstack/glance-default-external-api-0" Jan 05 20:27:59 crc kubenswrapper[4754]: I0105 20:27:59.111452 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5e52a243-20bd-4eec-aa78-a75239804279\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e52a243-20bd-4eec-aa78-a75239804279\") pod \"glance-default-external-api-0\" (UID: \"e8904d45-6441-4443-8ddf-a645164f88d1\") " pod="openstack/glance-default-external-api-0" Jan 05 20:27:59 crc kubenswrapper[4754]: I0105 20:27:59.111499 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8904d45-6441-4443-8ddf-a645164f88d1-logs\") pod \"glance-default-external-api-0\" (UID: \"e8904d45-6441-4443-8ddf-a645164f88d1\") " pod="openstack/glance-default-external-api-0" Jan 05 20:27:59 crc kubenswrapper[4754]: I0105 20:27:59.112096 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8904d45-6441-4443-8ddf-a645164f88d1-logs\") pod \"glance-default-external-api-0\" (UID: \"e8904d45-6441-4443-8ddf-a645164f88d1\") " pod="openstack/glance-default-external-api-0" Jan 05 20:27:59 crc kubenswrapper[4754]: I0105 20:27:59.116376 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e8904d45-6441-4443-8ddf-a645164f88d1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e8904d45-6441-4443-8ddf-a645164f88d1\") " 
pod="openstack/glance-default-external-api-0" Jan 05 20:27:59 crc kubenswrapper[4754]: I0105 20:27:59.128822 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4349961-956a-4a53-a1a5-11fcffcbd0f7","Type":"ContainerStarted","Data":"f163ff263ca383ebdc86f6df5786374aa3cb58f7ea900bd6537cbb547ab7a088"} Jan 05 20:27:59 crc kubenswrapper[4754]: I0105 20:27:59.133891 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-29jjl" event={"ID":"e4b1184d-230f-42bf-b872-6e09e2b876b1","Type":"ContainerStarted","Data":"86f469b21dfef74e0689a3f3de28ff7879005fc7a7e950b21cc263e240ba88a5"} Jan 05 20:27:59 crc kubenswrapper[4754]: I0105 20:27:59.157600 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cjz5v" event={"ID":"bbb4b9d3-4443-4f5d-bcde-5bcafeed2218","Type":"ContainerStarted","Data":"fa8e35eb6df9a22619766d2e11466c678175dd313928d5916109b2dcaf0e0d5c"} Jan 05 20:27:59 crc kubenswrapper[4754]: I0105 20:27:59.175145 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8904d45-6441-4443-8ddf-a645164f88d1-config-data\") pod \"glance-default-external-api-0\" (UID: \"e8904d45-6441-4443-8ddf-a645164f88d1\") " pod="openstack/glance-default-external-api-0" Jan 05 20:27:59 crc kubenswrapper[4754]: I0105 20:27:59.176472 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8904d45-6441-4443-8ddf-a645164f88d1-scripts\") pod \"glance-default-external-api-0\" (UID: \"e8904d45-6441-4443-8ddf-a645164f88d1\") " pod="openstack/glance-default-external-api-0" Jan 05 20:27:59 crc kubenswrapper[4754]: I0105 20:27:59.177220 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8904d45-6441-4443-8ddf-a645164f88d1-combined-ca-bundle\") pod 
\"glance-default-external-api-0\" (UID: \"e8904d45-6441-4443-8ddf-a645164f88d1\") " pod="openstack/glance-default-external-api-0" Jan 05 20:27:59 crc kubenswrapper[4754]: I0105 20:27:59.187305 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4c48n" event={"ID":"e9642fa4-a57d-443c-be01-f46706cc0368","Type":"ContainerStarted","Data":"4de81bd7a0ef707269898f1085ee5f1d822f0662a21a087bdee1f6051f7b498e"} Jan 05 20:27:59 crc kubenswrapper[4754]: I0105 20:27:59.232553 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-xbm2l" event={"ID":"32c0a4c6-d974-402e-bdf8-13c8b2e91b3f","Type":"ContainerStarted","Data":"0e92d69428cb28671310f41c3862f7314984887da13bea63c2ac042d692d00f5"} Jan 05 20:27:59 crc kubenswrapper[4754]: I0105 20:27:59.245914 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5cvm\" (UniqueName: \"kubernetes.io/projected/e8904d45-6441-4443-8ddf-a645164f88d1-kube-api-access-q5cvm\") pod \"glance-default-external-api-0\" (UID: \"e8904d45-6441-4443-8ddf-a645164f88d1\") " pod="openstack/glance-default-external-api-0" Jan 05 20:27:59 crc kubenswrapper[4754]: I0105 20:27:59.255758 4754 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 05 20:27:59 crc kubenswrapper[4754]: I0105 20:27:59.255803 4754 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5e52a243-20bd-4eec-aa78-a75239804279\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e52a243-20bd-4eec-aa78-a75239804279\") pod \"glance-default-external-api-0\" (UID: \"e8904d45-6441-4443-8ddf-a645164f88d1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/64c68df80c09ca2a8e0af8b9c221d9793bb148adba68887bcbeec5db986c7de6/globalmount\"" pod="openstack/glance-default-external-api-0" Jan 05 20:27:59 crc kubenswrapper[4754]: I0105 20:27:59.267400 4754 generic.go:334] "Generic (PLEG): container finished" podID="fae22c95-306e-48f4-905e-d701b30f6768" containerID="5ce8b46ecca788d3ab73433d348b09953816b3af4a032a7e3e9dc0f8e126f76a" exitCode=0 Jan 05 20:27:59 crc kubenswrapper[4754]: I0105 20:27:59.267463 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8c45789f-qtvlr" event={"ID":"fae22c95-306e-48f4-905e-d701b30f6768","Type":"ContainerDied","Data":"5ce8b46ecca788d3ab73433d348b09953816b3af4a032a7e3e9dc0f8e126f76a"} Jan 05 20:27:59 crc kubenswrapper[4754]: I0105 20:27:59.274544 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-p84n5" event={"ID":"c19555bb-f08e-4ff7-a6d3-26615858d3f3","Type":"ContainerStarted","Data":"a960a1483a9f14c3dfcbf39837324c9d685b79a098ad9b37d907f916c78902fe"} Jan 05 20:27:59 crc kubenswrapper[4754]: I0105 20:27:59.380418 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:27:59 crc kubenswrapper[4754]: I0105 20:27:59.415574 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-cjz5v" podStartSLOduration=4.415548856 podStartE2EDuration="4.415548856s" podCreationTimestamp="2026-01-05 20:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:27:59.232929933 +0000 UTC m=+1365.942113807" watchObservedRunningTime="2026-01-05 20:27:59.415548856 +0000 UTC m=+1366.124732730" Jan 05 20:27:59 crc kubenswrapper[4754]: I0105 20:27:59.499875 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5e52a243-20bd-4eec-aa78-a75239804279\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e52a243-20bd-4eec-aa78-a75239804279\") pod \"glance-default-external-api-0\" (UID: \"e8904d45-6441-4443-8ddf-a645164f88d1\") " pod="openstack/glance-default-external-api-0" Jan 05 20:27:59 crc kubenswrapper[4754]: I0105 20:27:59.704210 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 05 20:27:59 crc kubenswrapper[4754]: I0105 20:27:59.881765 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 20:27:59 crc kubenswrapper[4754]: I0105 20:27:59.894409 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 05 20:27:59 crc kubenswrapper[4754]: I0105 20:27:59.900424 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 05 20:27:59 crc kubenswrapper[4754]: I0105 20:27:59.906853 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 20:28:00 crc kubenswrapper[4754]: I0105 20:28:00.005923 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14ebd774-3303-4fe6-8c33-e07d3debb2ec-scripts\") pod \"glance-default-internal-api-0\" (UID: \"14ebd774-3303-4fe6-8c33-e07d3debb2ec\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:28:00 crc kubenswrapper[4754]: I0105 20:28:00.006007 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-084c4888-b72a-436d-98f1-8bfa7cd3c357\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-084c4888-b72a-436d-98f1-8bfa7cd3c357\") pod \"glance-default-internal-api-0\" (UID: \"14ebd774-3303-4fe6-8c33-e07d3debb2ec\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:28:00 crc kubenswrapper[4754]: I0105 20:28:00.006038 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14ebd774-3303-4fe6-8c33-e07d3debb2ec-logs\") pod \"glance-default-internal-api-0\" (UID: \"14ebd774-3303-4fe6-8c33-e07d3debb2ec\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:28:00 crc kubenswrapper[4754]: I0105 20:28:00.006081 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnvzn\" (UniqueName: \"kubernetes.io/projected/14ebd774-3303-4fe6-8c33-e07d3debb2ec-kube-api-access-nnvzn\") pod \"glance-default-internal-api-0\" (UID: 
\"14ebd774-3303-4fe6-8c33-e07d3debb2ec\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:28:00 crc kubenswrapper[4754]: I0105 20:28:00.006124 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/14ebd774-3303-4fe6-8c33-e07d3debb2ec-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"14ebd774-3303-4fe6-8c33-e07d3debb2ec\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:28:00 crc kubenswrapper[4754]: I0105 20:28:00.006148 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14ebd774-3303-4fe6-8c33-e07d3debb2ec-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"14ebd774-3303-4fe6-8c33-e07d3debb2ec\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:28:00 crc kubenswrapper[4754]: I0105 20:28:00.006188 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14ebd774-3303-4fe6-8c33-e07d3debb2ec-config-data\") pod \"glance-default-internal-api-0\" (UID: \"14ebd774-3303-4fe6-8c33-e07d3debb2ec\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:28:00 crc kubenswrapper[4754]: I0105 20:28:00.044889 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-qtvlr" Jan 05 20:28:00 crc kubenswrapper[4754]: I0105 20:28:00.107608 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fae22c95-306e-48f4-905e-d701b30f6768-ovsdbserver-nb\") pod \"fae22c95-306e-48f4-905e-d701b30f6768\" (UID: \"fae22c95-306e-48f4-905e-d701b30f6768\") " Jan 05 20:28:00 crc kubenswrapper[4754]: I0105 20:28:00.107736 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptddj\" (UniqueName: \"kubernetes.io/projected/fae22c95-306e-48f4-905e-d701b30f6768-kube-api-access-ptddj\") pod \"fae22c95-306e-48f4-905e-d701b30f6768\" (UID: \"fae22c95-306e-48f4-905e-d701b30f6768\") " Jan 05 20:28:00 crc kubenswrapper[4754]: I0105 20:28:00.107878 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fae22c95-306e-48f4-905e-d701b30f6768-dns-swift-storage-0\") pod \"fae22c95-306e-48f4-905e-d701b30f6768\" (UID: \"fae22c95-306e-48f4-905e-d701b30f6768\") " Jan 05 20:28:00 crc kubenswrapper[4754]: I0105 20:28:00.107909 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fae22c95-306e-48f4-905e-d701b30f6768-dns-svc\") pod \"fae22c95-306e-48f4-905e-d701b30f6768\" (UID: \"fae22c95-306e-48f4-905e-d701b30f6768\") " Jan 05 20:28:00 crc kubenswrapper[4754]: I0105 20:28:00.107948 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fae22c95-306e-48f4-905e-d701b30f6768-config\") pod \"fae22c95-306e-48f4-905e-d701b30f6768\" (UID: \"fae22c95-306e-48f4-905e-d701b30f6768\") " Jan 05 20:28:00 crc kubenswrapper[4754]: I0105 20:28:00.108457 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/fae22c95-306e-48f4-905e-d701b30f6768-ovsdbserver-sb\") pod \"fae22c95-306e-48f4-905e-d701b30f6768\" (UID: \"fae22c95-306e-48f4-905e-d701b30f6768\") " Jan 05 20:28:00 crc kubenswrapper[4754]: I0105 20:28:00.108807 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14ebd774-3303-4fe6-8c33-e07d3debb2ec-scripts\") pod \"glance-default-internal-api-0\" (UID: \"14ebd774-3303-4fe6-8c33-e07d3debb2ec\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:28:00 crc kubenswrapper[4754]: I0105 20:28:00.108857 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-084c4888-b72a-436d-98f1-8bfa7cd3c357\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-084c4888-b72a-436d-98f1-8bfa7cd3c357\") pod \"glance-default-internal-api-0\" (UID: \"14ebd774-3303-4fe6-8c33-e07d3debb2ec\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:28:00 crc kubenswrapper[4754]: I0105 20:28:00.108878 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14ebd774-3303-4fe6-8c33-e07d3debb2ec-logs\") pod \"glance-default-internal-api-0\" (UID: \"14ebd774-3303-4fe6-8c33-e07d3debb2ec\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:28:00 crc kubenswrapper[4754]: I0105 20:28:00.108915 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnvzn\" (UniqueName: \"kubernetes.io/projected/14ebd774-3303-4fe6-8c33-e07d3debb2ec-kube-api-access-nnvzn\") pod \"glance-default-internal-api-0\" (UID: \"14ebd774-3303-4fe6-8c33-e07d3debb2ec\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:28:00 crc kubenswrapper[4754]: I0105 20:28:00.108954 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/14ebd774-3303-4fe6-8c33-e07d3debb2ec-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"14ebd774-3303-4fe6-8c33-e07d3debb2ec\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:28:00 crc kubenswrapper[4754]: I0105 20:28:00.108972 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14ebd774-3303-4fe6-8c33-e07d3debb2ec-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"14ebd774-3303-4fe6-8c33-e07d3debb2ec\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:28:00 crc kubenswrapper[4754]: I0105 20:28:00.109007 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14ebd774-3303-4fe6-8c33-e07d3debb2ec-config-data\") pod \"glance-default-internal-api-0\" (UID: \"14ebd774-3303-4fe6-8c33-e07d3debb2ec\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:28:00 crc kubenswrapper[4754]: I0105 20:28:00.109763 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14ebd774-3303-4fe6-8c33-e07d3debb2ec-logs\") pod \"glance-default-internal-api-0\" (UID: \"14ebd774-3303-4fe6-8c33-e07d3debb2ec\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:28:00 crc kubenswrapper[4754]: I0105 20:28:00.111227 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/14ebd774-3303-4fe6-8c33-e07d3debb2ec-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"14ebd774-3303-4fe6-8c33-e07d3debb2ec\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:28:00 crc kubenswrapper[4754]: I0105 20:28:00.116007 4754 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 05 20:28:00 crc kubenswrapper[4754]: I0105 20:28:00.116052 4754 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-084c4888-b72a-436d-98f1-8bfa7cd3c357\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-084c4888-b72a-436d-98f1-8bfa7cd3c357\") pod \"glance-default-internal-api-0\" (UID: \"14ebd774-3303-4fe6-8c33-e07d3debb2ec\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0601c73c2da0209a9fcf070a1dc03f94eea8e72ac7d4d13dd6d1bb747292aec9/globalmount\"" pod="openstack/glance-default-internal-api-0" Jan 05 20:28:00 crc kubenswrapper[4754]: I0105 20:28:00.130185 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14ebd774-3303-4fe6-8c33-e07d3debb2ec-scripts\") pod \"glance-default-internal-api-0\" (UID: \"14ebd774-3303-4fe6-8c33-e07d3debb2ec\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:28:00 crc kubenswrapper[4754]: I0105 20:28:00.133007 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14ebd774-3303-4fe6-8c33-e07d3debb2ec-config-data\") pod \"glance-default-internal-api-0\" (UID: \"14ebd774-3303-4fe6-8c33-e07d3debb2ec\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:28:00 crc kubenswrapper[4754]: I0105 20:28:00.136867 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fae22c95-306e-48f4-905e-d701b30f6768-kube-api-access-ptddj" (OuterVolumeSpecName: "kube-api-access-ptddj") pod "fae22c95-306e-48f4-905e-d701b30f6768" (UID: "fae22c95-306e-48f4-905e-d701b30f6768"). InnerVolumeSpecName "kube-api-access-ptddj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:28:00 crc kubenswrapper[4754]: I0105 20:28:00.137782 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnvzn\" (UniqueName: \"kubernetes.io/projected/14ebd774-3303-4fe6-8c33-e07d3debb2ec-kube-api-access-nnvzn\") pod \"glance-default-internal-api-0\" (UID: \"14ebd774-3303-4fe6-8c33-e07d3debb2ec\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:28:00 crc kubenswrapper[4754]: I0105 20:28:00.147126 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14ebd774-3303-4fe6-8c33-e07d3debb2ec-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"14ebd774-3303-4fe6-8c33-e07d3debb2ec\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:28:00 crc kubenswrapper[4754]: I0105 20:28:00.189443 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fae22c95-306e-48f4-905e-d701b30f6768-config" (OuterVolumeSpecName: "config") pod "fae22c95-306e-48f4-905e-d701b30f6768" (UID: "fae22c95-306e-48f4-905e-d701b30f6768"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:28:00 crc kubenswrapper[4754]: I0105 20:28:00.203183 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-084c4888-b72a-436d-98f1-8bfa7cd3c357\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-084c4888-b72a-436d-98f1-8bfa7cd3c357\") pod \"glance-default-internal-api-0\" (UID: \"14ebd774-3303-4fe6-8c33-e07d3debb2ec\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:28:00 crc kubenswrapper[4754]: I0105 20:28:00.204189 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fae22c95-306e-48f4-905e-d701b30f6768-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fae22c95-306e-48f4-905e-d701b30f6768" (UID: "fae22c95-306e-48f4-905e-d701b30f6768"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:28:00 crc kubenswrapper[4754]: I0105 20:28:00.209934 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fae22c95-306e-48f4-905e-d701b30f6768-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fae22c95-306e-48f4-905e-d701b30f6768" (UID: "fae22c95-306e-48f4-905e-d701b30f6768"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:28:00 crc kubenswrapper[4754]: I0105 20:28:00.210683 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptddj\" (UniqueName: \"kubernetes.io/projected/fae22c95-306e-48f4-905e-d701b30f6768-kube-api-access-ptddj\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:00 crc kubenswrapper[4754]: I0105 20:28:00.210705 4754 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fae22c95-306e-48f4-905e-d701b30f6768-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:00 crc kubenswrapper[4754]: I0105 20:28:00.210717 4754 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fae22c95-306e-48f4-905e-d701b30f6768-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:00 crc kubenswrapper[4754]: I0105 20:28:00.210730 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fae22c95-306e-48f4-905e-d701b30f6768-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:00 crc kubenswrapper[4754]: I0105 20:28:00.216704 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fae22c95-306e-48f4-905e-d701b30f6768-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fae22c95-306e-48f4-905e-d701b30f6768" (UID: "fae22c95-306e-48f4-905e-d701b30f6768"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:28:00 crc kubenswrapper[4754]: I0105 20:28:00.233820 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fae22c95-306e-48f4-905e-d701b30f6768-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fae22c95-306e-48f4-905e-d701b30f6768" (UID: "fae22c95-306e-48f4-905e-d701b30f6768"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:28:00 crc kubenswrapper[4754]: I0105 20:28:00.236572 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-vhwxz"] Jan 05 20:28:00 crc kubenswrapper[4754]: I0105 20:28:00.316112 4754 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fae22c95-306e-48f4-905e-d701b30f6768-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:00 crc kubenswrapper[4754]: I0105 20:28:00.316139 4754 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fae22c95-306e-48f4-905e-d701b30f6768-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:00 crc kubenswrapper[4754]: I0105 20:28:00.316937 4754 generic.go:334] "Generic (PLEG): container finished" podID="e4b1184d-230f-42bf-b872-6e09e2b876b1" containerID="86dad9b77fee446eaacd9eee40e11049528e59f2984a178edef0a447ba3de92a" exitCode=0 Jan 05 20:28:00 crc kubenswrapper[4754]: I0105 20:28:00.316998 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-29jjl" event={"ID":"e4b1184d-230f-42bf-b872-6e09e2b876b1","Type":"ContainerDied","Data":"86dad9b77fee446eaacd9eee40e11049528e59f2984a178edef0a447ba3de92a"} Jan 05 20:28:00 crc kubenswrapper[4754]: I0105 20:28:00.325060 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cjz5v" event={"ID":"bbb4b9d3-4443-4f5d-bcde-5bcafeed2218","Type":"ContainerStarted","Data":"7290e7e5847d9ee55afbfde61ff5b35b1e3d5390ffca00f2e1f5090f6ffb1751"} Jan 05 20:28:00 crc kubenswrapper[4754]: I0105 20:28:00.330191 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 05 20:28:00 crc kubenswrapper[4754]: I0105 20:28:00.347780 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8c45789f-qtvlr" event={"ID":"fae22c95-306e-48f4-905e-d701b30f6768","Type":"ContainerDied","Data":"a099afd553d5ce3360483f035f173732e89ae89e3b715f6a03b37ba2d2f65d92"} Jan 05 20:28:00 crc kubenswrapper[4754]: I0105 20:28:00.347857 4754 scope.go:117] "RemoveContainer" containerID="5ce8b46ecca788d3ab73433d348b09953816b3af4a032a7e3e9dc0f8e126f76a" Jan 05 20:28:00 crc kubenswrapper[4754]: I0105 20:28:00.348095 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-qtvlr" Jan 05 20:28:00 crc kubenswrapper[4754]: I0105 20:28:00.378236 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qz8zq" event={"ID":"0ccb69ed-7f2d-4341-a474-90ac1ed5a0fd","Type":"ContainerStarted","Data":"eac04497aeb8509a8ef80fca61e612c081b39a02cfa9ef48f3b2e6cbe94bcae4"} Jan 05 20:28:00 crc kubenswrapper[4754]: I0105 20:28:00.425271 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-qz8zq" podStartSLOduration=4.425250902 podStartE2EDuration="4.425250902s" podCreationTimestamp="2026-01-05 20:27:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:28:00.412754165 +0000 UTC m=+1367.121938039" watchObservedRunningTime="2026-01-05 20:28:00.425250902 +0000 UTC m=+1367.134434776" Jan 05 20:28:00 crc kubenswrapper[4754]: I0105 20:28:00.580738 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-qtvlr"] Jan 05 20:28:00 crc kubenswrapper[4754]: I0105 20:28:00.590978 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-qtvlr"] Jan 05 20:28:00 crc kubenswrapper[4754]: I0105 
20:28:00.688788 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 20:28:00 crc kubenswrapper[4754]: W0105 20:28:00.783050 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8904d45_6441_4443_8ddf_a645164f88d1.slice/crio-594a42c0453945f3d5d149252b2c55ae0a29dc0afd1b9e83448c7760f801e0d0 WatchSource:0}: Error finding container 594a42c0453945f3d5d149252b2c55ae0a29dc0afd1b9e83448c7760f801e0d0: Status 404 returned error can't find the container with id 594a42c0453945f3d5d149252b2c55ae0a29dc0afd1b9e83448c7760f801e0d0 Jan 05 20:28:01 crc kubenswrapper[4754]: I0105 20:28:01.085693 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-29jjl" Jan 05 20:28:01 crc kubenswrapper[4754]: I0105 20:28:01.158026 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e4b1184d-230f-42bf-b872-6e09e2b876b1-ovsdbserver-nb\") pod \"e4b1184d-230f-42bf-b872-6e09e2b876b1\" (UID: \"e4b1184d-230f-42bf-b872-6e09e2b876b1\") " Jan 05 20:28:01 crc kubenswrapper[4754]: I0105 20:28:01.158426 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e4b1184d-230f-42bf-b872-6e09e2b876b1-dns-svc\") pod \"e4b1184d-230f-42bf-b872-6e09e2b876b1\" (UID: \"e4b1184d-230f-42bf-b872-6e09e2b876b1\") " Jan 05 20:28:01 crc kubenswrapper[4754]: I0105 20:28:01.158551 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e4b1184d-230f-42bf-b872-6e09e2b876b1-ovsdbserver-sb\") pod \"e4b1184d-230f-42bf-b872-6e09e2b876b1\" (UID: \"e4b1184d-230f-42bf-b872-6e09e2b876b1\") " Jan 05 20:28:01 crc kubenswrapper[4754]: I0105 20:28:01.158628 4754 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-l26jc\" (UniqueName: \"kubernetes.io/projected/e4b1184d-230f-42bf-b872-6e09e2b876b1-kube-api-access-l26jc\") pod \"e4b1184d-230f-42bf-b872-6e09e2b876b1\" (UID: \"e4b1184d-230f-42bf-b872-6e09e2b876b1\") " Jan 05 20:28:01 crc kubenswrapper[4754]: I0105 20:28:01.158738 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4b1184d-230f-42bf-b872-6e09e2b876b1-config\") pod \"e4b1184d-230f-42bf-b872-6e09e2b876b1\" (UID: \"e4b1184d-230f-42bf-b872-6e09e2b876b1\") " Jan 05 20:28:01 crc kubenswrapper[4754]: I0105 20:28:01.158830 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e4b1184d-230f-42bf-b872-6e09e2b876b1-dns-swift-storage-0\") pod \"e4b1184d-230f-42bf-b872-6e09e2b876b1\" (UID: \"e4b1184d-230f-42bf-b872-6e09e2b876b1\") " Jan 05 20:28:01 crc kubenswrapper[4754]: I0105 20:28:01.185611 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4b1184d-230f-42bf-b872-6e09e2b876b1-kube-api-access-l26jc" (OuterVolumeSpecName: "kube-api-access-l26jc") pod "e4b1184d-230f-42bf-b872-6e09e2b876b1" (UID: "e4b1184d-230f-42bf-b872-6e09e2b876b1"). InnerVolumeSpecName "kube-api-access-l26jc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:28:01 crc kubenswrapper[4754]: I0105 20:28:01.196136 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4b1184d-230f-42bf-b872-6e09e2b876b1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e4b1184d-230f-42bf-b872-6e09e2b876b1" (UID: "e4b1184d-230f-42bf-b872-6e09e2b876b1"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:28:01 crc kubenswrapper[4754]: I0105 20:28:01.238250 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4b1184d-230f-42bf-b872-6e09e2b876b1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e4b1184d-230f-42bf-b872-6e09e2b876b1" (UID: "e4b1184d-230f-42bf-b872-6e09e2b876b1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:28:01 crc kubenswrapper[4754]: I0105 20:28:01.269486 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l26jc\" (UniqueName: \"kubernetes.io/projected/e4b1184d-230f-42bf-b872-6e09e2b876b1-kube-api-access-l26jc\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:01 crc kubenswrapper[4754]: I0105 20:28:01.269523 4754 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e4b1184d-230f-42bf-b872-6e09e2b876b1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:01 crc kubenswrapper[4754]: I0105 20:28:01.269531 4754 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e4b1184d-230f-42bf-b872-6e09e2b876b1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:01 crc kubenswrapper[4754]: I0105 20:28:01.272750 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4b1184d-230f-42bf-b872-6e09e2b876b1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e4b1184d-230f-42bf-b872-6e09e2b876b1" (UID: "e4b1184d-230f-42bf-b872-6e09e2b876b1"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:28:01 crc kubenswrapper[4754]: I0105 20:28:01.278981 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4b1184d-230f-42bf-b872-6e09e2b876b1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e4b1184d-230f-42bf-b872-6e09e2b876b1" (UID: "e4b1184d-230f-42bf-b872-6e09e2b876b1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:28:01 crc kubenswrapper[4754]: I0105 20:28:01.279091 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4b1184d-230f-42bf-b872-6e09e2b876b1-config" (OuterVolumeSpecName: "config") pod "e4b1184d-230f-42bf-b872-6e09e2b876b1" (UID: "e4b1184d-230f-42bf-b872-6e09e2b876b1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:28:01 crc kubenswrapper[4754]: I0105 20:28:01.286328 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 20:28:01 crc kubenswrapper[4754]: W0105 20:28:01.289544 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14ebd774_3303_4fe6_8c33_e07d3debb2ec.slice/crio-9dae9cd7c154ee9250c7bb17619e3e7c9650f17bdb59069342586a648d8fb4f4 WatchSource:0}: Error finding container 9dae9cd7c154ee9250c7bb17619e3e7c9650f17bdb59069342586a648d8fb4f4: Status 404 returned error can't find the container with id 9dae9cd7c154ee9250c7bb17619e3e7c9650f17bdb59069342586a648d8fb4f4 Jan 05 20:28:01 crc kubenswrapper[4754]: I0105 20:28:01.373439 4754 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e4b1184d-230f-42bf-b872-6e09e2b876b1-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:01 crc kubenswrapper[4754]: I0105 20:28:01.373474 4754 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/e4b1184d-230f-42bf-b872-6e09e2b876b1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:01 crc kubenswrapper[4754]: I0105 20:28:01.373504 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4b1184d-230f-42bf-b872-6e09e2b876b1-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:01 crc kubenswrapper[4754]: I0105 20:28:01.429441 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e8904d45-6441-4443-8ddf-a645164f88d1","Type":"ContainerStarted","Data":"594a42c0453945f3d5d149252b2c55ae0a29dc0afd1b9e83448c7760f801e0d0"} Jan 05 20:28:01 crc kubenswrapper[4754]: I0105 20:28:01.437167 4754 generic.go:334] "Generic (PLEG): container finished" podID="6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8" containerID="6367ed7894b31c7c6488a1589c52807f03f9b3cfa6b361a7c495c145a4fe94d7" exitCode=0 Jan 05 20:28:01 crc kubenswrapper[4754]: I0105 20:28:01.437236 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-vhwxz" event={"ID":"6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8","Type":"ContainerDied","Data":"6367ed7894b31c7c6488a1589c52807f03f9b3cfa6b361a7c495c145a4fe94d7"} Jan 05 20:28:01 crc kubenswrapper[4754]: I0105 20:28:01.437265 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-vhwxz" event={"ID":"6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8","Type":"ContainerStarted","Data":"ea586fa56e7f18578f3ba984603ea0df8fd9aa92bdfb1216e49c763854f4a3dd"} Jan 05 20:28:01 crc kubenswrapper[4754]: I0105 20:28:01.464464 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-29jjl" event={"ID":"e4b1184d-230f-42bf-b872-6e09e2b876b1","Type":"ContainerDied","Data":"86f469b21dfef74e0689a3f3de28ff7879005fc7a7e950b21cc263e240ba88a5"} Jan 05 20:28:01 crc kubenswrapper[4754]: I0105 20:28:01.464516 4754 scope.go:117] "RemoveContainer" 
containerID="86dad9b77fee446eaacd9eee40e11049528e59f2984a178edef0a447ba3de92a" Jan 05 20:28:01 crc kubenswrapper[4754]: I0105 20:28:01.464636 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-29jjl" Jan 05 20:28:01 crc kubenswrapper[4754]: I0105 20:28:01.482367 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"14ebd774-3303-4fe6-8c33-e07d3debb2ec","Type":"ContainerStarted","Data":"9dae9cd7c154ee9250c7bb17619e3e7c9650f17bdb59069342586a648d8fb4f4"} Jan 05 20:28:01 crc kubenswrapper[4754]: I0105 20:28:01.619704 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fae22c95-306e-48f4-905e-d701b30f6768" path="/var/lib/kubelet/pods/fae22c95-306e-48f4-905e-d701b30f6768/volumes" Jan 05 20:28:01 crc kubenswrapper[4754]: I0105 20:28:01.645039 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-29jjl"] Jan 05 20:28:01 crc kubenswrapper[4754]: I0105 20:28:01.658967 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-29jjl"] Jan 05 20:28:02 crc kubenswrapper[4754]: I0105 20:28:02.512377 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e8904d45-6441-4443-8ddf-a645164f88d1","Type":"ContainerStarted","Data":"95265c4cca8dc61daaedc72c3dd53830a039b5450e7ff205e06cb2a8afbd4d71"} Jan 05 20:28:02 crc kubenswrapper[4754]: I0105 20:28:02.515759 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-vhwxz" event={"ID":"6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8","Type":"ContainerStarted","Data":"548abe8522722aed6df97d7456eb0316018f24ad41f61ec974311a701c2b507f"} Jan 05 20:28:02 crc kubenswrapper[4754]: I0105 20:28:02.517072 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57c957c4ff-vhwxz" Jan 05 20:28:02 crc 
kubenswrapper[4754]: I0105 20:28:02.546332 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57c957c4ff-vhwxz" podStartSLOduration=4.546311935 podStartE2EDuration="4.546311935s" podCreationTimestamp="2026-01-05 20:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:28:02.534920797 +0000 UTC m=+1369.244104671" watchObservedRunningTime="2026-01-05 20:28:02.546311935 +0000 UTC m=+1369.255495809" Jan 05 20:28:03 crc kubenswrapper[4754]: I0105 20:28:03.538368 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"14ebd774-3303-4fe6-8c33-e07d3debb2ec","Type":"ContainerStarted","Data":"a9b604560644df6b952a53112fc5a4fbef8fe3150571d412f896cb49a1e9edf6"} Jan 05 20:28:03 crc kubenswrapper[4754]: I0105 20:28:03.607046 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4b1184d-230f-42bf-b872-6e09e2b876b1" path="/var/lib/kubelet/pods/e4b1184d-230f-42bf-b872-6e09e2b876b1/volumes" Jan 05 20:28:04 crc kubenswrapper[4754]: I0105 20:28:04.560403 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e8904d45-6441-4443-8ddf-a645164f88d1","Type":"ContainerStarted","Data":"1cd9038976e3199528a784fef7c736904ccf640eb660de043342e50892ea6709"} Jan 05 20:28:04 crc kubenswrapper[4754]: I0105 20:28:04.566339 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"14ebd774-3303-4fe6-8c33-e07d3debb2ec","Type":"ContainerStarted","Data":"4e309d3542deaecd07fa5cf0af366fd174970d36bf7f0f94caac6d8f6a2efaa4"} Jan 05 20:28:04 crc kubenswrapper[4754]: I0105 20:28:04.595124 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.595106167 podStartE2EDuration="6.595106167s" 
podCreationTimestamp="2026-01-05 20:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:28:04.592421007 +0000 UTC m=+1371.301604881" watchObservedRunningTime="2026-01-05 20:28:04.595106167 +0000 UTC m=+1371.304290041" Jan 05 20:28:04 crc kubenswrapper[4754]: I0105 20:28:04.646595 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.646574405 podStartE2EDuration="6.646574405s" podCreationTimestamp="2026-01-05 20:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:28:04.631677915 +0000 UTC m=+1371.340861799" watchObservedRunningTime="2026-01-05 20:28:04.646574405 +0000 UTC m=+1371.355758279" Jan 05 20:28:05 crc kubenswrapper[4754]: I0105 20:28:05.577256 4754 generic.go:334] "Generic (PLEG): container finished" podID="bbb4b9d3-4443-4f5d-bcde-5bcafeed2218" containerID="7290e7e5847d9ee55afbfde61ff5b35b1e3d5390ffca00f2e1f5090f6ffb1751" exitCode=0 Jan 05 20:28:05 crc kubenswrapper[4754]: I0105 20:28:05.578690 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cjz5v" event={"ID":"bbb4b9d3-4443-4f5d-bcde-5bcafeed2218","Type":"ContainerDied","Data":"7290e7e5847d9ee55afbfde61ff5b35b1e3d5390ffca00f2e1f5090f6ffb1751"} Jan 05 20:28:05 crc kubenswrapper[4754]: I0105 20:28:05.983669 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 20:28:06 crc kubenswrapper[4754]: I0105 20:28:06.043649 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 20:28:06 crc kubenswrapper[4754]: I0105 20:28:06.587869 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" 
podUID="e8904d45-6441-4443-8ddf-a645164f88d1" containerName="glance-log" containerID="cri-o://95265c4cca8dc61daaedc72c3dd53830a039b5450e7ff205e06cb2a8afbd4d71" gracePeriod=30 Jan 05 20:28:06 crc kubenswrapper[4754]: I0105 20:28:06.588008 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e8904d45-6441-4443-8ddf-a645164f88d1" containerName="glance-httpd" containerID="cri-o://1cd9038976e3199528a784fef7c736904ccf640eb660de043342e50892ea6709" gracePeriod=30 Jan 05 20:28:06 crc kubenswrapper[4754]: I0105 20:28:06.588384 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="14ebd774-3303-4fe6-8c33-e07d3debb2ec" containerName="glance-log" containerID="cri-o://a9b604560644df6b952a53112fc5a4fbef8fe3150571d412f896cb49a1e9edf6" gracePeriod=30 Jan 05 20:28:06 crc kubenswrapper[4754]: I0105 20:28:06.588763 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="14ebd774-3303-4fe6-8c33-e07d3debb2ec" containerName="glance-httpd" containerID="cri-o://4e309d3542deaecd07fa5cf0af366fd174970d36bf7f0f94caac6d8f6a2efaa4" gracePeriod=30 Jan 05 20:28:07 crc kubenswrapper[4754]: I0105 20:28:07.608347 4754 generic.go:334] "Generic (PLEG): container finished" podID="e8904d45-6441-4443-8ddf-a645164f88d1" containerID="1cd9038976e3199528a784fef7c736904ccf640eb660de043342e50892ea6709" exitCode=0 Jan 05 20:28:07 crc kubenswrapper[4754]: I0105 20:28:07.608754 4754 generic.go:334] "Generic (PLEG): container finished" podID="e8904d45-6441-4443-8ddf-a645164f88d1" containerID="95265c4cca8dc61daaedc72c3dd53830a039b5450e7ff205e06cb2a8afbd4d71" exitCode=143 Jan 05 20:28:07 crc kubenswrapper[4754]: I0105 20:28:07.612034 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"e8904d45-6441-4443-8ddf-a645164f88d1","Type":"ContainerDied","Data":"1cd9038976e3199528a784fef7c736904ccf640eb660de043342e50892ea6709"} Jan 05 20:28:07 crc kubenswrapper[4754]: I0105 20:28:07.612067 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e8904d45-6441-4443-8ddf-a645164f88d1","Type":"ContainerDied","Data":"95265c4cca8dc61daaedc72c3dd53830a039b5450e7ff205e06cb2a8afbd4d71"} Jan 05 20:28:07 crc kubenswrapper[4754]: I0105 20:28:07.613214 4754 generic.go:334] "Generic (PLEG): container finished" podID="14ebd774-3303-4fe6-8c33-e07d3debb2ec" containerID="4e309d3542deaecd07fa5cf0af366fd174970d36bf7f0f94caac6d8f6a2efaa4" exitCode=0 Jan 05 20:28:07 crc kubenswrapper[4754]: I0105 20:28:07.613237 4754 generic.go:334] "Generic (PLEG): container finished" podID="14ebd774-3303-4fe6-8c33-e07d3debb2ec" containerID="a9b604560644df6b952a53112fc5a4fbef8fe3150571d412f896cb49a1e9edf6" exitCode=143 Jan 05 20:28:07 crc kubenswrapper[4754]: I0105 20:28:07.613253 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"14ebd774-3303-4fe6-8c33-e07d3debb2ec","Type":"ContainerDied","Data":"4e309d3542deaecd07fa5cf0af366fd174970d36bf7f0f94caac6d8f6a2efaa4"} Jan 05 20:28:07 crc kubenswrapper[4754]: I0105 20:28:07.613269 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"14ebd774-3303-4fe6-8c33-e07d3debb2ec","Type":"ContainerDied","Data":"a9b604560644df6b952a53112fc5a4fbef8fe3150571d412f896cb49a1e9edf6"} Jan 05 20:28:08 crc kubenswrapper[4754]: I0105 20:28:08.935042 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-cjz5v" Jan 05 20:28:08 crc kubenswrapper[4754]: I0105 20:28:08.985613 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57c957c4ff-vhwxz" Jan 05 20:28:09 crc kubenswrapper[4754]: I0105 20:28:09.034188 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbb4b9d3-4443-4f5d-bcde-5bcafeed2218-config-data\") pod \"bbb4b9d3-4443-4f5d-bcde-5bcafeed2218\" (UID: \"bbb4b9d3-4443-4f5d-bcde-5bcafeed2218\") " Jan 05 20:28:09 crc kubenswrapper[4754]: I0105 20:28:09.034344 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb4b9d3-4443-4f5d-bcde-5bcafeed2218-combined-ca-bundle\") pod \"bbb4b9d3-4443-4f5d-bcde-5bcafeed2218\" (UID: \"bbb4b9d3-4443-4f5d-bcde-5bcafeed2218\") " Jan 05 20:28:09 crc kubenswrapper[4754]: I0105 20:28:09.034393 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bbb4b9d3-4443-4f5d-bcde-5bcafeed2218-fernet-keys\") pod \"bbb4b9d3-4443-4f5d-bcde-5bcafeed2218\" (UID: \"bbb4b9d3-4443-4f5d-bcde-5bcafeed2218\") " Jan 05 20:28:09 crc kubenswrapper[4754]: I0105 20:28:09.034435 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bbb4b9d3-4443-4f5d-bcde-5bcafeed2218-credential-keys\") pod \"bbb4b9d3-4443-4f5d-bcde-5bcafeed2218\" (UID: \"bbb4b9d3-4443-4f5d-bcde-5bcafeed2218\") " Jan 05 20:28:09 crc kubenswrapper[4754]: I0105 20:28:09.034473 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbb4b9d3-4443-4f5d-bcde-5bcafeed2218-scripts\") pod \"bbb4b9d3-4443-4f5d-bcde-5bcafeed2218\" (UID: \"bbb4b9d3-4443-4f5d-bcde-5bcafeed2218\") " Jan 05 
20:28:09 crc kubenswrapper[4754]: I0105 20:28:09.034534 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwcll\" (UniqueName: \"kubernetes.io/projected/bbb4b9d3-4443-4f5d-bcde-5bcafeed2218-kube-api-access-lwcll\") pod \"bbb4b9d3-4443-4f5d-bcde-5bcafeed2218\" (UID: \"bbb4b9d3-4443-4f5d-bcde-5bcafeed2218\") " Jan 05 20:28:09 crc kubenswrapper[4754]: I0105 20:28:09.049034 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbb4b9d3-4443-4f5d-bcde-5bcafeed2218-kube-api-access-lwcll" (OuterVolumeSpecName: "kube-api-access-lwcll") pod "bbb4b9d3-4443-4f5d-bcde-5bcafeed2218" (UID: "bbb4b9d3-4443-4f5d-bcde-5bcafeed2218"). InnerVolumeSpecName "kube-api-access-lwcll". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:28:09 crc kubenswrapper[4754]: I0105 20:28:09.052034 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbb4b9d3-4443-4f5d-bcde-5bcafeed2218-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "bbb4b9d3-4443-4f5d-bcde-5bcafeed2218" (UID: "bbb4b9d3-4443-4f5d-bcde-5bcafeed2218"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:28:09 crc kubenswrapper[4754]: I0105 20:28:09.052265 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbb4b9d3-4443-4f5d-bcde-5bcafeed2218-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "bbb4b9d3-4443-4f5d-bcde-5bcafeed2218" (UID: "bbb4b9d3-4443-4f5d-bcde-5bcafeed2218"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:28:09 crc kubenswrapper[4754]: I0105 20:28:09.067834 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbb4b9d3-4443-4f5d-bcde-5bcafeed2218-scripts" (OuterVolumeSpecName: "scripts") pod "bbb4b9d3-4443-4f5d-bcde-5bcafeed2218" (UID: "bbb4b9d3-4443-4f5d-bcde-5bcafeed2218"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:28:09 crc kubenswrapper[4754]: I0105 20:28:09.090320 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbb4b9d3-4443-4f5d-bcde-5bcafeed2218-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bbb4b9d3-4443-4f5d-bcde-5bcafeed2218" (UID: "bbb4b9d3-4443-4f5d-bcde-5bcafeed2218"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:28:09 crc kubenswrapper[4754]: I0105 20:28:09.091967 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-gjlm4"] Jan 05 20:28:09 crc kubenswrapper[4754]: I0105 20:28:09.092244 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d5b6d6b67-gjlm4" podUID="696009b0-0456-4c36-bc57-9c5ed0d81184" containerName="dnsmasq-dns" containerID="cri-o://1a5fe86202b5f228970f16f8a69574c04c19c57c1ba932d6380d4760d82170e4" gracePeriod=10 Jan 05 20:28:09 crc kubenswrapper[4754]: I0105 20:28:09.092725 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbb4b9d3-4443-4f5d-bcde-5bcafeed2218-config-data" (OuterVolumeSpecName: "config-data") pod "bbb4b9d3-4443-4f5d-bcde-5bcafeed2218" (UID: "bbb4b9d3-4443-4f5d-bcde-5bcafeed2218"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:28:09 crc kubenswrapper[4754]: I0105 20:28:09.140830 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbb4b9d3-4443-4f5d-bcde-5bcafeed2218-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:09 crc kubenswrapper[4754]: I0105 20:28:09.140858 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb4b9d3-4443-4f5d-bcde-5bcafeed2218-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:09 crc kubenswrapper[4754]: I0105 20:28:09.140872 4754 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bbb4b9d3-4443-4f5d-bcde-5bcafeed2218-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:09 crc kubenswrapper[4754]: I0105 20:28:09.140880 4754 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bbb4b9d3-4443-4f5d-bcde-5bcafeed2218-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:09 crc kubenswrapper[4754]: I0105 20:28:09.141599 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbb4b9d3-4443-4f5d-bcde-5bcafeed2218-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:09 crc kubenswrapper[4754]: I0105 20:28:09.141613 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwcll\" (UniqueName: \"kubernetes.io/projected/bbb4b9d3-4443-4f5d-bcde-5bcafeed2218-kube-api-access-lwcll\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:09 crc kubenswrapper[4754]: I0105 20:28:09.664320 4754 generic.go:334] "Generic (PLEG): container finished" podID="696009b0-0456-4c36-bc57-9c5ed0d81184" containerID="1a5fe86202b5f228970f16f8a69574c04c19c57c1ba932d6380d4760d82170e4" exitCode=0 Jan 05 20:28:09 crc kubenswrapper[4754]: I0105 20:28:09.664870 4754 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-gjlm4" event={"ID":"696009b0-0456-4c36-bc57-9c5ed0d81184","Type":"ContainerDied","Data":"1a5fe86202b5f228970f16f8a69574c04c19c57c1ba932d6380d4760d82170e4"} Jan 05 20:28:09 crc kubenswrapper[4754]: I0105 20:28:09.673475 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cjz5v" event={"ID":"bbb4b9d3-4443-4f5d-bcde-5bcafeed2218","Type":"ContainerDied","Data":"fa8e35eb6df9a22619766d2e11466c678175dd313928d5916109b2dcaf0e0d5c"} Jan 05 20:28:09 crc kubenswrapper[4754]: I0105 20:28:09.673536 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa8e35eb6df9a22619766d2e11466c678175dd313928d5916109b2dcaf0e0d5c" Jan 05 20:28:09 crc kubenswrapper[4754]: I0105 20:28:09.673601 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-cjz5v" Jan 05 20:28:10 crc kubenswrapper[4754]: I0105 20:28:10.044514 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-cjz5v"] Jan 05 20:28:10 crc kubenswrapper[4754]: I0105 20:28:10.055214 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-cjz5v"] Jan 05 20:28:10 crc kubenswrapper[4754]: I0105 20:28:10.122966 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-k46bt"] Jan 05 20:28:10 crc kubenswrapper[4754]: E0105 20:28:10.123555 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbb4b9d3-4443-4f5d-bcde-5bcafeed2218" containerName="keystone-bootstrap" Jan 05 20:28:10 crc kubenswrapper[4754]: I0105 20:28:10.123579 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbb4b9d3-4443-4f5d-bcde-5bcafeed2218" containerName="keystone-bootstrap" Jan 05 20:28:10 crc kubenswrapper[4754]: E0105 20:28:10.123596 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fae22c95-306e-48f4-905e-d701b30f6768" 
containerName="init" Jan 05 20:28:10 crc kubenswrapper[4754]: I0105 20:28:10.123604 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="fae22c95-306e-48f4-905e-d701b30f6768" containerName="init" Jan 05 20:28:10 crc kubenswrapper[4754]: E0105 20:28:10.123616 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4b1184d-230f-42bf-b872-6e09e2b876b1" containerName="init" Jan 05 20:28:10 crc kubenswrapper[4754]: I0105 20:28:10.123622 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4b1184d-230f-42bf-b872-6e09e2b876b1" containerName="init" Jan 05 20:28:10 crc kubenswrapper[4754]: I0105 20:28:10.123853 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="fae22c95-306e-48f4-905e-d701b30f6768" containerName="init" Jan 05 20:28:10 crc kubenswrapper[4754]: I0105 20:28:10.123871 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4b1184d-230f-42bf-b872-6e09e2b876b1" containerName="init" Jan 05 20:28:10 crc kubenswrapper[4754]: I0105 20:28:10.123890 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbb4b9d3-4443-4f5d-bcde-5bcafeed2218" containerName="keystone-bootstrap" Jan 05 20:28:10 crc kubenswrapper[4754]: I0105 20:28:10.124695 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-k46bt" Jan 05 20:28:10 crc kubenswrapper[4754]: I0105 20:28:10.127810 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 05 20:28:10 crc kubenswrapper[4754]: I0105 20:28:10.128071 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 05 20:28:10 crc kubenswrapper[4754]: I0105 20:28:10.128185 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 05 20:28:10 crc kubenswrapper[4754]: I0105 20:28:10.128942 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 05 20:28:10 crc kubenswrapper[4754]: I0105 20:28:10.129452 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-dkn46" Jan 05 20:28:10 crc kubenswrapper[4754]: I0105 20:28:10.138413 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-k46bt"] Jan 05 20:28:10 crc kubenswrapper[4754]: I0105 20:28:10.271161 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/de4b9353-df59-4a87-80f7-26a1d1637032-credential-keys\") pod \"keystone-bootstrap-k46bt\" (UID: \"de4b9353-df59-4a87-80f7-26a1d1637032\") " pod="openstack/keystone-bootstrap-k46bt" Jan 05 20:28:10 crc kubenswrapper[4754]: I0105 20:28:10.271533 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h59x\" (UniqueName: \"kubernetes.io/projected/de4b9353-df59-4a87-80f7-26a1d1637032-kube-api-access-4h59x\") pod \"keystone-bootstrap-k46bt\" (UID: \"de4b9353-df59-4a87-80f7-26a1d1637032\") " pod="openstack/keystone-bootstrap-k46bt" Jan 05 20:28:10 crc kubenswrapper[4754]: I0105 20:28:10.271603 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de4b9353-df59-4a87-80f7-26a1d1637032-scripts\") pod \"keystone-bootstrap-k46bt\" (UID: \"de4b9353-df59-4a87-80f7-26a1d1637032\") " pod="openstack/keystone-bootstrap-k46bt" Jan 05 20:28:10 crc kubenswrapper[4754]: I0105 20:28:10.271630 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/de4b9353-df59-4a87-80f7-26a1d1637032-fernet-keys\") pod \"keystone-bootstrap-k46bt\" (UID: \"de4b9353-df59-4a87-80f7-26a1d1637032\") " pod="openstack/keystone-bootstrap-k46bt" Jan 05 20:28:10 crc kubenswrapper[4754]: I0105 20:28:10.271737 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de4b9353-df59-4a87-80f7-26a1d1637032-config-data\") pod \"keystone-bootstrap-k46bt\" (UID: \"de4b9353-df59-4a87-80f7-26a1d1637032\") " pod="openstack/keystone-bootstrap-k46bt" Jan 05 20:28:10 crc kubenswrapper[4754]: I0105 20:28:10.271772 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de4b9353-df59-4a87-80f7-26a1d1637032-combined-ca-bundle\") pod \"keystone-bootstrap-k46bt\" (UID: \"de4b9353-df59-4a87-80f7-26a1d1637032\") " pod="openstack/keystone-bootstrap-k46bt" Jan 05 20:28:10 crc kubenswrapper[4754]: I0105 20:28:10.378019 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de4b9353-df59-4a87-80f7-26a1d1637032-combined-ca-bundle\") pod \"keystone-bootstrap-k46bt\" (UID: \"de4b9353-df59-4a87-80f7-26a1d1637032\") " pod="openstack/keystone-bootstrap-k46bt" Jan 05 20:28:10 crc kubenswrapper[4754]: I0105 20:28:10.378182 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/de4b9353-df59-4a87-80f7-26a1d1637032-credential-keys\") pod \"keystone-bootstrap-k46bt\" (UID: \"de4b9353-df59-4a87-80f7-26a1d1637032\") " pod="openstack/keystone-bootstrap-k46bt" Jan 05 20:28:10 crc kubenswrapper[4754]: I0105 20:28:10.378270 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4h59x\" (UniqueName: \"kubernetes.io/projected/de4b9353-df59-4a87-80f7-26a1d1637032-kube-api-access-4h59x\") pod \"keystone-bootstrap-k46bt\" (UID: \"de4b9353-df59-4a87-80f7-26a1d1637032\") " pod="openstack/keystone-bootstrap-k46bt" Jan 05 20:28:10 crc kubenswrapper[4754]: I0105 20:28:10.378329 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de4b9353-df59-4a87-80f7-26a1d1637032-scripts\") pod \"keystone-bootstrap-k46bt\" (UID: \"de4b9353-df59-4a87-80f7-26a1d1637032\") " pod="openstack/keystone-bootstrap-k46bt" Jan 05 20:28:10 crc kubenswrapper[4754]: I0105 20:28:10.378359 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/de4b9353-df59-4a87-80f7-26a1d1637032-fernet-keys\") pod \"keystone-bootstrap-k46bt\" (UID: \"de4b9353-df59-4a87-80f7-26a1d1637032\") " pod="openstack/keystone-bootstrap-k46bt" Jan 05 20:28:10 crc kubenswrapper[4754]: I0105 20:28:10.378466 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de4b9353-df59-4a87-80f7-26a1d1637032-config-data\") pod \"keystone-bootstrap-k46bt\" (UID: \"de4b9353-df59-4a87-80f7-26a1d1637032\") " pod="openstack/keystone-bootstrap-k46bt" Jan 05 20:28:10 crc kubenswrapper[4754]: I0105 20:28:10.395193 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de4b9353-df59-4a87-80f7-26a1d1637032-combined-ca-bundle\") pod \"keystone-bootstrap-k46bt\" (UID: 
\"de4b9353-df59-4a87-80f7-26a1d1637032\") " pod="openstack/keystone-bootstrap-k46bt" Jan 05 20:28:10 crc kubenswrapper[4754]: I0105 20:28:10.396696 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de4b9353-df59-4a87-80f7-26a1d1637032-scripts\") pod \"keystone-bootstrap-k46bt\" (UID: \"de4b9353-df59-4a87-80f7-26a1d1637032\") " pod="openstack/keystone-bootstrap-k46bt" Jan 05 20:28:10 crc kubenswrapper[4754]: I0105 20:28:10.402555 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4h59x\" (UniqueName: \"kubernetes.io/projected/de4b9353-df59-4a87-80f7-26a1d1637032-kube-api-access-4h59x\") pod \"keystone-bootstrap-k46bt\" (UID: \"de4b9353-df59-4a87-80f7-26a1d1637032\") " pod="openstack/keystone-bootstrap-k46bt" Jan 05 20:28:10 crc kubenswrapper[4754]: I0105 20:28:10.407922 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de4b9353-df59-4a87-80f7-26a1d1637032-config-data\") pod \"keystone-bootstrap-k46bt\" (UID: \"de4b9353-df59-4a87-80f7-26a1d1637032\") " pod="openstack/keystone-bootstrap-k46bt" Jan 05 20:28:10 crc kubenswrapper[4754]: I0105 20:28:10.408939 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/de4b9353-df59-4a87-80f7-26a1d1637032-fernet-keys\") pod \"keystone-bootstrap-k46bt\" (UID: \"de4b9353-df59-4a87-80f7-26a1d1637032\") " pod="openstack/keystone-bootstrap-k46bt" Jan 05 20:28:10 crc kubenswrapper[4754]: I0105 20:28:10.417159 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/de4b9353-df59-4a87-80f7-26a1d1637032-credential-keys\") pod \"keystone-bootstrap-k46bt\" (UID: \"de4b9353-df59-4a87-80f7-26a1d1637032\") " pod="openstack/keystone-bootstrap-k46bt" Jan 05 20:28:10 crc kubenswrapper[4754]: I0105 20:28:10.448813 4754 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-k46bt" Jan 05 20:28:10 crc kubenswrapper[4754]: I0105 20:28:10.982523 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-gjlm4" podUID="696009b0-0456-4c36-bc57-9c5ed0d81184" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.177:5353: connect: connection refused" Jan 05 20:28:11 crc kubenswrapper[4754]: I0105 20:28:11.606552 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbb4b9d3-4443-4f5d-bcde-5bcafeed2218" path="/var/lib/kubelet/pods/bbb4b9d3-4443-4f5d-bcde-5bcafeed2218/volumes" Jan 05 20:28:15 crc kubenswrapper[4754]: E0105 20:28:15.115149 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Jan 05 20:28:15 crc kubenswrapper[4754]: E0105 20:28:15.116126 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-clh2r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
placement-db-sync-xbm2l_openstack(32c0a4c6-d974-402e-bdf8-13c8b2e91b3f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 05 20:28:15 crc kubenswrapper[4754]: E0105 20:28:15.117993 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-xbm2l" podUID="32c0a4c6-d974-402e-bdf8-13c8b2e91b3f" Jan 05 20:28:15 crc kubenswrapper[4754]: E0105 20:28:15.765633 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-xbm2l" podUID="32c0a4c6-d974-402e-bdf8-13c8b2e91b3f" Jan 05 20:28:15 crc kubenswrapper[4754]: I0105 20:28:15.981603 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-gjlm4" podUID="696009b0-0456-4c36-bc57-9c5ed0d81184" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.177:5353: connect: connection refused" Jan 05 20:28:20 crc kubenswrapper[4754]: I0105 20:28:20.981856 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-gjlm4" podUID="696009b0-0456-4c36-bc57-9c5ed0d81184" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.177:5353: connect: connection refused" Jan 05 20:28:20 crc kubenswrapper[4754]: I0105 20:28:20.983085 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d5b6d6b67-gjlm4" Jan 05 20:28:24 crc kubenswrapper[4754]: I0105 20:28:24.888112 4754 generic.go:334] "Generic (PLEG): container finished" podID="0ccb69ed-7f2d-4341-a474-90ac1ed5a0fd" 
containerID="eac04497aeb8509a8ef80fca61e612c081b39a02cfa9ef48f3b2e6cbe94bcae4" exitCode=0 Jan 05 20:28:24 crc kubenswrapper[4754]: I0105 20:28:24.888343 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qz8zq" event={"ID":"0ccb69ed-7f2d-4341-a474-90ac1ed5a0fd","Type":"ContainerDied","Data":"eac04497aeb8509a8ef80fca61e612c081b39a02cfa9ef48f3b2e6cbe94bcae4"} Jan 05 20:28:25 crc kubenswrapper[4754]: I0105 20:28:25.982315 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-gjlm4" podUID="696009b0-0456-4c36-bc57-9c5ed0d81184" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.177:5353: connect: connection refused" Jan 05 20:28:29 crc kubenswrapper[4754]: I0105 20:28:29.704663 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 05 20:28:29 crc kubenswrapper[4754]: I0105 20:28:29.705008 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 05 20:28:30 crc kubenswrapper[4754]: I0105 20:28:30.330288 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 05 20:28:30 crc kubenswrapper[4754]: I0105 20:28:30.330402 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 05 20:28:30 crc kubenswrapper[4754]: I0105 20:28:30.924091 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 05 20:28:30 crc kubenswrapper[4754]: I0105 20:28:30.931819 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 05 20:28:30 crc kubenswrapper[4754]: I0105 20:28:30.947233 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnvzn\" (UniqueName: \"kubernetes.io/projected/14ebd774-3303-4fe6-8c33-e07d3debb2ec-kube-api-access-nnvzn\") pod \"14ebd774-3303-4fe6-8c33-e07d3debb2ec\" (UID: \"14ebd774-3303-4fe6-8c33-e07d3debb2ec\") " Jan 05 20:28:30 crc kubenswrapper[4754]: I0105 20:28:30.947318 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/14ebd774-3303-4fe6-8c33-e07d3debb2ec-httpd-run\") pod \"14ebd774-3303-4fe6-8c33-e07d3debb2ec\" (UID: \"14ebd774-3303-4fe6-8c33-e07d3debb2ec\") " Jan 05 20:28:30 crc kubenswrapper[4754]: I0105 20:28:30.947426 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14ebd774-3303-4fe6-8c33-e07d3debb2ec-scripts\") pod \"14ebd774-3303-4fe6-8c33-e07d3debb2ec\" (UID: \"14ebd774-3303-4fe6-8c33-e07d3debb2ec\") " Jan 05 20:28:30 crc kubenswrapper[4754]: I0105 20:28:30.947509 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14ebd774-3303-4fe6-8c33-e07d3debb2ec-config-data\") pod \"14ebd774-3303-4fe6-8c33-e07d3debb2ec\" (UID: \"14ebd774-3303-4fe6-8c33-e07d3debb2ec\") " Jan 05 20:28:30 crc kubenswrapper[4754]: I0105 20:28:30.947678 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-084c4888-b72a-436d-98f1-8bfa7cd3c357\") pod \"14ebd774-3303-4fe6-8c33-e07d3debb2ec\" (UID: \"14ebd774-3303-4fe6-8c33-e07d3debb2ec\") " Jan 05 20:28:30 crc kubenswrapper[4754]: I0105 20:28:30.947775 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/14ebd774-3303-4fe6-8c33-e07d3debb2ec-combined-ca-bundle\") pod \"14ebd774-3303-4fe6-8c33-e07d3debb2ec\" (UID: \"14ebd774-3303-4fe6-8c33-e07d3debb2ec\") " Jan 05 20:28:30 crc kubenswrapper[4754]: I0105 20:28:30.947826 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14ebd774-3303-4fe6-8c33-e07d3debb2ec-logs\") pod \"14ebd774-3303-4fe6-8c33-e07d3debb2ec\" (UID: \"14ebd774-3303-4fe6-8c33-e07d3debb2ec\") " Jan 05 20:28:30 crc kubenswrapper[4754]: I0105 20:28:30.947945 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14ebd774-3303-4fe6-8c33-e07d3debb2ec-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "14ebd774-3303-4fe6-8c33-e07d3debb2ec" (UID: "14ebd774-3303-4fe6-8c33-e07d3debb2ec"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:28:30 crc kubenswrapper[4754]: I0105 20:28:30.950796 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14ebd774-3303-4fe6-8c33-e07d3debb2ec-logs" (OuterVolumeSpecName: "logs") pod "14ebd774-3303-4fe6-8c33-e07d3debb2ec" (UID: "14ebd774-3303-4fe6-8c33-e07d3debb2ec"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:28:30 crc kubenswrapper[4754]: I0105 20:28:30.953167 4754 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14ebd774-3303-4fe6-8c33-e07d3debb2ec-logs\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:30 crc kubenswrapper[4754]: I0105 20:28:30.953412 4754 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/14ebd774-3303-4fe6-8c33-e07d3debb2ec-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:30 crc kubenswrapper[4754]: I0105 20:28:30.969590 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14ebd774-3303-4fe6-8c33-e07d3debb2ec-scripts" (OuterVolumeSpecName: "scripts") pod "14ebd774-3303-4fe6-8c33-e07d3debb2ec" (UID: "14ebd774-3303-4fe6-8c33-e07d3debb2ec"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:28:30 crc kubenswrapper[4754]: I0105 20:28:30.982448 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"14ebd774-3303-4fe6-8c33-e07d3debb2ec","Type":"ContainerDied","Data":"9dae9cd7c154ee9250c7bb17619e3e7c9650f17bdb59069342586a648d8fb4f4"} Jan 05 20:28:30 crc kubenswrapper[4754]: I0105 20:28:30.982514 4754 scope.go:117] "RemoveContainer" containerID="4e309d3542deaecd07fa5cf0af366fd174970d36bf7f0f94caac6d8f6a2efaa4" Jan 05 20:28:30 crc kubenswrapper[4754]: I0105 20:28:30.982523 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.000742 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e8904d45-6441-4443-8ddf-a645164f88d1","Type":"ContainerDied","Data":"594a42c0453945f3d5d149252b2c55ae0a29dc0afd1b9e83448c7760f801e0d0"} Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.000852 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.005472 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14ebd774-3303-4fe6-8c33-e07d3debb2ec-kube-api-access-nnvzn" (OuterVolumeSpecName: "kube-api-access-nnvzn") pod "14ebd774-3303-4fe6-8c33-e07d3debb2ec" (UID: "14ebd774-3303-4fe6-8c33-e07d3debb2ec"). InnerVolumeSpecName "kube-api-access-nnvzn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.002141 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-084c4888-b72a-436d-98f1-8bfa7cd3c357" (OuterVolumeSpecName: "glance") pod "14ebd774-3303-4fe6-8c33-e07d3debb2ec" (UID: "14ebd774-3303-4fe6-8c33-e07d3debb2ec"). InnerVolumeSpecName "pvc-084c4888-b72a-436d-98f1-8bfa7cd3c357". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.024561 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14ebd774-3303-4fe6-8c33-e07d3debb2ec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "14ebd774-3303-4fe6-8c33-e07d3debb2ec" (UID: "14ebd774-3303-4fe6-8c33-e07d3debb2ec"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.055159 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8904d45-6441-4443-8ddf-a645164f88d1-combined-ca-bundle\") pod \"e8904d45-6441-4443-8ddf-a645164f88d1\" (UID: \"e8904d45-6441-4443-8ddf-a645164f88d1\") " Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.055550 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5cvm\" (UniqueName: \"kubernetes.io/projected/e8904d45-6441-4443-8ddf-a645164f88d1-kube-api-access-q5cvm\") pod \"e8904d45-6441-4443-8ddf-a645164f88d1\" (UID: \"e8904d45-6441-4443-8ddf-a645164f88d1\") " Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.055643 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8904d45-6441-4443-8ddf-a645164f88d1-scripts\") pod \"e8904d45-6441-4443-8ddf-a645164f88d1\" (UID: \"e8904d45-6441-4443-8ddf-a645164f88d1\") " Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.055694 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e8904d45-6441-4443-8ddf-a645164f88d1-httpd-run\") pod \"e8904d45-6441-4443-8ddf-a645164f88d1\" (UID: \"e8904d45-6441-4443-8ddf-a645164f88d1\") " Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.055928 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8904d45-6441-4443-8ddf-a645164f88d1-logs\") pod \"e8904d45-6441-4443-8ddf-a645164f88d1\" (UID: \"e8904d45-6441-4443-8ddf-a645164f88d1\") " Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.055990 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e8904d45-6441-4443-8ddf-a645164f88d1-config-data\") pod \"e8904d45-6441-4443-8ddf-a645164f88d1\" (UID: \"e8904d45-6441-4443-8ddf-a645164f88d1\") " Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.056186 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e52a243-20bd-4eec-aa78-a75239804279\") pod \"e8904d45-6441-4443-8ddf-a645164f88d1\" (UID: \"e8904d45-6441-4443-8ddf-a645164f88d1\") " Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.056827 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14ebd774-3303-4fe6-8c33-e07d3debb2ec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.056852 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnvzn\" (UniqueName: \"kubernetes.io/projected/14ebd774-3303-4fe6-8c33-e07d3debb2ec-kube-api-access-nnvzn\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.056866 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14ebd774-3303-4fe6-8c33-e07d3debb2ec-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.056892 4754 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-084c4888-b72a-436d-98f1-8bfa7cd3c357\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-084c4888-b72a-436d-98f1-8bfa7cd3c357\") on node \"crc\" " Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.057363 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8904d45-6441-4443-8ddf-a645164f88d1-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e8904d45-6441-4443-8ddf-a645164f88d1" (UID: "e8904d45-6441-4443-8ddf-a645164f88d1"). 
InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.058846 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8904d45-6441-4443-8ddf-a645164f88d1-logs" (OuterVolumeSpecName: "logs") pod "e8904d45-6441-4443-8ddf-a645164f88d1" (UID: "e8904d45-6441-4443-8ddf-a645164f88d1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.062003 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8904d45-6441-4443-8ddf-a645164f88d1-scripts" (OuterVolumeSpecName: "scripts") pod "e8904d45-6441-4443-8ddf-a645164f88d1" (UID: "e8904d45-6441-4443-8ddf-a645164f88d1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.065648 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8904d45-6441-4443-8ddf-a645164f88d1-kube-api-access-q5cvm" (OuterVolumeSpecName: "kube-api-access-q5cvm") pod "e8904d45-6441-4443-8ddf-a645164f88d1" (UID: "e8904d45-6441-4443-8ddf-a645164f88d1"). InnerVolumeSpecName "kube-api-access-q5cvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.090186 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14ebd774-3303-4fe6-8c33-e07d3debb2ec-config-data" (OuterVolumeSpecName: "config-data") pod "14ebd774-3303-4fe6-8c33-e07d3debb2ec" (UID: "14ebd774-3303-4fe6-8c33-e07d3debb2ec"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.116075 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e52a243-20bd-4eec-aa78-a75239804279" (OuterVolumeSpecName: "glance") pod "e8904d45-6441-4443-8ddf-a645164f88d1" (UID: "e8904d45-6441-4443-8ddf-a645164f88d1"). InnerVolumeSpecName "pvc-5e52a243-20bd-4eec-aa78-a75239804279". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.122071 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8904d45-6441-4443-8ddf-a645164f88d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e8904d45-6441-4443-8ddf-a645164f88d1" (UID: "e8904d45-6441-4443-8ddf-a645164f88d1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.132960 4754 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.133117 4754 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-084c4888-b72a-436d-98f1-8bfa7cd3c357" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-084c4888-b72a-436d-98f1-8bfa7cd3c357") on node "crc" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.159258 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14ebd774-3303-4fe6-8c33-e07d3debb2ec-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.159308 4754 reconciler_common.go:293] "Volume detached for volume \"pvc-084c4888-b72a-436d-98f1-8bfa7cd3c357\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-084c4888-b72a-436d-98f1-8bfa7cd3c357\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.159346 4754 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-5e52a243-20bd-4eec-aa78-a75239804279\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e52a243-20bd-4eec-aa78-a75239804279\") on node \"crc\" " Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.159359 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8904d45-6441-4443-8ddf-a645164f88d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.159371 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5cvm\" (UniqueName: \"kubernetes.io/projected/e8904d45-6441-4443-8ddf-a645164f88d1-kube-api-access-q5cvm\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.159383 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8904d45-6441-4443-8ddf-a645164f88d1-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 
20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.159392 4754 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e8904d45-6441-4443-8ddf-a645164f88d1-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.159402 4754 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8904d45-6441-4443-8ddf-a645164f88d1-logs\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.169156 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8904d45-6441-4443-8ddf-a645164f88d1-config-data" (OuterVolumeSpecName: "config-data") pod "e8904d45-6441-4443-8ddf-a645164f88d1" (UID: "e8904d45-6441-4443-8ddf-a645164f88d1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.183394 4754 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.183559 4754 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-5e52a243-20bd-4eec-aa78-a75239804279" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e52a243-20bd-4eec-aa78-a75239804279") on node "crc" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.262022 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8904d45-6441-4443-8ddf-a645164f88d1-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.262058 4754 reconciler_common.go:293] "Volume detached for volume \"pvc-5e52a243-20bd-4eec-aa78-a75239804279\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e52a243-20bd-4eec-aa78-a75239804279\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.324254 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.389275 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.415463 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 20:28:31 crc kubenswrapper[4754]: E0105 20:28:31.416864 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14ebd774-3303-4fe6-8c33-e07d3debb2ec" containerName="glance-log" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.416882 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="14ebd774-3303-4fe6-8c33-e07d3debb2ec" containerName="glance-log" Jan 05 20:28:31 crc kubenswrapper[4754]: E0105 20:28:31.416891 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8904d45-6441-4443-8ddf-a645164f88d1" containerName="glance-log" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 
20:28:31.416897 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8904d45-6441-4443-8ddf-a645164f88d1" containerName="glance-log" Jan 05 20:28:31 crc kubenswrapper[4754]: E0105 20:28:31.416911 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8904d45-6441-4443-8ddf-a645164f88d1" containerName="glance-httpd" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.416916 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8904d45-6441-4443-8ddf-a645164f88d1" containerName="glance-httpd" Jan 05 20:28:31 crc kubenswrapper[4754]: E0105 20:28:31.416927 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14ebd774-3303-4fe6-8c33-e07d3debb2ec" containerName="glance-httpd" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.416933 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="14ebd774-3303-4fe6-8c33-e07d3debb2ec" containerName="glance-httpd" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.417149 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="14ebd774-3303-4fe6-8c33-e07d3debb2ec" containerName="glance-log" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.417163 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="14ebd774-3303-4fe6-8c33-e07d3debb2ec" containerName="glance-httpd" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.417178 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8904d45-6441-4443-8ddf-a645164f88d1" containerName="glance-log" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.417186 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8904d45-6441-4443-8ddf-a645164f88d1" containerName="glance-httpd" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.418315 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.421233 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.421411 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-vjz5d" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.423568 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 05 20:28:31 crc kubenswrapper[4754]: E0105 20:28:31.424273 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Jan 05 20:28:31 crc kubenswrapper[4754]: E0105 20:28:31.424428 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n54bh578h59dh9h55h5dfh5fch579hc4h64dh584h67dhd7h64h5d7h589hb4h646h5dh57bh5ch55fhfchb4h9dh698h8bh7hfdh554h87h6dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hgt4t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e4349961-956a-4a53-a1a5-11fcffcbd0f7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.424663 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.433415 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.449505 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.471854 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.487531 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.489602 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.490959 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-qz8zq" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.493093 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.493422 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.521733 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.575615 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d877e0f6-019c-4b97-8a36-9a210b9e4233-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d877e0f6-019c-4b97-8a36-9a210b9e4233\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.575716 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-084c4888-b72a-436d-98f1-8bfa7cd3c357\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-084c4888-b72a-436d-98f1-8bfa7cd3c357\") pod \"glance-default-internal-api-0\" (UID: \"d877e0f6-019c-4b97-8a36-9a210b9e4233\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.575805 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d877e0f6-019c-4b97-8a36-9a210b9e4233-logs\") pod \"glance-default-internal-api-0\" (UID: \"d877e0f6-019c-4b97-8a36-9a210b9e4233\") " 
pod="openstack/glance-default-internal-api-0" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.575875 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d877e0f6-019c-4b97-8a36-9a210b9e4233-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d877e0f6-019c-4b97-8a36-9a210b9e4233\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.575911 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d877e0f6-019c-4b97-8a36-9a210b9e4233-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d877e0f6-019c-4b97-8a36-9a210b9e4233\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.575926 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvcn5\" (UniqueName: \"kubernetes.io/projected/d877e0f6-019c-4b97-8a36-9a210b9e4233-kube-api-access-rvcn5\") pod \"glance-default-internal-api-0\" (UID: \"d877e0f6-019c-4b97-8a36-9a210b9e4233\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.575944 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d877e0f6-019c-4b97-8a36-9a210b9e4233-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d877e0f6-019c-4b97-8a36-9a210b9e4233\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.575964 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d877e0f6-019c-4b97-8a36-9a210b9e4233-scripts\") pod \"glance-default-internal-api-0\" 
(UID: \"d877e0f6-019c-4b97-8a36-9a210b9e4233\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.613795 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14ebd774-3303-4fe6-8c33-e07d3debb2ec" path="/var/lib/kubelet/pods/14ebd774-3303-4fe6-8c33-e07d3debb2ec/volumes" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.638641 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8904d45-6441-4443-8ddf-a645164f88d1" path="/var/lib/kubelet/pods/e8904d45-6441-4443-8ddf-a645164f88d1/volumes" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.678328 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0ccb69ed-7f2d-4341-a474-90ac1ed5a0fd-config\") pod \"0ccb69ed-7f2d-4341-a474-90ac1ed5a0fd\" (UID: \"0ccb69ed-7f2d-4341-a474-90ac1ed5a0fd\") " Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.678491 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hx2vn\" (UniqueName: \"kubernetes.io/projected/0ccb69ed-7f2d-4341-a474-90ac1ed5a0fd-kube-api-access-hx2vn\") pod \"0ccb69ed-7f2d-4341-a474-90ac1ed5a0fd\" (UID: \"0ccb69ed-7f2d-4341-a474-90ac1ed5a0fd\") " Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.678643 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ccb69ed-7f2d-4341-a474-90ac1ed5a0fd-combined-ca-bundle\") pod \"0ccb69ed-7f2d-4341-a474-90ac1ed5a0fd\" (UID: \"0ccb69ed-7f2d-4341-a474-90ac1ed5a0fd\") " Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.678914 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9047860d-2da5-476b-9340-38244322fb95-config-data\") pod \"glance-default-external-api-0\" (UID: 
\"9047860d-2da5-476b-9340-38244322fb95\") " pod="openstack/glance-default-external-api-0" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.678969 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9047860d-2da5-476b-9340-38244322fb95-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9047860d-2da5-476b-9340-38244322fb95\") " pod="openstack/glance-default-external-api-0" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.679001 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-084c4888-b72a-436d-98f1-8bfa7cd3c357\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-084c4888-b72a-436d-98f1-8bfa7cd3c357\") pod \"glance-default-internal-api-0\" (UID: \"d877e0f6-019c-4b97-8a36-9a210b9e4233\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.679046 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5e52a243-20bd-4eec-aa78-a75239804279\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e52a243-20bd-4eec-aa78-a75239804279\") pod \"glance-default-external-api-0\" (UID: \"9047860d-2da5-476b-9340-38244322fb95\") " pod="openstack/glance-default-external-api-0" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.679098 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9047860d-2da5-476b-9340-38244322fb95-logs\") pod \"glance-default-external-api-0\" (UID: \"9047860d-2da5-476b-9340-38244322fb95\") " pod="openstack/glance-default-external-api-0" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.679125 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/9047860d-2da5-476b-9340-38244322fb95-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9047860d-2da5-476b-9340-38244322fb95\") " pod="openstack/glance-default-external-api-0" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.679150 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d877e0f6-019c-4b97-8a36-9a210b9e4233-logs\") pod \"glance-default-internal-api-0\" (UID: \"d877e0f6-019c-4b97-8a36-9a210b9e4233\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.679213 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d877e0f6-019c-4b97-8a36-9a210b9e4233-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d877e0f6-019c-4b97-8a36-9a210b9e4233\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.679233 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krbkb\" (UniqueName: \"kubernetes.io/projected/9047860d-2da5-476b-9340-38244322fb95-kube-api-access-krbkb\") pod \"glance-default-external-api-0\" (UID: \"9047860d-2da5-476b-9340-38244322fb95\") " pod="openstack/glance-default-external-api-0" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.679261 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d877e0f6-019c-4b97-8a36-9a210b9e4233-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d877e0f6-019c-4b97-8a36-9a210b9e4233\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.679278 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvcn5\" (UniqueName: 
\"kubernetes.io/projected/d877e0f6-019c-4b97-8a36-9a210b9e4233-kube-api-access-rvcn5\") pod \"glance-default-internal-api-0\" (UID: \"d877e0f6-019c-4b97-8a36-9a210b9e4233\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.679317 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d877e0f6-019c-4b97-8a36-9a210b9e4233-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d877e0f6-019c-4b97-8a36-9a210b9e4233\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.679338 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d877e0f6-019c-4b97-8a36-9a210b9e4233-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d877e0f6-019c-4b97-8a36-9a210b9e4233\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.679360 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9047860d-2da5-476b-9340-38244322fb95-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9047860d-2da5-476b-9340-38244322fb95\") " pod="openstack/glance-default-external-api-0" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.679388 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9047860d-2da5-476b-9340-38244322fb95-scripts\") pod \"glance-default-external-api-0\" (UID: \"9047860d-2da5-476b-9340-38244322fb95\") " pod="openstack/glance-default-external-api-0" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.679417 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/d877e0f6-019c-4b97-8a36-9a210b9e4233-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d877e0f6-019c-4b97-8a36-9a210b9e4233\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.679808 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d877e0f6-019c-4b97-8a36-9a210b9e4233-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d877e0f6-019c-4b97-8a36-9a210b9e4233\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.681807 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d877e0f6-019c-4b97-8a36-9a210b9e4233-logs\") pod \"glance-default-internal-api-0\" (UID: \"d877e0f6-019c-4b97-8a36-9a210b9e4233\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.695233 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d877e0f6-019c-4b97-8a36-9a210b9e4233-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d877e0f6-019c-4b97-8a36-9a210b9e4233\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.706729 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d877e0f6-019c-4b97-8a36-9a210b9e4233-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d877e0f6-019c-4b97-8a36-9a210b9e4233\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.735757 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d877e0f6-019c-4b97-8a36-9a210b9e4233-config-data\") pod \"glance-default-internal-api-0\" 
(UID: \"d877e0f6-019c-4b97-8a36-9a210b9e4233\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.739068 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d877e0f6-019c-4b97-8a36-9a210b9e4233-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d877e0f6-019c-4b97-8a36-9a210b9e4233\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.748018 4754 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.748078 4754 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-084c4888-b72a-436d-98f1-8bfa7cd3c357\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-084c4888-b72a-436d-98f1-8bfa7cd3c357\") pod \"glance-default-internal-api-0\" (UID: \"d877e0f6-019c-4b97-8a36-9a210b9e4233\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0601c73c2da0209a9fcf070a1dc03f94eea8e72ac7d4d13dd6d1bb747292aec9/globalmount\"" pod="openstack/glance-default-internal-api-0" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.748130 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvcn5\" (UniqueName: \"kubernetes.io/projected/d877e0f6-019c-4b97-8a36-9a210b9e4233-kube-api-access-rvcn5\") pod \"glance-default-internal-api-0\" (UID: \"d877e0f6-019c-4b97-8a36-9a210b9e4233\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.748373 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ccb69ed-7f2d-4341-a474-90ac1ed5a0fd-kube-api-access-hx2vn" (OuterVolumeSpecName: "kube-api-access-hx2vn") pod "0ccb69ed-7f2d-4341-a474-90ac1ed5a0fd" (UID: 
"0ccb69ed-7f2d-4341-a474-90ac1ed5a0fd"). InnerVolumeSpecName "kube-api-access-hx2vn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.775572 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ccb69ed-7f2d-4341-a474-90ac1ed5a0fd-config" (OuterVolumeSpecName: "config") pod "0ccb69ed-7f2d-4341-a474-90ac1ed5a0fd" (UID: "0ccb69ed-7f2d-4341-a474-90ac1ed5a0fd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.784615 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krbkb\" (UniqueName: \"kubernetes.io/projected/9047860d-2da5-476b-9340-38244322fb95-kube-api-access-krbkb\") pod \"glance-default-external-api-0\" (UID: \"9047860d-2da5-476b-9340-38244322fb95\") " pod="openstack/glance-default-external-api-0" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.784720 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9047860d-2da5-476b-9340-38244322fb95-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9047860d-2da5-476b-9340-38244322fb95\") " pod="openstack/glance-default-external-api-0" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.784748 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9047860d-2da5-476b-9340-38244322fb95-scripts\") pod \"glance-default-external-api-0\" (UID: \"9047860d-2da5-476b-9340-38244322fb95\") " pod="openstack/glance-default-external-api-0" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.784798 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9047860d-2da5-476b-9340-38244322fb95-config-data\") pod \"glance-default-external-api-0\" 
(UID: \"9047860d-2da5-476b-9340-38244322fb95\") " pod="openstack/glance-default-external-api-0" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.784833 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9047860d-2da5-476b-9340-38244322fb95-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9047860d-2da5-476b-9340-38244322fb95\") " pod="openstack/glance-default-external-api-0" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.784906 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5e52a243-20bd-4eec-aa78-a75239804279\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e52a243-20bd-4eec-aa78-a75239804279\") pod \"glance-default-external-api-0\" (UID: \"9047860d-2da5-476b-9340-38244322fb95\") " pod="openstack/glance-default-external-api-0" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.784969 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9047860d-2da5-476b-9340-38244322fb95-logs\") pod \"glance-default-external-api-0\" (UID: \"9047860d-2da5-476b-9340-38244322fb95\") " pod="openstack/glance-default-external-api-0" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.785007 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9047860d-2da5-476b-9340-38244322fb95-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9047860d-2da5-476b-9340-38244322fb95\") " pod="openstack/glance-default-external-api-0" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.785114 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hx2vn\" (UniqueName: \"kubernetes.io/projected/0ccb69ed-7f2d-4341-a474-90ac1ed5a0fd-kube-api-access-hx2vn\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:31 crc 
kubenswrapper[4754]: I0105 20:28:31.785142 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/0ccb69ed-7f2d-4341-a474-90ac1ed5a0fd-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.786416 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ccb69ed-7f2d-4341-a474-90ac1ed5a0fd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ccb69ed-7f2d-4341-a474-90ac1ed5a0fd" (UID: "0ccb69ed-7f2d-4341-a474-90ac1ed5a0fd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.790920 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9047860d-2da5-476b-9340-38244322fb95-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9047860d-2da5-476b-9340-38244322fb95\") " pod="openstack/glance-default-external-api-0" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.795271 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9047860d-2da5-476b-9340-38244322fb95-logs\") pod \"glance-default-external-api-0\" (UID: \"9047860d-2da5-476b-9340-38244322fb95\") " pod="openstack/glance-default-external-api-0" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.803230 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9047860d-2da5-476b-9340-38244322fb95-scripts\") pod \"glance-default-external-api-0\" (UID: \"9047860d-2da5-476b-9340-38244322fb95\") " pod="openstack/glance-default-external-api-0" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.805844 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9047860d-2da5-476b-9340-38244322fb95-config-data\") pod \"glance-default-external-api-0\" (UID: \"9047860d-2da5-476b-9340-38244322fb95\") " pod="openstack/glance-default-external-api-0" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.807019 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9047860d-2da5-476b-9340-38244322fb95-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9047860d-2da5-476b-9340-38244322fb95\") " pod="openstack/glance-default-external-api-0" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.809437 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9047860d-2da5-476b-9340-38244322fb95-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9047860d-2da5-476b-9340-38244322fb95\") " pod="openstack/glance-default-external-api-0" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.829974 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krbkb\" (UniqueName: \"kubernetes.io/projected/9047860d-2da5-476b-9340-38244322fb95-kube-api-access-krbkb\") pod \"glance-default-external-api-0\" (UID: \"9047860d-2da5-476b-9340-38244322fb95\") " pod="openstack/glance-default-external-api-0" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.852278 4754 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.852342 4754 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5e52a243-20bd-4eec-aa78-a75239804279\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e52a243-20bd-4eec-aa78-a75239804279\") pod \"glance-default-external-api-0\" (UID: \"9047860d-2da5-476b-9340-38244322fb95\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/64c68df80c09ca2a8e0af8b9c221d9793bb148adba68887bcbeec5db986c7de6/globalmount\"" pod="openstack/glance-default-external-api-0" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.889897 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ccb69ed-7f2d-4341-a474-90ac1ed5a0fd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.896892 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-084c4888-b72a-436d-98f1-8bfa7cd3c357\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-084c4888-b72a-436d-98f1-8bfa7cd3c357\") pod \"glance-default-internal-api-0\" (UID: \"d877e0f6-019c-4b97-8a36-9a210b9e4233\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:28:31 crc kubenswrapper[4754]: I0105 20:28:31.902239 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5e52a243-20bd-4eec-aa78-a75239804279\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e52a243-20bd-4eec-aa78-a75239804279\") pod \"glance-default-external-api-0\" (UID: \"9047860d-2da5-476b-9340-38244322fb95\") " pod="openstack/glance-default-external-api-0" Jan 05 20:28:32 crc kubenswrapper[4754]: I0105 20:28:32.026164 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qz8zq" 
event={"ID":"0ccb69ed-7f2d-4341-a474-90ac1ed5a0fd","Type":"ContainerDied","Data":"1d096c5057573bca223f8022a7d7cb9e8c1e9b65b37c2fcdb7ab426016f60426"} Jan 05 20:28:32 crc kubenswrapper[4754]: I0105 20:28:32.026205 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d096c5057573bca223f8022a7d7cb9e8c1e9b65b37c2fcdb7ab426016f60426" Jan 05 20:28:32 crc kubenswrapper[4754]: I0105 20:28:32.026233 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-qz8zq" Jan 05 20:28:32 crc kubenswrapper[4754]: I0105 20:28:32.037324 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 05 20:28:32 crc kubenswrapper[4754]: I0105 20:28:32.114174 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 05 20:28:32 crc kubenswrapper[4754]: I0105 20:28:32.776518 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-wkf6w"] Jan 05 20:28:32 crc kubenswrapper[4754]: E0105 20:28:32.778020 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ccb69ed-7f2d-4341-a474-90ac1ed5a0fd" containerName="neutron-db-sync" Jan 05 20:28:32 crc kubenswrapper[4754]: I0105 20:28:32.778039 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ccb69ed-7f2d-4341-a474-90ac1ed5a0fd" containerName="neutron-db-sync" Jan 05 20:28:32 crc kubenswrapper[4754]: I0105 20:28:32.778258 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ccb69ed-7f2d-4341-a474-90ac1ed5a0fd" containerName="neutron-db-sync" Jan 05 20:28:32 crc kubenswrapper[4754]: I0105 20:28:32.779268 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-wkf6w" Jan 05 20:28:32 crc kubenswrapper[4754]: I0105 20:28:32.806710 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-wkf6w"] Jan 05 20:28:32 crc kubenswrapper[4754]: I0105 20:28:32.884625 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7656f8454b-v7vj9"] Jan 05 20:28:32 crc kubenswrapper[4754]: I0105 20:28:32.886991 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7656f8454b-v7vj9" Jan 05 20:28:32 crc kubenswrapper[4754]: I0105 20:28:32.893401 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-j7kml" Jan 05 20:28:32 crc kubenswrapper[4754]: I0105 20:28:32.893818 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 05 20:28:32 crc kubenswrapper[4754]: I0105 20:28:32.894198 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 05 20:28:32 crc kubenswrapper[4754]: I0105 20:28:32.894443 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 05 20:28:32 crc kubenswrapper[4754]: I0105 20:28:32.920585 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69a9c7cb-a8f4-4099-a4b4-519cfca0a62e-config\") pod \"dnsmasq-dns-5ccc5c4795-wkf6w\" (UID: \"69a9c7cb-a8f4-4099-a4b4-519cfca0a62e\") " pod="openstack/dnsmasq-dns-5ccc5c4795-wkf6w" Jan 05 20:28:32 crc kubenswrapper[4754]: I0105 20:28:32.923326 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/69a9c7cb-a8f4-4099-a4b4-519cfca0a62e-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-wkf6w\" (UID: \"69a9c7cb-a8f4-4099-a4b4-519cfca0a62e\") " 
pod="openstack/dnsmasq-dns-5ccc5c4795-wkf6w" Jan 05 20:28:32 crc kubenswrapper[4754]: I0105 20:28:32.923390 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/69a9c7cb-a8f4-4099-a4b4-519cfca0a62e-ovsdbserver-nb\") pod \"dnsmasq-dns-5ccc5c4795-wkf6w\" (UID: \"69a9c7cb-a8f4-4099-a4b4-519cfca0a62e\") " pod="openstack/dnsmasq-dns-5ccc5c4795-wkf6w" Jan 05 20:28:32 crc kubenswrapper[4754]: I0105 20:28:32.923587 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8mt6\" (UniqueName: \"kubernetes.io/projected/69a9c7cb-a8f4-4099-a4b4-519cfca0a62e-kube-api-access-p8mt6\") pod \"dnsmasq-dns-5ccc5c4795-wkf6w\" (UID: \"69a9c7cb-a8f4-4099-a4b4-519cfca0a62e\") " pod="openstack/dnsmasq-dns-5ccc5c4795-wkf6w" Jan 05 20:28:32 crc kubenswrapper[4754]: I0105 20:28:32.923630 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/69a9c7cb-a8f4-4099-a4b4-519cfca0a62e-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-wkf6w\" (UID: \"69a9c7cb-a8f4-4099-a4b4-519cfca0a62e\") " pod="openstack/dnsmasq-dns-5ccc5c4795-wkf6w" Jan 05 20:28:32 crc kubenswrapper[4754]: I0105 20:28:32.923650 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7656f8454b-v7vj9"] Jan 05 20:28:32 crc kubenswrapper[4754]: I0105 20:28:32.923755 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/69a9c7cb-a8f4-4099-a4b4-519cfca0a62e-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-wkf6w\" (UID: \"69a9c7cb-a8f4-4099-a4b4-519cfca0a62e\") " pod="openstack/dnsmasq-dns-5ccc5c4795-wkf6w" Jan 05 20:28:33 crc kubenswrapper[4754]: I0105 20:28:33.026566 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-rpqbq\" (UniqueName: \"kubernetes.io/projected/d754693c-f3ef-4b36-b827-f88a11c1b76a-kube-api-access-rpqbq\") pod \"neutron-7656f8454b-v7vj9\" (UID: \"d754693c-f3ef-4b36-b827-f88a11c1b76a\") " pod="openstack/neutron-7656f8454b-v7vj9" Jan 05 20:28:33 crc kubenswrapper[4754]: I0105 20:28:33.026645 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d754693c-f3ef-4b36-b827-f88a11c1b76a-config\") pod \"neutron-7656f8454b-v7vj9\" (UID: \"d754693c-f3ef-4b36-b827-f88a11c1b76a\") " pod="openstack/neutron-7656f8454b-v7vj9" Jan 05 20:28:33 crc kubenswrapper[4754]: I0105 20:28:33.026678 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/69a9c7cb-a8f4-4099-a4b4-519cfca0a62e-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-wkf6w\" (UID: \"69a9c7cb-a8f4-4099-a4b4-519cfca0a62e\") " pod="openstack/dnsmasq-dns-5ccc5c4795-wkf6w" Jan 05 20:28:33 crc kubenswrapper[4754]: I0105 20:28:33.026746 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d754693c-f3ef-4b36-b827-f88a11c1b76a-combined-ca-bundle\") pod \"neutron-7656f8454b-v7vj9\" (UID: \"d754693c-f3ef-4b36-b827-f88a11c1b76a\") " pod="openstack/neutron-7656f8454b-v7vj9" Jan 05 20:28:33 crc kubenswrapper[4754]: I0105 20:28:33.026990 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d754693c-f3ef-4b36-b827-f88a11c1b76a-httpd-config\") pod \"neutron-7656f8454b-v7vj9\" (UID: \"d754693c-f3ef-4b36-b827-f88a11c1b76a\") " pod="openstack/neutron-7656f8454b-v7vj9" Jan 05 20:28:33 crc kubenswrapper[4754]: I0105 20:28:33.027323 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/69a9c7cb-a8f4-4099-a4b4-519cfca0a62e-config\") pod \"dnsmasq-dns-5ccc5c4795-wkf6w\" (UID: \"69a9c7cb-a8f4-4099-a4b4-519cfca0a62e\") " pod="openstack/dnsmasq-dns-5ccc5c4795-wkf6w" Jan 05 20:28:33 crc kubenswrapper[4754]: I0105 20:28:33.027453 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/69a9c7cb-a8f4-4099-a4b4-519cfca0a62e-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-wkf6w\" (UID: \"69a9c7cb-a8f4-4099-a4b4-519cfca0a62e\") " pod="openstack/dnsmasq-dns-5ccc5c4795-wkf6w" Jan 05 20:28:33 crc kubenswrapper[4754]: I0105 20:28:33.027592 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/69a9c7cb-a8f4-4099-a4b4-519cfca0a62e-ovsdbserver-nb\") pod \"dnsmasq-dns-5ccc5c4795-wkf6w\" (UID: \"69a9c7cb-a8f4-4099-a4b4-519cfca0a62e\") " pod="openstack/dnsmasq-dns-5ccc5c4795-wkf6w" Jan 05 20:28:33 crc kubenswrapper[4754]: I0105 20:28:33.027840 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d754693c-f3ef-4b36-b827-f88a11c1b76a-ovndb-tls-certs\") pod \"neutron-7656f8454b-v7vj9\" (UID: \"d754693c-f3ef-4b36-b827-f88a11c1b76a\") " pod="openstack/neutron-7656f8454b-v7vj9" Jan 05 20:28:33 crc kubenswrapper[4754]: I0105 20:28:33.027999 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8mt6\" (UniqueName: \"kubernetes.io/projected/69a9c7cb-a8f4-4099-a4b4-519cfca0a62e-kube-api-access-p8mt6\") pod \"dnsmasq-dns-5ccc5c4795-wkf6w\" (UID: \"69a9c7cb-a8f4-4099-a4b4-519cfca0a62e\") " pod="openstack/dnsmasq-dns-5ccc5c4795-wkf6w" Jan 05 20:28:33 crc kubenswrapper[4754]: I0105 20:28:33.028052 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/69a9c7cb-a8f4-4099-a4b4-519cfca0a62e-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-wkf6w\" (UID: \"69a9c7cb-a8f4-4099-a4b4-519cfca0a62e\") " pod="openstack/dnsmasq-dns-5ccc5c4795-wkf6w" Jan 05 20:28:33 crc kubenswrapper[4754]: I0105 20:28:33.028089 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/69a9c7cb-a8f4-4099-a4b4-519cfca0a62e-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-wkf6w\" (UID: \"69a9c7cb-a8f4-4099-a4b4-519cfca0a62e\") " pod="openstack/dnsmasq-dns-5ccc5c4795-wkf6w" Jan 05 20:28:33 crc kubenswrapper[4754]: I0105 20:28:33.029112 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69a9c7cb-a8f4-4099-a4b4-519cfca0a62e-config\") pod \"dnsmasq-dns-5ccc5c4795-wkf6w\" (UID: \"69a9c7cb-a8f4-4099-a4b4-519cfca0a62e\") " pod="openstack/dnsmasq-dns-5ccc5c4795-wkf6w" Jan 05 20:28:33 crc kubenswrapper[4754]: I0105 20:28:33.029127 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/69a9c7cb-a8f4-4099-a4b4-519cfca0a62e-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-wkf6w\" (UID: \"69a9c7cb-a8f4-4099-a4b4-519cfca0a62e\") " pod="openstack/dnsmasq-dns-5ccc5c4795-wkf6w" Jan 05 20:28:33 crc kubenswrapper[4754]: I0105 20:28:33.029447 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/69a9c7cb-a8f4-4099-a4b4-519cfca0a62e-ovsdbserver-nb\") pod \"dnsmasq-dns-5ccc5c4795-wkf6w\" (UID: \"69a9c7cb-a8f4-4099-a4b4-519cfca0a62e\") " pod="openstack/dnsmasq-dns-5ccc5c4795-wkf6w" Jan 05 20:28:33 crc kubenswrapper[4754]: I0105 20:28:33.031595 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/69a9c7cb-a8f4-4099-a4b4-519cfca0a62e-ovsdbserver-sb\") pod 
\"dnsmasq-dns-5ccc5c4795-wkf6w\" (UID: \"69a9c7cb-a8f4-4099-a4b4-519cfca0a62e\") " pod="openstack/dnsmasq-dns-5ccc5c4795-wkf6w" Jan 05 20:28:33 crc kubenswrapper[4754]: I0105 20:28:33.049084 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8mt6\" (UniqueName: \"kubernetes.io/projected/69a9c7cb-a8f4-4099-a4b4-519cfca0a62e-kube-api-access-p8mt6\") pod \"dnsmasq-dns-5ccc5c4795-wkf6w\" (UID: \"69a9c7cb-a8f4-4099-a4b4-519cfca0a62e\") " pod="openstack/dnsmasq-dns-5ccc5c4795-wkf6w" Jan 05 20:28:33 crc kubenswrapper[4754]: I0105 20:28:33.126417 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-wkf6w" Jan 05 20:28:33 crc kubenswrapper[4754]: I0105 20:28:33.129972 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d754693c-f3ef-4b36-b827-f88a11c1b76a-ovndb-tls-certs\") pod \"neutron-7656f8454b-v7vj9\" (UID: \"d754693c-f3ef-4b36-b827-f88a11c1b76a\") " pod="openstack/neutron-7656f8454b-v7vj9" Jan 05 20:28:33 crc kubenswrapper[4754]: I0105 20:28:33.130063 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpqbq\" (UniqueName: \"kubernetes.io/projected/d754693c-f3ef-4b36-b827-f88a11c1b76a-kube-api-access-rpqbq\") pod \"neutron-7656f8454b-v7vj9\" (UID: \"d754693c-f3ef-4b36-b827-f88a11c1b76a\") " pod="openstack/neutron-7656f8454b-v7vj9" Jan 05 20:28:33 crc kubenswrapper[4754]: I0105 20:28:33.130098 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d754693c-f3ef-4b36-b827-f88a11c1b76a-config\") pod \"neutron-7656f8454b-v7vj9\" (UID: \"d754693c-f3ef-4b36-b827-f88a11c1b76a\") " pod="openstack/neutron-7656f8454b-v7vj9" Jan 05 20:28:33 crc kubenswrapper[4754]: I0105 20:28:33.130128 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d754693c-f3ef-4b36-b827-f88a11c1b76a-combined-ca-bundle\") pod \"neutron-7656f8454b-v7vj9\" (UID: \"d754693c-f3ef-4b36-b827-f88a11c1b76a\") " pod="openstack/neutron-7656f8454b-v7vj9" Jan 05 20:28:33 crc kubenswrapper[4754]: I0105 20:28:33.130155 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d754693c-f3ef-4b36-b827-f88a11c1b76a-httpd-config\") pod \"neutron-7656f8454b-v7vj9\" (UID: \"d754693c-f3ef-4b36-b827-f88a11c1b76a\") " pod="openstack/neutron-7656f8454b-v7vj9" Jan 05 20:28:33 crc kubenswrapper[4754]: I0105 20:28:33.138635 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d754693c-f3ef-4b36-b827-f88a11c1b76a-httpd-config\") pod \"neutron-7656f8454b-v7vj9\" (UID: \"d754693c-f3ef-4b36-b827-f88a11c1b76a\") " pod="openstack/neutron-7656f8454b-v7vj9" Jan 05 20:28:33 crc kubenswrapper[4754]: I0105 20:28:33.139193 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d754693c-f3ef-4b36-b827-f88a11c1b76a-ovndb-tls-certs\") pod \"neutron-7656f8454b-v7vj9\" (UID: \"d754693c-f3ef-4b36-b827-f88a11c1b76a\") " pod="openstack/neutron-7656f8454b-v7vj9" Jan 05 20:28:33 crc kubenswrapper[4754]: I0105 20:28:33.140460 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d754693c-f3ef-4b36-b827-f88a11c1b76a-config\") pod \"neutron-7656f8454b-v7vj9\" (UID: \"d754693c-f3ef-4b36-b827-f88a11c1b76a\") " pod="openstack/neutron-7656f8454b-v7vj9" Jan 05 20:28:33 crc kubenswrapper[4754]: I0105 20:28:33.145943 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d754693c-f3ef-4b36-b827-f88a11c1b76a-combined-ca-bundle\") pod \"neutron-7656f8454b-v7vj9\" 
(UID: \"d754693c-f3ef-4b36-b827-f88a11c1b76a\") " pod="openstack/neutron-7656f8454b-v7vj9" Jan 05 20:28:33 crc kubenswrapper[4754]: I0105 20:28:33.156981 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpqbq\" (UniqueName: \"kubernetes.io/projected/d754693c-f3ef-4b36-b827-f88a11c1b76a-kube-api-access-rpqbq\") pod \"neutron-7656f8454b-v7vj9\" (UID: \"d754693c-f3ef-4b36-b827-f88a11c1b76a\") " pod="openstack/neutron-7656f8454b-v7vj9" Jan 05 20:28:33 crc kubenswrapper[4754]: I0105 20:28:33.220125 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7656f8454b-v7vj9" Jan 05 20:28:34 crc kubenswrapper[4754]: E0105 20:28:34.642057 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 05 20:28:34 crc kubenswrapper[4754]: E0105 20:28:34.642821 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-stlb6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-p84n5_openstack(c19555bb-f08e-4ff7-a6d3-26615858d3f3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 05 20:28:34 crc kubenswrapper[4754]: E0105 20:28:34.645371 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-p84n5" podUID="c19555bb-f08e-4ff7-a6d3-26615858d3f3" Jan 05 20:28:34 crc kubenswrapper[4754]: I0105 20:28:34.971263 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-77dc856cfc-xjgrw"] Jan 05 20:28:34 crc kubenswrapper[4754]: I0105 20:28:34.973809 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-77dc856cfc-xjgrw" Jan 05 20:28:34 crc kubenswrapper[4754]: I0105 20:28:34.976825 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 05 20:28:34 crc kubenswrapper[4754]: I0105 20:28:34.977126 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 05 20:28:35 crc kubenswrapper[4754]: I0105 20:28:35.001677 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-77dc856cfc-xjgrw"] Jan 05 20:28:35 crc kubenswrapper[4754]: E0105 20:28:35.003236 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Jan 05 20:28:35 crc kubenswrapper[4754]: E0105 20:28:35.003392 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hp24s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-vpqrq_openstack(3521f408-9c1b-440b-b7c1-fdc7058f9eb3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 
05 20:28:35 crc kubenswrapper[4754]: E0105 20:28:35.004535 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-vpqrq" podUID="3521f408-9c1b-440b-b7c1-fdc7058f9eb3" Jan 05 20:28:35 crc kubenswrapper[4754]: I0105 20:28:35.056914 4754 scope.go:117] "RemoveContainer" containerID="a9b604560644df6b952a53112fc5a4fbef8fe3150571d412f896cb49a1e9edf6" Jan 05 20:28:35 crc kubenswrapper[4754]: I0105 20:28:35.082548 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6903533f-d965-4b1f-81b1-630dda816dbc-internal-tls-certs\") pod \"neutron-77dc856cfc-xjgrw\" (UID: \"6903533f-d965-4b1f-81b1-630dda816dbc\") " pod="openstack/neutron-77dc856cfc-xjgrw" Jan 05 20:28:35 crc kubenswrapper[4754]: I0105 20:28:35.082627 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfjfd\" (UniqueName: \"kubernetes.io/projected/6903533f-d965-4b1f-81b1-630dda816dbc-kube-api-access-tfjfd\") pod \"neutron-77dc856cfc-xjgrw\" (UID: \"6903533f-d965-4b1f-81b1-630dda816dbc\") " pod="openstack/neutron-77dc856cfc-xjgrw" Jan 05 20:28:35 crc kubenswrapper[4754]: I0105 20:28:35.082663 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6903533f-d965-4b1f-81b1-630dda816dbc-config\") pod \"neutron-77dc856cfc-xjgrw\" (UID: \"6903533f-d965-4b1f-81b1-630dda816dbc\") " pod="openstack/neutron-77dc856cfc-xjgrw" Jan 05 20:28:35 crc kubenswrapper[4754]: I0105 20:28:35.082723 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6903533f-d965-4b1f-81b1-630dda816dbc-httpd-config\") pod 
\"neutron-77dc856cfc-xjgrw\" (UID: \"6903533f-d965-4b1f-81b1-630dda816dbc\") " pod="openstack/neutron-77dc856cfc-xjgrw" Jan 05 20:28:35 crc kubenswrapper[4754]: I0105 20:28:35.082744 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6903533f-d965-4b1f-81b1-630dda816dbc-combined-ca-bundle\") pod \"neutron-77dc856cfc-xjgrw\" (UID: \"6903533f-d965-4b1f-81b1-630dda816dbc\") " pod="openstack/neutron-77dc856cfc-xjgrw" Jan 05 20:28:35 crc kubenswrapper[4754]: I0105 20:28:35.082781 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6903533f-d965-4b1f-81b1-630dda816dbc-public-tls-certs\") pod \"neutron-77dc856cfc-xjgrw\" (UID: \"6903533f-d965-4b1f-81b1-630dda816dbc\") " pod="openstack/neutron-77dc856cfc-xjgrw" Jan 05 20:28:35 crc kubenswrapper[4754]: I0105 20:28:35.082819 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6903533f-d965-4b1f-81b1-630dda816dbc-ovndb-tls-certs\") pod \"neutron-77dc856cfc-xjgrw\" (UID: \"6903533f-d965-4b1f-81b1-630dda816dbc\") " pod="openstack/neutron-77dc856cfc-xjgrw" Jan 05 20:28:35 crc kubenswrapper[4754]: I0105 20:28:35.088349 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-gjlm4" event={"ID":"696009b0-0456-4c36-bc57-9c5ed0d81184","Type":"ContainerDied","Data":"a4080f674513a06ab4922e6794eb4090e524e61781a88eb65da60375ca4c45bd"} Jan 05 20:28:35 crc kubenswrapper[4754]: I0105 20:28:35.088428 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4080f674513a06ab4922e6794eb4090e524e61781a88eb65da60375ca4c45bd" Jan 05 20:28:35 crc kubenswrapper[4754]: E0105 20:28:35.114092 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-p84n5" podUID="c19555bb-f08e-4ff7-a6d3-26615858d3f3" Jan 05 20:28:35 crc kubenswrapper[4754]: E0105 20:28:35.131390 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-vpqrq" podUID="3521f408-9c1b-440b-b7c1-fdc7058f9eb3" Jan 05 20:28:35 crc kubenswrapper[4754]: I0105 20:28:35.142486 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-gjlm4" Jan 05 20:28:35 crc kubenswrapper[4754]: I0105 20:28:35.185051 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6903533f-d965-4b1f-81b1-630dda816dbc-public-tls-certs\") pod \"neutron-77dc856cfc-xjgrw\" (UID: \"6903533f-d965-4b1f-81b1-630dda816dbc\") " pod="openstack/neutron-77dc856cfc-xjgrw" Jan 05 20:28:35 crc kubenswrapper[4754]: I0105 20:28:35.185563 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6903533f-d965-4b1f-81b1-630dda816dbc-ovndb-tls-certs\") pod \"neutron-77dc856cfc-xjgrw\" (UID: \"6903533f-d965-4b1f-81b1-630dda816dbc\") " pod="openstack/neutron-77dc856cfc-xjgrw" Jan 05 20:28:35 crc kubenswrapper[4754]: I0105 20:28:35.185711 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6903533f-d965-4b1f-81b1-630dda816dbc-internal-tls-certs\") pod \"neutron-77dc856cfc-xjgrw\" (UID: \"6903533f-d965-4b1f-81b1-630dda816dbc\") " pod="openstack/neutron-77dc856cfc-xjgrw" Jan 05 20:28:35 
crc kubenswrapper[4754]: I0105 20:28:35.185755 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfjfd\" (UniqueName: \"kubernetes.io/projected/6903533f-d965-4b1f-81b1-630dda816dbc-kube-api-access-tfjfd\") pod \"neutron-77dc856cfc-xjgrw\" (UID: \"6903533f-d965-4b1f-81b1-630dda816dbc\") " pod="openstack/neutron-77dc856cfc-xjgrw" Jan 05 20:28:35 crc kubenswrapper[4754]: I0105 20:28:35.185781 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6903533f-d965-4b1f-81b1-630dda816dbc-config\") pod \"neutron-77dc856cfc-xjgrw\" (UID: \"6903533f-d965-4b1f-81b1-630dda816dbc\") " pod="openstack/neutron-77dc856cfc-xjgrw" Jan 05 20:28:35 crc kubenswrapper[4754]: I0105 20:28:35.207527 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6903533f-d965-4b1f-81b1-630dda816dbc-httpd-config\") pod \"neutron-77dc856cfc-xjgrw\" (UID: \"6903533f-d965-4b1f-81b1-630dda816dbc\") " pod="openstack/neutron-77dc856cfc-xjgrw" Jan 05 20:28:35 crc kubenswrapper[4754]: I0105 20:28:35.207636 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6903533f-d965-4b1f-81b1-630dda816dbc-combined-ca-bundle\") pod \"neutron-77dc856cfc-xjgrw\" (UID: \"6903533f-d965-4b1f-81b1-630dda816dbc\") " pod="openstack/neutron-77dc856cfc-xjgrw" Jan 05 20:28:35 crc kubenswrapper[4754]: I0105 20:28:35.219104 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6903533f-d965-4b1f-81b1-630dda816dbc-config\") pod \"neutron-77dc856cfc-xjgrw\" (UID: \"6903533f-d965-4b1f-81b1-630dda816dbc\") " pod="openstack/neutron-77dc856cfc-xjgrw" Jan 05 20:28:35 crc kubenswrapper[4754]: I0105 20:28:35.220391 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"httpd-config\" (UniqueName: \"kubernetes.io/secret/6903533f-d965-4b1f-81b1-630dda816dbc-httpd-config\") pod \"neutron-77dc856cfc-xjgrw\" (UID: \"6903533f-d965-4b1f-81b1-630dda816dbc\") " pod="openstack/neutron-77dc856cfc-xjgrw" Jan 05 20:28:35 crc kubenswrapper[4754]: I0105 20:28:35.222479 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6903533f-d965-4b1f-81b1-630dda816dbc-ovndb-tls-certs\") pod \"neutron-77dc856cfc-xjgrw\" (UID: \"6903533f-d965-4b1f-81b1-630dda816dbc\") " pod="openstack/neutron-77dc856cfc-xjgrw" Jan 05 20:28:35 crc kubenswrapper[4754]: I0105 20:28:35.223537 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6903533f-d965-4b1f-81b1-630dda816dbc-public-tls-certs\") pod \"neutron-77dc856cfc-xjgrw\" (UID: \"6903533f-d965-4b1f-81b1-630dda816dbc\") " pod="openstack/neutron-77dc856cfc-xjgrw" Jan 05 20:28:35 crc kubenswrapper[4754]: I0105 20:28:35.224184 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6903533f-d965-4b1f-81b1-630dda816dbc-combined-ca-bundle\") pod \"neutron-77dc856cfc-xjgrw\" (UID: \"6903533f-d965-4b1f-81b1-630dda816dbc\") " pod="openstack/neutron-77dc856cfc-xjgrw" Jan 05 20:28:35 crc kubenswrapper[4754]: I0105 20:28:35.224823 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6903533f-d965-4b1f-81b1-630dda816dbc-internal-tls-certs\") pod \"neutron-77dc856cfc-xjgrw\" (UID: \"6903533f-d965-4b1f-81b1-630dda816dbc\") " pod="openstack/neutron-77dc856cfc-xjgrw" Jan 05 20:28:35 crc kubenswrapper[4754]: I0105 20:28:35.229025 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfjfd\" (UniqueName: \"kubernetes.io/projected/6903533f-d965-4b1f-81b1-630dda816dbc-kube-api-access-tfjfd\") pod 
\"neutron-77dc856cfc-xjgrw\" (UID: \"6903533f-d965-4b1f-81b1-630dda816dbc\") " pod="openstack/neutron-77dc856cfc-xjgrw" Jan 05 20:28:35 crc kubenswrapper[4754]: I0105 20:28:35.242257 4754 scope.go:117] "RemoveContainer" containerID="1cd9038976e3199528a784fef7c736904ccf640eb660de043342e50892ea6709" Jan 05 20:28:35 crc kubenswrapper[4754]: I0105 20:28:35.310745 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/696009b0-0456-4c36-bc57-9c5ed0d81184-dns-svc\") pod \"696009b0-0456-4c36-bc57-9c5ed0d81184\" (UID: \"696009b0-0456-4c36-bc57-9c5ed0d81184\") " Jan 05 20:28:35 crc kubenswrapper[4754]: I0105 20:28:35.310813 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/696009b0-0456-4c36-bc57-9c5ed0d81184-ovsdbserver-sb\") pod \"696009b0-0456-4c36-bc57-9c5ed0d81184\" (UID: \"696009b0-0456-4c36-bc57-9c5ed0d81184\") " Jan 05 20:28:35 crc kubenswrapper[4754]: I0105 20:28:35.310966 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/696009b0-0456-4c36-bc57-9c5ed0d81184-config\") pod \"696009b0-0456-4c36-bc57-9c5ed0d81184\" (UID: \"696009b0-0456-4c36-bc57-9c5ed0d81184\") " Jan 05 20:28:35 crc kubenswrapper[4754]: I0105 20:28:35.310991 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/696009b0-0456-4c36-bc57-9c5ed0d81184-dns-swift-storage-0\") pod \"696009b0-0456-4c36-bc57-9c5ed0d81184\" (UID: \"696009b0-0456-4c36-bc57-9c5ed0d81184\") " Jan 05 20:28:35 crc kubenswrapper[4754]: I0105 20:28:35.311040 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/696009b0-0456-4c36-bc57-9c5ed0d81184-ovsdbserver-nb\") pod \"696009b0-0456-4c36-bc57-9c5ed0d81184\" 
(UID: \"696009b0-0456-4c36-bc57-9c5ed0d81184\") " Jan 05 20:28:35 crc kubenswrapper[4754]: I0105 20:28:35.311075 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvln4\" (UniqueName: \"kubernetes.io/projected/696009b0-0456-4c36-bc57-9c5ed0d81184-kube-api-access-dvln4\") pod \"696009b0-0456-4c36-bc57-9c5ed0d81184\" (UID: \"696009b0-0456-4c36-bc57-9c5ed0d81184\") " Jan 05 20:28:35 crc kubenswrapper[4754]: I0105 20:28:35.329555 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/696009b0-0456-4c36-bc57-9c5ed0d81184-kube-api-access-dvln4" (OuterVolumeSpecName: "kube-api-access-dvln4") pod "696009b0-0456-4c36-bc57-9c5ed0d81184" (UID: "696009b0-0456-4c36-bc57-9c5ed0d81184"). InnerVolumeSpecName "kube-api-access-dvln4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:28:35 crc kubenswrapper[4754]: I0105 20:28:35.392117 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/696009b0-0456-4c36-bc57-9c5ed0d81184-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "696009b0-0456-4c36-bc57-9c5ed0d81184" (UID: "696009b0-0456-4c36-bc57-9c5ed0d81184"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:28:35 crc kubenswrapper[4754]: I0105 20:28:35.393272 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/696009b0-0456-4c36-bc57-9c5ed0d81184-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "696009b0-0456-4c36-bc57-9c5ed0d81184" (UID: "696009b0-0456-4c36-bc57-9c5ed0d81184"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:28:35 crc kubenswrapper[4754]: I0105 20:28:35.399285 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/696009b0-0456-4c36-bc57-9c5ed0d81184-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "696009b0-0456-4c36-bc57-9c5ed0d81184" (UID: "696009b0-0456-4c36-bc57-9c5ed0d81184"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:28:35 crc kubenswrapper[4754]: I0105 20:28:35.408820 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/696009b0-0456-4c36-bc57-9c5ed0d81184-config" (OuterVolumeSpecName: "config") pod "696009b0-0456-4c36-bc57-9c5ed0d81184" (UID: "696009b0-0456-4c36-bc57-9c5ed0d81184"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:28:35 crc kubenswrapper[4754]: I0105 20:28:35.412650 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/696009b0-0456-4c36-bc57-9c5ed0d81184-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "696009b0-0456-4c36-bc57-9c5ed0d81184" (UID: "696009b0-0456-4c36-bc57-9c5ed0d81184"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:28:35 crc kubenswrapper[4754]: I0105 20:28:35.416029 4754 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/696009b0-0456-4c36-bc57-9c5ed0d81184-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:35 crc kubenswrapper[4754]: I0105 20:28:35.416062 4754 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/696009b0-0456-4c36-bc57-9c5ed0d81184-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:35 crc kubenswrapper[4754]: I0105 20:28:35.416103 4754 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/696009b0-0456-4c36-bc57-9c5ed0d81184-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:35 crc kubenswrapper[4754]: I0105 20:28:35.416113 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/696009b0-0456-4c36-bc57-9c5ed0d81184-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:35 crc kubenswrapper[4754]: I0105 20:28:35.416121 4754 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/696009b0-0456-4c36-bc57-9c5ed0d81184-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:35 crc kubenswrapper[4754]: I0105 20:28:35.416130 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvln4\" (UniqueName: \"kubernetes.io/projected/696009b0-0456-4c36-bc57-9c5ed0d81184-kube-api-access-dvln4\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:35 crc kubenswrapper[4754]: I0105 20:28:35.440263 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-77dc856cfc-xjgrw" Jan 05 20:28:35 crc kubenswrapper[4754]: I0105 20:28:35.441136 4754 scope.go:117] "RemoveContainer" containerID="95265c4cca8dc61daaedc72c3dd53830a039b5450e7ff205e06cb2a8afbd4d71" Jan 05 20:28:35 crc kubenswrapper[4754]: W0105 20:28:35.780654 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69a9c7cb_a8f4_4099_a4b4_519cfca0a62e.slice/crio-fe13a2e7f38e65930426e0f328136f5e2179323d81aaa6e7791de8bef88aa713 WatchSource:0}: Error finding container fe13a2e7f38e65930426e0f328136f5e2179323d81aaa6e7791de8bef88aa713: Status 404 returned error can't find the container with id fe13a2e7f38e65930426e0f328136f5e2179323d81aaa6e7791de8bef88aa713 Jan 05 20:28:35 crc kubenswrapper[4754]: I0105 20:28:35.791479 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-wkf6w"] Jan 05 20:28:35 crc kubenswrapper[4754]: I0105 20:28:35.805646 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 05 20:28:35 crc kubenswrapper[4754]: I0105 20:28:35.812269 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-k46bt"] Jan 05 20:28:35 crc kubenswrapper[4754]: I0105 20:28:35.982053 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-gjlm4" podUID="696009b0-0456-4c36-bc57-9c5ed0d81184" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.177:5353: i/o timeout" Jan 05 20:28:36 crc kubenswrapper[4754]: I0105 20:28:36.094063 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 20:28:36 crc kubenswrapper[4754]: I0105 20:28:36.107774 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-k46bt" 
event={"ID":"de4b9353-df59-4a87-80f7-26a1d1637032","Type":"ContainerStarted","Data":"eef3dfece0465947236cc8ddb8d75eb0c832c3fbcd43f0e8e6c8f012e4c0abae"} Jan 05 20:28:36 crc kubenswrapper[4754]: I0105 20:28:36.109394 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4c48n" event={"ID":"e9642fa4-a57d-443c-be01-f46706cc0368","Type":"ContainerStarted","Data":"d38b6e4e623d207f39e0ad135bfb3e2e049ab9fa794850c982d0edbcbf0c8543"} Jan 05 20:28:36 crc kubenswrapper[4754]: I0105 20:28:36.112720 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-wkf6w" event={"ID":"69a9c7cb-a8f4-4099-a4b4-519cfca0a62e","Type":"ContainerStarted","Data":"fe13a2e7f38e65930426e0f328136f5e2179323d81aaa6e7791de8bef88aa713"} Jan 05 20:28:36 crc kubenswrapper[4754]: I0105 20:28:36.114095 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-xbm2l" event={"ID":"32c0a4c6-d974-402e-bdf8-13c8b2e91b3f","Type":"ContainerStarted","Data":"dd6c6f551876be27433fe4319c7ec6af0c33eded6d380725912104ff1fc0d498"} Jan 05 20:28:36 crc kubenswrapper[4754]: I0105 20:28:36.114139 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-gjlm4" Jan 05 20:28:36 crc kubenswrapper[4754]: I0105 20:28:36.132346 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-4c48n" podStartSLOduration=3.255248659 podStartE2EDuration="40.132324247s" podCreationTimestamp="2026-01-05 20:27:56 +0000 UTC" firstStartedPulling="2026-01-05 20:27:58.121590486 +0000 UTC m=+1364.830774360" lastFinishedPulling="2026-01-05 20:28:34.998666054 +0000 UTC m=+1401.707849948" observedRunningTime="2026-01-05 20:28:36.121636337 +0000 UTC m=+1402.830820211" watchObservedRunningTime="2026-01-05 20:28:36.132324247 +0000 UTC m=+1402.841508121" Jan 05 20:28:36 crc kubenswrapper[4754]: I0105 20:28:36.145419 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-xbm2l" podStartSLOduration=3.263830072 podStartE2EDuration="40.145400539s" podCreationTimestamp="2026-01-05 20:27:56 +0000 UTC" firstStartedPulling="2026-01-05 20:27:58.39783158 +0000 UTC m=+1365.107015454" lastFinishedPulling="2026-01-05 20:28:35.279402047 +0000 UTC m=+1401.988585921" observedRunningTime="2026-01-05 20:28:36.136848155 +0000 UTC m=+1402.846032019" watchObservedRunningTime="2026-01-05 20:28:36.145400539 +0000 UTC m=+1402.854584413" Jan 05 20:28:36 crc kubenswrapper[4754]: I0105 20:28:36.161137 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-gjlm4"] Jan 05 20:28:36 crc kubenswrapper[4754]: I0105 20:28:36.171051 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-gjlm4"] Jan 05 20:28:36 crc kubenswrapper[4754]: I0105 20:28:36.191508 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 20:28:36 crc kubenswrapper[4754]: I0105 20:28:36.342588 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-77dc856cfc-xjgrw"] Jan 05 20:28:36 crc 
kubenswrapper[4754]: W0105 20:28:36.595385 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9047860d_2da5_476b_9340_38244322fb95.slice/crio-a4b1d80274944e9b04a59521277af91bca0fe79633d45231e3524c9ca5f87cff WatchSource:0}: Error finding container a4b1d80274944e9b04a59521277af91bca0fe79633d45231e3524c9ca5f87cff: Status 404 returned error can't find the container with id a4b1d80274944e9b04a59521277af91bca0fe79633d45231e3524c9ca5f87cff Jan 05 20:28:36 crc kubenswrapper[4754]: W0105 20:28:36.601507 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd877e0f6_019c_4b97_8a36_9a210b9e4233.slice/crio-c5d5c7325432a6f169d2bf791b50ab76eb1dc49ed7101ecf03014624e6b68dbe WatchSource:0}: Error finding container c5d5c7325432a6f169d2bf791b50ab76eb1dc49ed7101ecf03014624e6b68dbe: Status 404 returned error can't find the container with id c5d5c7325432a6f169d2bf791b50ab76eb1dc49ed7101ecf03014624e6b68dbe Jan 05 20:28:36 crc kubenswrapper[4754]: W0105 20:28:36.605944 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6903533f_d965_4b1f_81b1_630dda816dbc.slice/crio-38860e86c36247a422fb401dde0b5dda01e580e8df771b34a413cc02b66cca8b WatchSource:0}: Error finding container 38860e86c36247a422fb401dde0b5dda01e580e8df771b34a413cc02b66cca8b: Status 404 returned error can't find the container with id 38860e86c36247a422fb401dde0b5dda01e580e8df771b34a413cc02b66cca8b Jan 05 20:28:36 crc kubenswrapper[4754]: I0105 20:28:36.987706 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7656f8454b-v7vj9"] Jan 05 20:28:37 crc kubenswrapper[4754]: I0105 20:28:37.148314 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"d877e0f6-019c-4b97-8a36-9a210b9e4233","Type":"ContainerStarted","Data":"c5d5c7325432a6f169d2bf791b50ab76eb1dc49ed7101ecf03014624e6b68dbe"} Jan 05 20:28:37 crc kubenswrapper[4754]: I0105 20:28:37.150417 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7656f8454b-v7vj9" event={"ID":"d754693c-f3ef-4b36-b827-f88a11c1b76a","Type":"ContainerStarted","Data":"daae67a095df97118b314bf757e5e6c0395694dec3e2edf02e902c90058c068a"} Jan 05 20:28:37 crc kubenswrapper[4754]: I0105 20:28:37.156938 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4349961-956a-4a53-a1a5-11fcffcbd0f7","Type":"ContainerStarted","Data":"c077c60356d4c8453a24a113c03002f3ce4ea7be083f9569f55ee21f68e27fff"} Jan 05 20:28:37 crc kubenswrapper[4754]: I0105 20:28:37.158769 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-k46bt" event={"ID":"de4b9353-df59-4a87-80f7-26a1d1637032","Type":"ContainerStarted","Data":"9063940e8012108d9397da061fcc85cfc29a032e1410b1101e932fa332d143ff"} Jan 05 20:28:37 crc kubenswrapper[4754]: I0105 20:28:37.164830 4754 generic.go:334] "Generic (PLEG): container finished" podID="69a9c7cb-a8f4-4099-a4b4-519cfca0a62e" containerID="2b0e827bee21c87ad5e40036f0fd948987a6b1282d54bfa02de73460e271d81d" exitCode=0 Jan 05 20:28:37 crc kubenswrapper[4754]: I0105 20:28:37.164881 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-wkf6w" event={"ID":"69a9c7cb-a8f4-4099-a4b4-519cfca0a62e","Type":"ContainerDied","Data":"2b0e827bee21c87ad5e40036f0fd948987a6b1282d54bfa02de73460e271d81d"} Jan 05 20:28:37 crc kubenswrapper[4754]: I0105 20:28:37.167562 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77dc856cfc-xjgrw" event={"ID":"6903533f-d965-4b1f-81b1-630dda816dbc","Type":"ContainerStarted","Data":"833749fd52fedcca325b691058057b0daeb7aff9f70bd944aa48b34381764790"} Jan 05 20:28:37 crc kubenswrapper[4754]: I0105 
20:28:37.167605 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77dc856cfc-xjgrw" event={"ID":"6903533f-d965-4b1f-81b1-630dda816dbc","Type":"ContainerStarted","Data":"38860e86c36247a422fb401dde0b5dda01e580e8df771b34a413cc02b66cca8b"} Jan 05 20:28:37 crc kubenswrapper[4754]: I0105 20:28:37.170805 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9047860d-2da5-476b-9340-38244322fb95","Type":"ContainerStarted","Data":"a4b1d80274944e9b04a59521277af91bca0fe79633d45231e3524c9ca5f87cff"} Jan 05 20:28:37 crc kubenswrapper[4754]: I0105 20:28:37.199330 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-k46bt" podStartSLOduration=27.199311533 podStartE2EDuration="27.199311533s" podCreationTimestamp="2026-01-05 20:28:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:28:37.176712481 +0000 UTC m=+1403.885896355" watchObservedRunningTime="2026-01-05 20:28:37.199311533 +0000 UTC m=+1403.908495477" Jan 05 20:28:37 crc kubenswrapper[4754]: I0105 20:28:37.614756 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="696009b0-0456-4c36-bc57-9c5ed0d81184" path="/var/lib/kubelet/pods/696009b0-0456-4c36-bc57-9c5ed0d81184/volumes" Jan 05 20:28:38 crc kubenswrapper[4754]: I0105 20:28:38.186906 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77dc856cfc-xjgrw" event={"ID":"6903533f-d965-4b1f-81b1-630dda816dbc","Type":"ContainerStarted","Data":"b44b737ebeb1d7ce3f582de94efa1c31949ac439a08700d22f0e2d1910583b49"} Jan 05 20:28:38 crc kubenswrapper[4754]: I0105 20:28:38.187495 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-77dc856cfc-xjgrw" Jan 05 20:28:38 crc kubenswrapper[4754]: I0105 20:28:38.190273 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"9047860d-2da5-476b-9340-38244322fb95","Type":"ContainerStarted","Data":"0edd94ba41a347f0172d53761a6a79a528ca3c33bbf6bef6f31f6c7ac467566c"} Jan 05 20:28:38 crc kubenswrapper[4754]: I0105 20:28:38.190333 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9047860d-2da5-476b-9340-38244322fb95","Type":"ContainerStarted","Data":"e3c82caeb13d5b2201dbea5bb4bf5d07fbf194feb2ec22a360be7d681501d9b8"} Jan 05 20:28:38 crc kubenswrapper[4754]: I0105 20:28:38.199459 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d877e0f6-019c-4b97-8a36-9a210b9e4233","Type":"ContainerStarted","Data":"38fc911195a2fddb8faae4d68aa93e6c536d2633ca852d8558b90ab47cda31ac"} Jan 05 20:28:38 crc kubenswrapper[4754]: I0105 20:28:38.199503 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d877e0f6-019c-4b97-8a36-9a210b9e4233","Type":"ContainerStarted","Data":"146d9ffc3b8e9ece5f9e2721869ea9a8b5e8fa909e02c39efdf21d47085a2c30"} Jan 05 20:28:38 crc kubenswrapper[4754]: I0105 20:28:38.208870 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7656f8454b-v7vj9" event={"ID":"d754693c-f3ef-4b36-b827-f88a11c1b76a","Type":"ContainerStarted","Data":"5d518490397feb11906f343e48d9d4351c2229f2328a401e73075b7968823bca"} Jan 05 20:28:38 crc kubenswrapper[4754]: I0105 20:28:38.208960 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7656f8454b-v7vj9" event={"ID":"d754693c-f3ef-4b36-b827-f88a11c1b76a","Type":"ContainerStarted","Data":"3a936e2e853b1adc051eae7f3c475e66763ab184e105d8d9115e794f4d48e515"} Jan 05 20:28:38 crc kubenswrapper[4754]: I0105 20:28:38.209620 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7656f8454b-v7vj9" Jan 05 20:28:38 crc kubenswrapper[4754]: I0105 
20:28:38.222434 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-77dc856cfc-xjgrw" podStartSLOduration=4.222409308 podStartE2EDuration="4.222409308s" podCreationTimestamp="2026-01-05 20:28:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:28:38.207748374 +0000 UTC m=+1404.916932248" watchObservedRunningTime="2026-01-05 20:28:38.222409308 +0000 UTC m=+1404.931593182" Jan 05 20:28:38 crc kubenswrapper[4754]: I0105 20:28:38.224059 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-wkf6w" event={"ID":"69a9c7cb-a8f4-4099-a4b4-519cfca0a62e","Type":"ContainerStarted","Data":"13dc15e223e55fee6944a7c78e56e5be146b36d9556913d5ec9a13b65fb81849"} Jan 05 20:28:38 crc kubenswrapper[4754]: I0105 20:28:38.224139 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ccc5c4795-wkf6w" Jan 05 20:28:38 crc kubenswrapper[4754]: I0105 20:28:38.242128 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.242084283 podStartE2EDuration="7.242084283s" podCreationTimestamp="2026-01-05 20:28:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:28:38.237946015 +0000 UTC m=+1404.947129889" watchObservedRunningTime="2026-01-05 20:28:38.242084283 +0000 UTC m=+1404.951268157" Jan 05 20:28:38 crc kubenswrapper[4754]: I0105 20:28:38.269110 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.26908513 podStartE2EDuration="7.26908513s" podCreationTimestamp="2026-01-05 20:28:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-05 20:28:38.265598249 +0000 UTC m=+1404.974782123" watchObservedRunningTime="2026-01-05 20:28:38.26908513 +0000 UTC m=+1404.978269004" Jan 05 20:28:38 crc kubenswrapper[4754]: I0105 20:28:38.335587 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7656f8454b-v7vj9" podStartSLOduration=6.335565181 podStartE2EDuration="6.335565181s" podCreationTimestamp="2026-01-05 20:28:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:28:38.297380641 +0000 UTC m=+1405.006564515" watchObservedRunningTime="2026-01-05 20:28:38.335565181 +0000 UTC m=+1405.044749045" Jan 05 20:28:38 crc kubenswrapper[4754]: I0105 20:28:38.339836 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ccc5c4795-wkf6w" podStartSLOduration=6.339823613 podStartE2EDuration="6.339823613s" podCreationTimestamp="2026-01-05 20:28:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:28:38.324108321 +0000 UTC m=+1405.033292195" watchObservedRunningTime="2026-01-05 20:28:38.339823613 +0000 UTC m=+1405.049007487" Jan 05 20:28:39 crc kubenswrapper[4754]: I0105 20:28:39.239351 4754 generic.go:334] "Generic (PLEG): container finished" podID="32c0a4c6-d974-402e-bdf8-13c8b2e91b3f" containerID="dd6c6f551876be27433fe4319c7ec6af0c33eded6d380725912104ff1fc0d498" exitCode=0 Jan 05 20:28:39 crc kubenswrapper[4754]: I0105 20:28:39.239442 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-xbm2l" event={"ID":"32c0a4c6-d974-402e-bdf8-13c8b2e91b3f","Type":"ContainerDied","Data":"dd6c6f551876be27433fe4319c7ec6af0c33eded6d380725912104ff1fc0d498"} Jan 05 20:28:40 crc kubenswrapper[4754]: I0105 20:28:40.253896 4754 generic.go:334] "Generic (PLEG): container finished" 
podID="e9642fa4-a57d-443c-be01-f46706cc0368" containerID="d38b6e4e623d207f39e0ad135bfb3e2e049ab9fa794850c982d0edbcbf0c8543" exitCode=0 Jan 05 20:28:40 crc kubenswrapper[4754]: I0105 20:28:40.254476 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4c48n" event={"ID":"e9642fa4-a57d-443c-be01-f46706cc0368","Type":"ContainerDied","Data":"d38b6e4e623d207f39e0ad135bfb3e2e049ab9fa794850c982d0edbcbf0c8543"} Jan 05 20:28:42 crc kubenswrapper[4754]: I0105 20:28:42.038342 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 05 20:28:42 crc kubenswrapper[4754]: I0105 20:28:42.038655 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 05 20:28:42 crc kubenswrapper[4754]: I0105 20:28:42.094384 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 05 20:28:42 crc kubenswrapper[4754]: I0105 20:28:42.107798 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 05 20:28:42 crc kubenswrapper[4754]: I0105 20:28:42.115615 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 05 20:28:42 crc kubenswrapper[4754]: I0105 20:28:42.115670 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 05 20:28:42 crc kubenswrapper[4754]: I0105 20:28:42.171767 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 05 20:28:42 crc kubenswrapper[4754]: I0105 20:28:42.181981 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 05 20:28:42 crc kubenswrapper[4754]: I0105 20:28:42.297547 4754 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 05 20:28:42 crc kubenswrapper[4754]: I0105 20:28:42.297608 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 05 20:28:42 crc kubenswrapper[4754]: I0105 20:28:42.297621 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 05 20:28:42 crc kubenswrapper[4754]: I0105 20:28:42.297633 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 05 20:28:43 crc kubenswrapper[4754]: I0105 20:28:43.130671 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5ccc5c4795-wkf6w" Jan 05 20:28:43 crc kubenswrapper[4754]: I0105 20:28:43.280781 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-vhwxz"] Jan 05 20:28:43 crc kubenswrapper[4754]: I0105 20:28:43.281249 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57c957c4ff-vhwxz" podUID="6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8" containerName="dnsmasq-dns" containerID="cri-o://548abe8522722aed6df97d7456eb0316018f24ad41f61ec974311a701c2b507f" gracePeriod=10 Jan 05 20:28:43 crc kubenswrapper[4754]: I0105 20:28:43.316266 4754 generic.go:334] "Generic (PLEG): container finished" podID="de4b9353-df59-4a87-80f7-26a1d1637032" containerID="9063940e8012108d9397da061fcc85cfc29a032e1410b1101e932fa332d143ff" exitCode=0 Jan 05 20:28:43 crc kubenswrapper[4754]: I0105 20:28:43.317647 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-k46bt" event={"ID":"de4b9353-df59-4a87-80f7-26a1d1637032","Type":"ContainerDied","Data":"9063940e8012108d9397da061fcc85cfc29a032e1410b1101e932fa332d143ff"} Jan 05 20:28:43 crc kubenswrapper[4754]: I0105 20:28:43.988163 4754 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/dnsmasq-dns-57c957c4ff-vhwxz" podUID="6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.187:5353: connect: connection refused" Jan 05 20:28:44 crc kubenswrapper[4754]: I0105 20:28:44.329355 4754 generic.go:334] "Generic (PLEG): container finished" podID="6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8" containerID="548abe8522722aed6df97d7456eb0316018f24ad41f61ec974311a701c2b507f" exitCode=0 Jan 05 20:28:44 crc kubenswrapper[4754]: I0105 20:28:44.329418 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-vhwxz" event={"ID":"6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8","Type":"ContainerDied","Data":"548abe8522722aed6df97d7456eb0316018f24ad41f61ec974311a701c2b507f"} Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.348738 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-k46bt" event={"ID":"de4b9353-df59-4a87-80f7-26a1d1637032","Type":"ContainerDied","Data":"eef3dfece0465947236cc8ddb8d75eb0c832c3fbcd43f0e8e6c8f012e4c0abae"} Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.349239 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eef3dfece0465947236cc8ddb8d75eb0c832c3fbcd43f0e8e6c8f012e4c0abae" Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.353258 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4c48n" event={"ID":"e9642fa4-a57d-443c-be01-f46706cc0368","Type":"ContainerDied","Data":"4de81bd7a0ef707269898f1085ee5f1d822f0662a21a087bdee1f6051f7b498e"} Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.353905 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4de81bd7a0ef707269898f1085ee5f1d822f0662a21a087bdee1f6051f7b498e" Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.354987 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-db-sync-xbm2l" event={"ID":"32c0a4c6-d974-402e-bdf8-13c8b2e91b3f","Type":"ContainerDied","Data":"0e92d69428cb28671310f41c3862f7314984887da13bea63c2ac042d692d00f5"} Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.355040 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e92d69428cb28671310f41c3862f7314984887da13bea63c2ac042d692d00f5" Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.578474 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-4c48n" Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.579790 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-k46bt" Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.580089 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-xbm2l" Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.749456 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4h59x\" (UniqueName: \"kubernetes.io/projected/de4b9353-df59-4a87-80f7-26a1d1637032-kube-api-access-4h59x\") pod \"de4b9353-df59-4a87-80f7-26a1d1637032\" (UID: \"de4b9353-df59-4a87-80f7-26a1d1637032\") " Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.749830 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/de4b9353-df59-4a87-80f7-26a1d1637032-fernet-keys\") pod \"de4b9353-df59-4a87-80f7-26a1d1637032\" (UID: \"de4b9353-df59-4a87-80f7-26a1d1637032\") " Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.749861 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de4b9353-df59-4a87-80f7-26a1d1637032-scripts\") pod \"de4b9353-df59-4a87-80f7-26a1d1637032\" (UID: 
\"de4b9353-df59-4a87-80f7-26a1d1637032\") " Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.749905 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clh2r\" (UniqueName: \"kubernetes.io/projected/32c0a4c6-d974-402e-bdf8-13c8b2e91b3f-kube-api-access-clh2r\") pod \"32c0a4c6-d974-402e-bdf8-13c8b2e91b3f\" (UID: \"32c0a4c6-d974-402e-bdf8-13c8b2e91b3f\") " Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.749950 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32c0a4c6-d974-402e-bdf8-13c8b2e91b3f-logs\") pod \"32c0a4c6-d974-402e-bdf8-13c8b2e91b3f\" (UID: \"32c0a4c6-d974-402e-bdf8-13c8b2e91b3f\") " Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.749978 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9642fa4-a57d-443c-be01-f46706cc0368-combined-ca-bundle\") pod \"e9642fa4-a57d-443c-be01-f46706cc0368\" (UID: \"e9642fa4-a57d-443c-be01-f46706cc0368\") " Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.750033 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32c0a4c6-d974-402e-bdf8-13c8b2e91b3f-config-data\") pod \"32c0a4c6-d974-402e-bdf8-13c8b2e91b3f\" (UID: \"32c0a4c6-d974-402e-bdf8-13c8b2e91b3f\") " Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.750122 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvbrd\" (UniqueName: \"kubernetes.io/projected/e9642fa4-a57d-443c-be01-f46706cc0368-kube-api-access-kvbrd\") pod \"e9642fa4-a57d-443c-be01-f46706cc0368\" (UID: \"e9642fa4-a57d-443c-be01-f46706cc0368\") " Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.750181 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/de4b9353-df59-4a87-80f7-26a1d1637032-config-data\") pod \"de4b9353-df59-4a87-80f7-26a1d1637032\" (UID: \"de4b9353-df59-4a87-80f7-26a1d1637032\") " Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.750203 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/de4b9353-df59-4a87-80f7-26a1d1637032-credential-keys\") pod \"de4b9353-df59-4a87-80f7-26a1d1637032\" (UID: \"de4b9353-df59-4a87-80f7-26a1d1637032\") " Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.750271 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de4b9353-df59-4a87-80f7-26a1d1637032-combined-ca-bundle\") pod \"de4b9353-df59-4a87-80f7-26a1d1637032\" (UID: \"de4b9353-df59-4a87-80f7-26a1d1637032\") " Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.750389 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e9642fa4-a57d-443c-be01-f46706cc0368-db-sync-config-data\") pod \"e9642fa4-a57d-443c-be01-f46706cc0368\" (UID: \"e9642fa4-a57d-443c-be01-f46706cc0368\") " Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.750423 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32c0a4c6-d974-402e-bdf8-13c8b2e91b3f-combined-ca-bundle\") pod \"32c0a4c6-d974-402e-bdf8-13c8b2e91b3f\" (UID: \"32c0a4c6-d974-402e-bdf8-13c8b2e91b3f\") " Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.750512 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32c0a4c6-d974-402e-bdf8-13c8b2e91b3f-scripts\") pod \"32c0a4c6-d974-402e-bdf8-13c8b2e91b3f\" (UID: \"32c0a4c6-d974-402e-bdf8-13c8b2e91b3f\") " Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 
20:28:45.752597 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32c0a4c6-d974-402e-bdf8-13c8b2e91b3f-logs" (OuterVolumeSpecName: "logs") pod "32c0a4c6-d974-402e-bdf8-13c8b2e91b3f" (UID: "32c0a4c6-d974-402e-bdf8-13c8b2e91b3f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.758392 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de4b9353-df59-4a87-80f7-26a1d1637032-kube-api-access-4h59x" (OuterVolumeSpecName: "kube-api-access-4h59x") pod "de4b9353-df59-4a87-80f7-26a1d1637032" (UID: "de4b9353-df59-4a87-80f7-26a1d1637032"). InnerVolumeSpecName "kube-api-access-4h59x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.762702 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-vhwxz" Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.765187 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de4b9353-df59-4a87-80f7-26a1d1637032-scripts" (OuterVolumeSpecName: "scripts") pod "de4b9353-df59-4a87-80f7-26a1d1637032" (UID: "de4b9353-df59-4a87-80f7-26a1d1637032"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.765302 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de4b9353-df59-4a87-80f7-26a1d1637032-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "de4b9353-df59-4a87-80f7-26a1d1637032" (UID: "de4b9353-df59-4a87-80f7-26a1d1637032"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.766381 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9642fa4-a57d-443c-be01-f46706cc0368-kube-api-access-kvbrd" (OuterVolumeSpecName: "kube-api-access-kvbrd") pod "e9642fa4-a57d-443c-be01-f46706cc0368" (UID: "e9642fa4-a57d-443c-be01-f46706cc0368"). InnerVolumeSpecName "kube-api-access-kvbrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.766383 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9642fa4-a57d-443c-be01-f46706cc0368-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e9642fa4-a57d-443c-be01-f46706cc0368" (UID: "e9642fa4-a57d-443c-be01-f46706cc0368"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.766363 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32c0a4c6-d974-402e-bdf8-13c8b2e91b3f-scripts" (OuterVolumeSpecName: "scripts") pod "32c0a4c6-d974-402e-bdf8-13c8b2e91b3f" (UID: "32c0a4c6-d974-402e-bdf8-13c8b2e91b3f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.766453 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32c0a4c6-d974-402e-bdf8-13c8b2e91b3f-kube-api-access-clh2r" (OuterVolumeSpecName: "kube-api-access-clh2r") pod "32c0a4c6-d974-402e-bdf8-13c8b2e91b3f" (UID: "32c0a4c6-d974-402e-bdf8-13c8b2e91b3f"). InnerVolumeSpecName "kube-api-access-clh2r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.768494 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de4b9353-df59-4a87-80f7-26a1d1637032-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "de4b9353-df59-4a87-80f7-26a1d1637032" (UID: "de4b9353-df59-4a87-80f7-26a1d1637032"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.797576 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9642fa4-a57d-443c-be01-f46706cc0368-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e9642fa4-a57d-443c-be01-f46706cc0368" (UID: "e9642fa4-a57d-443c-be01-f46706cc0368"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.817788 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de4b9353-df59-4a87-80f7-26a1d1637032-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "de4b9353-df59-4a87-80f7-26a1d1637032" (UID: "de4b9353-df59-4a87-80f7-26a1d1637032"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.827663 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32c0a4c6-d974-402e-bdf8-13c8b2e91b3f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "32c0a4c6-d974-402e-bdf8-13c8b2e91b3f" (UID: "32c0a4c6-d974-402e-bdf8-13c8b2e91b3f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.839651 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32c0a4c6-d974-402e-bdf8-13c8b2e91b3f-config-data" (OuterVolumeSpecName: "config-data") pod "32c0a4c6-d974-402e-bdf8-13c8b2e91b3f" (UID: "32c0a4c6-d974-402e-bdf8-13c8b2e91b3f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.856240 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de4b9353-df59-4a87-80f7-26a1d1637032-config-data" (OuterVolumeSpecName: "config-data") pod "de4b9353-df59-4a87-80f7-26a1d1637032" (UID: "de4b9353-df59-4a87-80f7-26a1d1637032"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.865663 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de4b9353-df59-4a87-80f7-26a1d1637032-config-data\") pod \"de4b9353-df59-4a87-80f7-26a1d1637032\" (UID: \"de4b9353-df59-4a87-80f7-26a1d1637032\") " Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.869513 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8-dns-swift-storage-0\") pod \"6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8\" (UID: \"6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8\") " Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.869758 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8-ovsdbserver-nb\") pod \"6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8\" (UID: \"6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8\") " Jan 05 20:28:45 crc 
kubenswrapper[4754]: W0105 20:28:45.870676 4754 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/de4b9353-df59-4a87-80f7-26a1d1637032/volumes/kubernetes.io~secret/config-data Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.870730 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de4b9353-df59-4a87-80f7-26a1d1637032-config-data" (OuterVolumeSpecName: "config-data") pod "de4b9353-df59-4a87-80f7-26a1d1637032" (UID: "de4b9353-df59-4a87-80f7-26a1d1637032"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.870709 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4h59x\" (UniqueName: \"kubernetes.io/projected/de4b9353-df59-4a87-80f7-26a1d1637032-kube-api-access-4h59x\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.870816 4754 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/de4b9353-df59-4a87-80f7-26a1d1637032-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.870834 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de4b9353-df59-4a87-80f7-26a1d1637032-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.870853 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clh2r\" (UniqueName: \"kubernetes.io/projected/32c0a4c6-d974-402e-bdf8-13c8b2e91b3f-kube-api-access-clh2r\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.870868 4754 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32c0a4c6-d974-402e-bdf8-13c8b2e91b3f-logs\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:45 crc 
kubenswrapper[4754]: I0105 20:28:45.870882 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9642fa4-a57d-443c-be01-f46706cc0368-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.870896 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32c0a4c6-d974-402e-bdf8-13c8b2e91b3f-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.870910 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvbrd\" (UniqueName: \"kubernetes.io/projected/e9642fa4-a57d-443c-be01-f46706cc0368-kube-api-access-kvbrd\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.870921 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de4b9353-df59-4a87-80f7-26a1d1637032-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.870932 4754 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/de4b9353-df59-4a87-80f7-26a1d1637032-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.870943 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de4b9353-df59-4a87-80f7-26a1d1637032-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.870955 4754 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e9642fa4-a57d-443c-be01-f46706cc0368-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.870968 4754 reconciler_common.go:293] "Volume detached for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32c0a4c6-d974-402e-bdf8-13c8b2e91b3f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.870980 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32c0a4c6-d974-402e-bdf8-13c8b2e91b3f-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.925908 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8" (UID: "6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.925947 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8" (UID: "6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.972784 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8-config\") pod \"6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8\" (UID: \"6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8\") " Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.972864 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8-dns-svc\") pod \"6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8\" (UID: \"6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8\") " Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.973069 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8-ovsdbserver-sb\") pod \"6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8\" (UID: \"6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8\") " Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.973131 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dn8gr\" (UniqueName: \"kubernetes.io/projected/6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8-kube-api-access-dn8gr\") pod \"6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8\" (UID: \"6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8\") " Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.974329 4754 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.974471 4754 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8-ovsdbserver-nb\") on node \"crc\" DevicePath 
\"\"" Jan 05 20:28:45 crc kubenswrapper[4754]: I0105 20:28:45.977320 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8-kube-api-access-dn8gr" (OuterVolumeSpecName: "kube-api-access-dn8gr") pod "6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8" (UID: "6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8"). InnerVolumeSpecName "kube-api-access-dn8gr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.029363 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8" (UID: "6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.032737 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8-config" (OuterVolumeSpecName: "config") pod "6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8" (UID: "6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.060877 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8" (UID: "6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.076658 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.076700 4754 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.076713 4754 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.076725 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dn8gr\" (UniqueName: \"kubernetes.io/projected/6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8-kube-api-access-dn8gr\") on node \"crc\" DevicePath \"\"" Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.326155 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.334666 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.374172 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4349961-956a-4a53-a1a5-11fcffcbd0f7","Type":"ContainerStarted","Data":"9150c878eceee0002616ea2ef8e18b6d0417064010d528d3ae38c02b7146a5ec"} Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.379427 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-vhwxz" 
event={"ID":"6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8","Type":"ContainerDied","Data":"ea586fa56e7f18578f3ba984603ea0df8fd9aa92bdfb1216e49c763854f4a3dd"} Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.379463 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-k46bt" Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.379499 4754 scope.go:117] "RemoveContainer" containerID="548abe8522722aed6df97d7456eb0316018f24ad41f61ec974311a701c2b507f" Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.379694 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-vhwxz" Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.381191 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-4c48n" Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.382587 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.383336 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-xbm2l" Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.416714 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.493751 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-vhwxz"] Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.498511 4754 scope.go:117] "RemoveContainer" containerID="6367ed7894b31c7c6488a1589c52807f03f9b3cfa6b361a7c495c145a4fe94d7" Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.573599 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-vhwxz"] Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.866220 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-665fb988d9-9wrmb"] Jan 05 20:28:46 crc kubenswrapper[4754]: E0105 20:28:46.866949 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9642fa4-a57d-443c-be01-f46706cc0368" containerName="barbican-db-sync" Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.866971 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9642fa4-a57d-443c-be01-f46706cc0368" containerName="barbican-db-sync" Jan 05 20:28:46 crc kubenswrapper[4754]: E0105 20:28:46.867002 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8" containerName="init" Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.867009 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8" containerName="init" Jan 05 20:28:46 crc kubenswrapper[4754]: E0105 20:28:46.867030 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8" containerName="dnsmasq-dns" Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.867037 4754 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8" containerName="dnsmasq-dns" Jan 05 20:28:46 crc kubenswrapper[4754]: E0105 20:28:46.867073 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="696009b0-0456-4c36-bc57-9c5ed0d81184" containerName="dnsmasq-dns" Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.867080 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="696009b0-0456-4c36-bc57-9c5ed0d81184" containerName="dnsmasq-dns" Jan 05 20:28:46 crc kubenswrapper[4754]: E0105 20:28:46.867097 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="696009b0-0456-4c36-bc57-9c5ed0d81184" containerName="init" Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.867103 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="696009b0-0456-4c36-bc57-9c5ed0d81184" containerName="init" Jan 05 20:28:46 crc kubenswrapper[4754]: E0105 20:28:46.867116 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de4b9353-df59-4a87-80f7-26a1d1637032" containerName="keystone-bootstrap" Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.867126 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="de4b9353-df59-4a87-80f7-26a1d1637032" containerName="keystone-bootstrap" Jan 05 20:28:46 crc kubenswrapper[4754]: E0105 20:28:46.867138 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32c0a4c6-d974-402e-bdf8-13c8b2e91b3f" containerName="placement-db-sync" Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.867145 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="32c0a4c6-d974-402e-bdf8-13c8b2e91b3f" containerName="placement-db-sync" Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.867444 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9642fa4-a57d-443c-be01-f46706cc0368" containerName="barbican-db-sync" Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.867472 4754 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="696009b0-0456-4c36-bc57-9c5ed0d81184" containerName="dnsmasq-dns" Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.867492 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="de4b9353-df59-4a87-80f7-26a1d1637032" containerName="keystone-bootstrap" Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.867503 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8" containerName="dnsmasq-dns" Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.867518 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="32c0a4c6-d974-402e-bdf8-13c8b2e91b3f" containerName="placement-db-sync" Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.869067 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-665fb988d9-9wrmb" Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.872608 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.872868 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.873052 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-tf8x6" Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.884033 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7ffdc87456-jd4rc"] Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.886776 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7ffdc87456-jd4rc" Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.916027 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.916433 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.916579 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-4fsqs" Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.916707 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.916819 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.923994 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-67844b756c-gvtvn"] Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.926082 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-67844b756c-gvtvn" Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.935334 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.935585 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-dkn46" Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.938042 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.938631 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.938694 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.938763 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.953646 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-9ddd6f8d6-rrvh5"] Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.957230 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-9ddd6f8d6-rrvh5" Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.974423 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 05 20:28:46 crc kubenswrapper[4754]: I0105 20:28:46.983155 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7ffdc87456-jd4rc"] Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.024846 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19c37996-a1c0-45b2-9ca0-0af72f831909-combined-ca-bundle\") pod \"placement-7ffdc87456-jd4rc\" (UID: \"19c37996-a1c0-45b2-9ca0-0af72f831909\") " pod="openstack/placement-7ffdc87456-jd4rc" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.026609 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8d0d587-cbf6-4c67-9d08-297250f6c5e5-internal-tls-certs\") pod \"keystone-67844b756c-gvtvn\" (UID: \"b8d0d587-cbf6-4c67-9d08-297250f6c5e5\") " pod="openstack/keystone-67844b756c-gvtvn" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.027003 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/19c37996-a1c0-45b2-9ca0-0af72f831909-public-tls-certs\") pod \"placement-7ffdc87456-jd4rc\" (UID: \"19c37996-a1c0-45b2-9ca0-0af72f831909\") " pod="openstack/placement-7ffdc87456-jd4rc" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.027048 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvgzw\" (UniqueName: \"kubernetes.io/projected/19c37996-a1c0-45b2-9ca0-0af72f831909-kube-api-access-pvgzw\") pod \"placement-7ffdc87456-jd4rc\" (UID: 
\"19c37996-a1c0-45b2-9ca0-0af72f831909\") " pod="openstack/placement-7ffdc87456-jd4rc" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.027129 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f3094bb-a30e-433f-b167-9a753260191a-logs\") pod \"barbican-worker-665fb988d9-9wrmb\" (UID: \"2f3094bb-a30e-433f-b167-9a753260191a\") " pod="openstack/barbican-worker-665fb988d9-9wrmb" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.027175 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdrpv\" (UniqueName: \"kubernetes.io/projected/2f3094bb-a30e-433f-b167-9a753260191a-kube-api-access-tdrpv\") pod \"barbican-worker-665fb988d9-9wrmb\" (UID: \"2f3094bb-a30e-433f-b167-9a753260191a\") " pod="openstack/barbican-worker-665fb988d9-9wrmb" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.027233 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b8d0d587-cbf6-4c67-9d08-297250f6c5e5-credential-keys\") pod \"keystone-67844b756c-gvtvn\" (UID: \"b8d0d587-cbf6-4c67-9d08-297250f6c5e5\") " pod="openstack/keystone-67844b756c-gvtvn" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.027257 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19c37996-a1c0-45b2-9ca0-0af72f831909-config-data\") pod \"placement-7ffdc87456-jd4rc\" (UID: \"19c37996-a1c0-45b2-9ca0-0af72f831909\") " pod="openstack/placement-7ffdc87456-jd4rc" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.027326 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8d0d587-cbf6-4c67-9d08-297250f6c5e5-combined-ca-bundle\") pod 
\"keystone-67844b756c-gvtvn\" (UID: \"b8d0d587-cbf6-4c67-9d08-297250f6c5e5\") " pod="openstack/keystone-67844b756c-gvtvn" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.027399 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2f3094bb-a30e-433f-b167-9a753260191a-config-data-custom\") pod \"barbican-worker-665fb988d9-9wrmb\" (UID: \"2f3094bb-a30e-433f-b167-9a753260191a\") " pod="openstack/barbican-worker-665fb988d9-9wrmb" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.027421 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f3094bb-a30e-433f-b167-9a753260191a-config-data\") pod \"barbican-worker-665fb988d9-9wrmb\" (UID: \"2f3094bb-a30e-433f-b167-9a753260191a\") " pod="openstack/barbican-worker-665fb988d9-9wrmb" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.027475 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxsgq\" (UniqueName: \"kubernetes.io/projected/b8d0d587-cbf6-4c67-9d08-297250f6c5e5-kube-api-access-pxsgq\") pod \"keystone-67844b756c-gvtvn\" (UID: \"b8d0d587-cbf6-4c67-9d08-297250f6c5e5\") " pod="openstack/keystone-67844b756c-gvtvn" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.027571 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b8d0d587-cbf6-4c67-9d08-297250f6c5e5-fernet-keys\") pod \"keystone-67844b756c-gvtvn\" (UID: \"b8d0d587-cbf6-4c67-9d08-297250f6c5e5\") " pod="openstack/keystone-67844b756c-gvtvn" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.027613 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/19c37996-a1c0-45b2-9ca0-0af72f831909-internal-tls-certs\") pod \"placement-7ffdc87456-jd4rc\" (UID: \"19c37996-a1c0-45b2-9ca0-0af72f831909\") " pod="openstack/placement-7ffdc87456-jd4rc" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.027687 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8d0d587-cbf6-4c67-9d08-297250f6c5e5-scripts\") pod \"keystone-67844b756c-gvtvn\" (UID: \"b8d0d587-cbf6-4c67-9d08-297250f6c5e5\") " pod="openstack/keystone-67844b756c-gvtvn" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.027715 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8d0d587-cbf6-4c67-9d08-297250f6c5e5-config-data\") pod \"keystone-67844b756c-gvtvn\" (UID: \"b8d0d587-cbf6-4c67-9d08-297250f6c5e5\") " pod="openstack/keystone-67844b756c-gvtvn" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.027842 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19c37996-a1c0-45b2-9ca0-0af72f831909-logs\") pod \"placement-7ffdc87456-jd4rc\" (UID: \"19c37996-a1c0-45b2-9ca0-0af72f831909\") " pod="openstack/placement-7ffdc87456-jd4rc" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.027870 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f3094bb-a30e-433f-b167-9a753260191a-combined-ca-bundle\") pod \"barbican-worker-665fb988d9-9wrmb\" (UID: \"2f3094bb-a30e-433f-b167-9a753260191a\") " pod="openstack/barbican-worker-665fb988d9-9wrmb" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.027994 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/19c37996-a1c0-45b2-9ca0-0af72f831909-scripts\") pod \"placement-7ffdc87456-jd4rc\" (UID: \"19c37996-a1c0-45b2-9ca0-0af72f831909\") " pod="openstack/placement-7ffdc87456-jd4rc" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.028019 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8d0d587-cbf6-4c67-9d08-297250f6c5e5-public-tls-certs\") pod \"keystone-67844b756c-gvtvn\" (UID: \"b8d0d587-cbf6-4c67-9d08-297250f6c5e5\") " pod="openstack/keystone-67844b756c-gvtvn" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.043153 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-9ddd6f8d6-rrvh5"] Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.150507 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-665fb988d9-9wrmb"] Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.167672 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39808c45-50a7-4712-92b4-8e962f2672d1-config-data\") pod \"barbican-keystone-listener-9ddd6f8d6-rrvh5\" (UID: \"39808c45-50a7-4712-92b4-8e962f2672d1\") " pod="openstack/barbican-keystone-listener-9ddd6f8d6-rrvh5" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.167796 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b8d0d587-cbf6-4c67-9d08-297250f6c5e5-fernet-keys\") pod \"keystone-67844b756c-gvtvn\" (UID: \"b8d0d587-cbf6-4c67-9d08-297250f6c5e5\") " pod="openstack/keystone-67844b756c-gvtvn" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.167847 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/19c37996-a1c0-45b2-9ca0-0af72f831909-internal-tls-certs\") pod \"placement-7ffdc87456-jd4rc\" (UID: \"19c37996-a1c0-45b2-9ca0-0af72f831909\") " pod="openstack/placement-7ffdc87456-jd4rc" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.167984 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8d0d587-cbf6-4c67-9d08-297250f6c5e5-scripts\") pod \"keystone-67844b756c-gvtvn\" (UID: \"b8d0d587-cbf6-4c67-9d08-297250f6c5e5\") " pod="openstack/keystone-67844b756c-gvtvn" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.168029 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8d0d587-cbf6-4c67-9d08-297250f6c5e5-config-data\") pod \"keystone-67844b756c-gvtvn\" (UID: \"b8d0d587-cbf6-4c67-9d08-297250f6c5e5\") " pod="openstack/keystone-67844b756c-gvtvn" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.168078 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19c37996-a1c0-45b2-9ca0-0af72f831909-logs\") pod \"placement-7ffdc87456-jd4rc\" (UID: \"19c37996-a1c0-45b2-9ca0-0af72f831909\") " pod="openstack/placement-7ffdc87456-jd4rc" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.168127 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f3094bb-a30e-433f-b167-9a753260191a-combined-ca-bundle\") pod \"barbican-worker-665fb988d9-9wrmb\" (UID: \"2f3094bb-a30e-433f-b167-9a753260191a\") " pod="openstack/barbican-worker-665fb988d9-9wrmb" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.168189 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19c37996-a1c0-45b2-9ca0-0af72f831909-scripts\") pod \"placement-7ffdc87456-jd4rc\" (UID: 
\"19c37996-a1c0-45b2-9ca0-0af72f831909\") " pod="openstack/placement-7ffdc87456-jd4rc" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.168216 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8d0d587-cbf6-4c67-9d08-297250f6c5e5-public-tls-certs\") pod \"keystone-67844b756c-gvtvn\" (UID: \"b8d0d587-cbf6-4c67-9d08-297250f6c5e5\") " pod="openstack/keystone-67844b756c-gvtvn" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.168329 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19c37996-a1c0-45b2-9ca0-0af72f831909-combined-ca-bundle\") pod \"placement-7ffdc87456-jd4rc\" (UID: \"19c37996-a1c0-45b2-9ca0-0af72f831909\") " pod="openstack/placement-7ffdc87456-jd4rc" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.168502 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39808c45-50a7-4712-92b4-8e962f2672d1-combined-ca-bundle\") pod \"barbican-keystone-listener-9ddd6f8d6-rrvh5\" (UID: \"39808c45-50a7-4712-92b4-8e962f2672d1\") " pod="openstack/barbican-keystone-listener-9ddd6f8d6-rrvh5" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.168576 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8d0d587-cbf6-4c67-9d08-297250f6c5e5-internal-tls-certs\") pod \"keystone-67844b756c-gvtvn\" (UID: \"b8d0d587-cbf6-4c67-9d08-297250f6c5e5\") " pod="openstack/keystone-67844b756c-gvtvn" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.168646 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/19c37996-a1c0-45b2-9ca0-0af72f831909-public-tls-certs\") pod \"placement-7ffdc87456-jd4rc\" (UID: 
\"19c37996-a1c0-45b2-9ca0-0af72f831909\") " pod="openstack/placement-7ffdc87456-jd4rc" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.168672 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvgzw\" (UniqueName: \"kubernetes.io/projected/19c37996-a1c0-45b2-9ca0-0af72f831909-kube-api-access-pvgzw\") pod \"placement-7ffdc87456-jd4rc\" (UID: \"19c37996-a1c0-45b2-9ca0-0af72f831909\") " pod="openstack/placement-7ffdc87456-jd4rc" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.168733 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39808c45-50a7-4712-92b4-8e962f2672d1-config-data-custom\") pod \"barbican-keystone-listener-9ddd6f8d6-rrvh5\" (UID: \"39808c45-50a7-4712-92b4-8e962f2672d1\") " pod="openstack/barbican-keystone-listener-9ddd6f8d6-rrvh5" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.168818 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f3094bb-a30e-433f-b167-9a753260191a-logs\") pod \"barbican-worker-665fb988d9-9wrmb\" (UID: \"2f3094bb-a30e-433f-b167-9a753260191a\") " pod="openstack/barbican-worker-665fb988d9-9wrmb" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.168886 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdrpv\" (UniqueName: \"kubernetes.io/projected/2f3094bb-a30e-433f-b167-9a753260191a-kube-api-access-tdrpv\") pod \"barbican-worker-665fb988d9-9wrmb\" (UID: \"2f3094bb-a30e-433f-b167-9a753260191a\") " pod="openstack/barbican-worker-665fb988d9-9wrmb" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.168951 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b8d0d587-cbf6-4c67-9d08-297250f6c5e5-credential-keys\") pod 
\"keystone-67844b756c-gvtvn\" (UID: \"b8d0d587-cbf6-4c67-9d08-297250f6c5e5\") " pod="openstack/keystone-67844b756c-gvtvn" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.168984 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19c37996-a1c0-45b2-9ca0-0af72f831909-config-data\") pod \"placement-7ffdc87456-jd4rc\" (UID: \"19c37996-a1c0-45b2-9ca0-0af72f831909\") " pod="openstack/placement-7ffdc87456-jd4rc" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.169037 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2qw7\" (UniqueName: \"kubernetes.io/projected/39808c45-50a7-4712-92b4-8e962f2672d1-kube-api-access-h2qw7\") pod \"barbican-keystone-listener-9ddd6f8d6-rrvh5\" (UID: \"39808c45-50a7-4712-92b4-8e962f2672d1\") " pod="openstack/barbican-keystone-listener-9ddd6f8d6-rrvh5" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.169075 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8d0d587-cbf6-4c67-9d08-297250f6c5e5-combined-ca-bundle\") pod \"keystone-67844b756c-gvtvn\" (UID: \"b8d0d587-cbf6-4c67-9d08-297250f6c5e5\") " pod="openstack/keystone-67844b756c-gvtvn" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.169143 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2f3094bb-a30e-433f-b167-9a753260191a-config-data-custom\") pod \"barbican-worker-665fb988d9-9wrmb\" (UID: \"2f3094bb-a30e-433f-b167-9a753260191a\") " pod="openstack/barbican-worker-665fb988d9-9wrmb" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.169170 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f3094bb-a30e-433f-b167-9a753260191a-config-data\") pod 
\"barbican-worker-665fb988d9-9wrmb\" (UID: \"2f3094bb-a30e-433f-b167-9a753260191a\") " pod="openstack/barbican-worker-665fb988d9-9wrmb" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.169200 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39808c45-50a7-4712-92b4-8e962f2672d1-logs\") pod \"barbican-keystone-listener-9ddd6f8d6-rrvh5\" (UID: \"39808c45-50a7-4712-92b4-8e962f2672d1\") " pod="openstack/barbican-keystone-listener-9ddd6f8d6-rrvh5" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.169239 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxsgq\" (UniqueName: \"kubernetes.io/projected/b8d0d587-cbf6-4c67-9d08-297250f6c5e5-kube-api-access-pxsgq\") pod \"keystone-67844b756c-gvtvn\" (UID: \"b8d0d587-cbf6-4c67-9d08-297250f6c5e5\") " pod="openstack/keystone-67844b756c-gvtvn" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.194181 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f3094bb-a30e-433f-b167-9a753260191a-logs\") pod \"barbican-worker-665fb988d9-9wrmb\" (UID: \"2f3094bb-a30e-433f-b167-9a753260191a\") " pod="openstack/barbican-worker-665fb988d9-9wrmb" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.197797 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b8d0d587-cbf6-4c67-9d08-297250f6c5e5-fernet-keys\") pod \"keystone-67844b756c-gvtvn\" (UID: \"b8d0d587-cbf6-4c67-9d08-297250f6c5e5\") " pod="openstack/keystone-67844b756c-gvtvn" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.214133 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19c37996-a1c0-45b2-9ca0-0af72f831909-logs\") pod \"placement-7ffdc87456-jd4rc\" (UID: \"19c37996-a1c0-45b2-9ca0-0af72f831909\") " 
pod="openstack/placement-7ffdc87456-jd4rc" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.219611 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2f3094bb-a30e-433f-b167-9a753260191a-config-data-custom\") pod \"barbican-worker-665fb988d9-9wrmb\" (UID: \"2f3094bb-a30e-433f-b167-9a753260191a\") " pod="openstack/barbican-worker-665fb988d9-9wrmb" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.220066 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f3094bb-a30e-433f-b167-9a753260191a-config-data\") pod \"barbican-worker-665fb988d9-9wrmb\" (UID: \"2f3094bb-a30e-433f-b167-9a753260191a\") " pod="openstack/barbican-worker-665fb988d9-9wrmb" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.222543 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8d0d587-cbf6-4c67-9d08-297250f6c5e5-combined-ca-bundle\") pod \"keystone-67844b756c-gvtvn\" (UID: \"b8d0d587-cbf6-4c67-9d08-297250f6c5e5\") " pod="openstack/keystone-67844b756c-gvtvn" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.223698 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8d0d587-cbf6-4c67-9d08-297250f6c5e5-config-data\") pod \"keystone-67844b756c-gvtvn\" (UID: \"b8d0d587-cbf6-4c67-9d08-297250f6c5e5\") " pod="openstack/keystone-67844b756c-gvtvn" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.223956 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8d0d587-cbf6-4c67-9d08-297250f6c5e5-internal-tls-certs\") pod \"keystone-67844b756c-gvtvn\" (UID: \"b8d0d587-cbf6-4c67-9d08-297250f6c5e5\") " pod="openstack/keystone-67844b756c-gvtvn" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 
20:28:47.224765 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8d0d587-cbf6-4c67-9d08-297250f6c5e5-public-tls-certs\") pod \"keystone-67844b756c-gvtvn\" (UID: \"b8d0d587-cbf6-4c67-9d08-297250f6c5e5\") " pod="openstack/keystone-67844b756c-gvtvn" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.226907 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19c37996-a1c0-45b2-9ca0-0af72f831909-config-data\") pod \"placement-7ffdc87456-jd4rc\" (UID: \"19c37996-a1c0-45b2-9ca0-0af72f831909\") " pod="openstack/placement-7ffdc87456-jd4rc" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.227028 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b8d0d587-cbf6-4c67-9d08-297250f6c5e5-credential-keys\") pod \"keystone-67844b756c-gvtvn\" (UID: \"b8d0d587-cbf6-4c67-9d08-297250f6c5e5\") " pod="openstack/keystone-67844b756c-gvtvn" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.228636 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/19c37996-a1c0-45b2-9ca0-0af72f831909-public-tls-certs\") pod \"placement-7ffdc87456-jd4rc\" (UID: \"19c37996-a1c0-45b2-9ca0-0af72f831909\") " pod="openstack/placement-7ffdc87456-jd4rc" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.229016 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f3094bb-a30e-433f-b167-9a753260191a-combined-ca-bundle\") pod \"barbican-worker-665fb988d9-9wrmb\" (UID: \"2f3094bb-a30e-433f-b167-9a753260191a\") " pod="openstack/barbican-worker-665fb988d9-9wrmb" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.232442 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-pxsgq\" (UniqueName: \"kubernetes.io/projected/b8d0d587-cbf6-4c67-9d08-297250f6c5e5-kube-api-access-pxsgq\") pod \"keystone-67844b756c-gvtvn\" (UID: \"b8d0d587-cbf6-4c67-9d08-297250f6c5e5\") " pod="openstack/keystone-67844b756c-gvtvn" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.263885 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/19c37996-a1c0-45b2-9ca0-0af72f831909-internal-tls-certs\") pod \"placement-7ffdc87456-jd4rc\" (UID: \"19c37996-a1c0-45b2-9ca0-0af72f831909\") " pod="openstack/placement-7ffdc87456-jd4rc" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.264821 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvgzw\" (UniqueName: \"kubernetes.io/projected/19c37996-a1c0-45b2-9ca0-0af72f831909-kube-api-access-pvgzw\") pod \"placement-7ffdc87456-jd4rc\" (UID: \"19c37996-a1c0-45b2-9ca0-0af72f831909\") " pod="openstack/placement-7ffdc87456-jd4rc" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.264959 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8d0d587-cbf6-4c67-9d08-297250f6c5e5-scripts\") pod \"keystone-67844b756c-gvtvn\" (UID: \"b8d0d587-cbf6-4c67-9d08-297250f6c5e5\") " pod="openstack/keystone-67844b756c-gvtvn" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.265422 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19c37996-a1c0-45b2-9ca0-0af72f831909-combined-ca-bundle\") pod \"placement-7ffdc87456-jd4rc\" (UID: \"19c37996-a1c0-45b2-9ca0-0af72f831909\") " pod="openstack/placement-7ffdc87456-jd4rc" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.265652 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19c37996-a1c0-45b2-9ca0-0af72f831909-scripts\") 
pod \"placement-7ffdc87456-jd4rc\" (UID: \"19c37996-a1c0-45b2-9ca0-0af72f831909\") " pod="openstack/placement-7ffdc87456-jd4rc" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.268733 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7ffdc87456-jd4rc" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.269793 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-67844b756c-gvtvn" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.284060 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39808c45-50a7-4712-92b4-8e962f2672d1-combined-ca-bundle\") pod \"barbican-keystone-listener-9ddd6f8d6-rrvh5\" (UID: \"39808c45-50a7-4712-92b4-8e962f2672d1\") " pod="openstack/barbican-keystone-listener-9ddd6f8d6-rrvh5" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.284205 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39808c45-50a7-4712-92b4-8e962f2672d1-config-data-custom\") pod \"barbican-keystone-listener-9ddd6f8d6-rrvh5\" (UID: \"39808c45-50a7-4712-92b4-8e962f2672d1\") " pod="openstack/barbican-keystone-listener-9ddd6f8d6-rrvh5" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.284373 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2qw7\" (UniqueName: \"kubernetes.io/projected/39808c45-50a7-4712-92b4-8e962f2672d1-kube-api-access-h2qw7\") pod \"barbican-keystone-listener-9ddd6f8d6-rrvh5\" (UID: \"39808c45-50a7-4712-92b4-8e962f2672d1\") " pod="openstack/barbican-keystone-listener-9ddd6f8d6-rrvh5" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.284451 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/39808c45-50a7-4712-92b4-8e962f2672d1-logs\") pod \"barbican-keystone-listener-9ddd6f8d6-rrvh5\" (UID: \"39808c45-50a7-4712-92b4-8e962f2672d1\") " pod="openstack/barbican-keystone-listener-9ddd6f8d6-rrvh5" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.284504 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39808c45-50a7-4712-92b4-8e962f2672d1-config-data\") pod \"barbican-keystone-listener-9ddd6f8d6-rrvh5\" (UID: \"39808c45-50a7-4712-92b4-8e962f2672d1\") " pod="openstack/barbican-keystone-listener-9ddd6f8d6-rrvh5" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.285588 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdrpv\" (UniqueName: \"kubernetes.io/projected/2f3094bb-a30e-433f-b167-9a753260191a-kube-api-access-tdrpv\") pod \"barbican-worker-665fb988d9-9wrmb\" (UID: \"2f3094bb-a30e-433f-b167-9a753260191a\") " pod="openstack/barbican-worker-665fb988d9-9wrmb" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.287912 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39808c45-50a7-4712-92b4-8e962f2672d1-logs\") pod \"barbican-keystone-listener-9ddd6f8d6-rrvh5\" (UID: \"39808c45-50a7-4712-92b4-8e962f2672d1\") " pod="openstack/barbican-keystone-listener-9ddd6f8d6-rrvh5" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.301134 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39808c45-50a7-4712-92b4-8e962f2672d1-config-data-custom\") pod \"barbican-keystone-listener-9ddd6f8d6-rrvh5\" (UID: \"39808c45-50a7-4712-92b4-8e962f2672d1\") " pod="openstack/barbican-keystone-listener-9ddd6f8d6-rrvh5" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.309475 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39808c45-50a7-4712-92b4-8e962f2672d1-combined-ca-bundle\") pod \"barbican-keystone-listener-9ddd6f8d6-rrvh5\" (UID: \"39808c45-50a7-4712-92b4-8e962f2672d1\") " pod="openstack/barbican-keystone-listener-9ddd6f8d6-rrvh5" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.327509 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-67844b756c-gvtvn"] Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.345842 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2qw7\" (UniqueName: \"kubernetes.io/projected/39808c45-50a7-4712-92b4-8e962f2672d1-kube-api-access-h2qw7\") pod \"barbican-keystone-listener-9ddd6f8d6-rrvh5\" (UID: \"39808c45-50a7-4712-92b4-8e962f2672d1\") " pod="openstack/barbican-keystone-listener-9ddd6f8d6-rrvh5" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.362501 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39808c45-50a7-4712-92b4-8e962f2672d1-config-data\") pod \"barbican-keystone-listener-9ddd6f8d6-rrvh5\" (UID: \"39808c45-50a7-4712-92b4-8e962f2672d1\") " pod="openstack/barbican-keystone-listener-9ddd6f8d6-rrvh5" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.379368 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-v7ttl"] Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.383465 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-v7ttl" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.397669 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c33e0a9-6682-4462-bb72-e4d257a09450-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-v7ttl\" (UID: \"8c33e0a9-6682-4462-bb72-e4d257a09450\") " pod="openstack/dnsmasq-dns-688c87cc99-v7ttl" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.398314 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fltjp\" (UniqueName: \"kubernetes.io/projected/8c33e0a9-6682-4462-bb72-e4d257a09450-kube-api-access-fltjp\") pod \"dnsmasq-dns-688c87cc99-v7ttl\" (UID: \"8c33e0a9-6682-4462-bb72-e4d257a09450\") " pod="openstack/dnsmasq-dns-688c87cc99-v7ttl" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.398610 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c33e0a9-6682-4462-bb72-e4d257a09450-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-v7ttl\" (UID: \"8c33e0a9-6682-4462-bb72-e4d257a09450\") " pod="openstack/dnsmasq-dns-688c87cc99-v7ttl" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.399035 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c33e0a9-6682-4462-bb72-e4d257a09450-dns-svc\") pod \"dnsmasq-dns-688c87cc99-v7ttl\" (UID: \"8c33e0a9-6682-4462-bb72-e4d257a09450\") " pod="openstack/dnsmasq-dns-688c87cc99-v7ttl" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.399379 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c33e0a9-6682-4462-bb72-e4d257a09450-ovsdbserver-nb\") pod 
\"dnsmasq-dns-688c87cc99-v7ttl\" (UID: \"8c33e0a9-6682-4462-bb72-e4d257a09450\") " pod="openstack/dnsmasq-dns-688c87cc99-v7ttl" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.399521 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c33e0a9-6682-4462-bb72-e4d257a09450-config\") pod \"dnsmasq-dns-688c87cc99-v7ttl\" (UID: \"8c33e0a9-6682-4462-bb72-e4d257a09450\") " pod="openstack/dnsmasq-dns-688c87cc99-v7ttl" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.491601 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-665fb988d9-9wrmb" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.501352 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-v7ttl"] Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.502038 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c33e0a9-6682-4462-bb72-e4d257a09450-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-v7ttl\" (UID: \"8c33e0a9-6682-4462-bb72-e4d257a09450\") " pod="openstack/dnsmasq-dns-688c87cc99-v7ttl" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.502098 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c33e0a9-6682-4462-bb72-e4d257a09450-config\") pod \"dnsmasq-dns-688c87cc99-v7ttl\" (UID: \"8c33e0a9-6682-4462-bb72-e4d257a09450\") " pod="openstack/dnsmasq-dns-688c87cc99-v7ttl" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.502191 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c33e0a9-6682-4462-bb72-e4d257a09450-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-v7ttl\" (UID: \"8c33e0a9-6682-4462-bb72-e4d257a09450\") " 
pod="openstack/dnsmasq-dns-688c87cc99-v7ttl" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.502264 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fltjp\" (UniqueName: \"kubernetes.io/projected/8c33e0a9-6682-4462-bb72-e4d257a09450-kube-api-access-fltjp\") pod \"dnsmasq-dns-688c87cc99-v7ttl\" (UID: \"8c33e0a9-6682-4462-bb72-e4d257a09450\") " pod="openstack/dnsmasq-dns-688c87cc99-v7ttl" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.502323 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c33e0a9-6682-4462-bb72-e4d257a09450-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-v7ttl\" (UID: \"8c33e0a9-6682-4462-bb72-e4d257a09450\") " pod="openstack/dnsmasq-dns-688c87cc99-v7ttl" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.502365 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c33e0a9-6682-4462-bb72-e4d257a09450-dns-svc\") pod \"dnsmasq-dns-688c87cc99-v7ttl\" (UID: \"8c33e0a9-6682-4462-bb72-e4d257a09450\") " pod="openstack/dnsmasq-dns-688c87cc99-v7ttl" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.503390 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c33e0a9-6682-4462-bb72-e4d257a09450-dns-svc\") pod \"dnsmasq-dns-688c87cc99-v7ttl\" (UID: \"8c33e0a9-6682-4462-bb72-e4d257a09450\") " pod="openstack/dnsmasq-dns-688c87cc99-v7ttl" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.504067 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c33e0a9-6682-4462-bb72-e4d257a09450-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-v7ttl\" (UID: \"8c33e0a9-6682-4462-bb72-e4d257a09450\") " pod="openstack/dnsmasq-dns-688c87cc99-v7ttl" Jan 05 20:28:47 crc 
kubenswrapper[4754]: I0105 20:28:47.504250 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c33e0a9-6682-4462-bb72-e4d257a09450-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-v7ttl\" (UID: \"8c33e0a9-6682-4462-bb72-e4d257a09450\") " pod="openstack/dnsmasq-dns-688c87cc99-v7ttl" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.504829 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c33e0a9-6682-4462-bb72-e4d257a09450-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-v7ttl\" (UID: \"8c33e0a9-6682-4462-bb72-e4d257a09450\") " pod="openstack/dnsmasq-dns-688c87cc99-v7ttl" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.506499 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c33e0a9-6682-4462-bb72-e4d257a09450-config\") pod \"dnsmasq-dns-688c87cc99-v7ttl\" (UID: \"8c33e0a9-6682-4462-bb72-e4d257a09450\") " pod="openstack/dnsmasq-dns-688c87cc99-v7ttl" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.541337 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fltjp\" (UniqueName: \"kubernetes.io/projected/8c33e0a9-6682-4462-bb72-e4d257a09450-kube-api-access-fltjp\") pod \"dnsmasq-dns-688c87cc99-v7ttl\" (UID: \"8c33e0a9-6682-4462-bb72-e4d257a09450\") " pod="openstack/dnsmasq-dns-688c87cc99-v7ttl" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.566415 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-69c489cbf4-npzvh"] Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.569578 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-69c489cbf4-npzvh" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.579358 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-69c489cbf4-npzvh"] Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.600741 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-9ddd6f8d6-rrvh5" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.601080 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.641340 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8" path="/var/lib/kubelet/pods/6a2d4eb0-d05b-4443-aaa9-2a3c4d3619f8/volumes" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.709065 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9a760b6b-c03a-450c-a624-a918f6f28a8c-config-data-custom\") pod \"barbican-api-69c489cbf4-npzvh\" (UID: \"9a760b6b-c03a-450c-a624-a918f6f28a8c\") " pod="openstack/barbican-api-69c489cbf4-npzvh" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.709104 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a760b6b-c03a-450c-a624-a918f6f28a8c-logs\") pod \"barbican-api-69c489cbf4-npzvh\" (UID: \"9a760b6b-c03a-450c-a624-a918f6f28a8c\") " pod="openstack/barbican-api-69c489cbf4-npzvh" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.709268 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gmqx\" (UniqueName: \"kubernetes.io/projected/9a760b6b-c03a-450c-a624-a918f6f28a8c-kube-api-access-7gmqx\") pod \"barbican-api-69c489cbf4-npzvh\" (UID: 
\"9a760b6b-c03a-450c-a624-a918f6f28a8c\") " pod="openstack/barbican-api-69c489cbf4-npzvh" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.709306 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a760b6b-c03a-450c-a624-a918f6f28a8c-config-data\") pod \"barbican-api-69c489cbf4-npzvh\" (UID: \"9a760b6b-c03a-450c-a624-a918f6f28a8c\") " pod="openstack/barbican-api-69c489cbf4-npzvh" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.709347 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a760b6b-c03a-450c-a624-a918f6f28a8c-combined-ca-bundle\") pod \"barbican-api-69c489cbf4-npzvh\" (UID: \"9a760b6b-c03a-450c-a624-a918f6f28a8c\") " pod="openstack/barbican-api-69c489cbf4-npzvh" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.715360 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-v7ttl" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.811767 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gmqx\" (UniqueName: \"kubernetes.io/projected/9a760b6b-c03a-450c-a624-a918f6f28a8c-kube-api-access-7gmqx\") pod \"barbican-api-69c489cbf4-npzvh\" (UID: \"9a760b6b-c03a-450c-a624-a918f6f28a8c\") " pod="openstack/barbican-api-69c489cbf4-npzvh" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.812246 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a760b6b-c03a-450c-a624-a918f6f28a8c-config-data\") pod \"barbican-api-69c489cbf4-npzvh\" (UID: \"9a760b6b-c03a-450c-a624-a918f6f28a8c\") " pod="openstack/barbican-api-69c489cbf4-npzvh" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.812314 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a760b6b-c03a-450c-a624-a918f6f28a8c-combined-ca-bundle\") pod \"barbican-api-69c489cbf4-npzvh\" (UID: \"9a760b6b-c03a-450c-a624-a918f6f28a8c\") " pod="openstack/barbican-api-69c489cbf4-npzvh" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.812429 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9a760b6b-c03a-450c-a624-a918f6f28a8c-config-data-custom\") pod \"barbican-api-69c489cbf4-npzvh\" (UID: \"9a760b6b-c03a-450c-a624-a918f6f28a8c\") " pod="openstack/barbican-api-69c489cbf4-npzvh" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.812462 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a760b6b-c03a-450c-a624-a918f6f28a8c-logs\") pod \"barbican-api-69c489cbf4-npzvh\" (UID: \"9a760b6b-c03a-450c-a624-a918f6f28a8c\") " 
pod="openstack/barbican-api-69c489cbf4-npzvh" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.813121 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a760b6b-c03a-450c-a624-a918f6f28a8c-logs\") pod \"barbican-api-69c489cbf4-npzvh\" (UID: \"9a760b6b-c03a-450c-a624-a918f6f28a8c\") " pod="openstack/barbican-api-69c489cbf4-npzvh" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.938464 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9a760b6b-c03a-450c-a624-a918f6f28a8c-config-data-custom\") pod \"barbican-api-69c489cbf4-npzvh\" (UID: \"9a760b6b-c03a-450c-a624-a918f6f28a8c\") " pod="openstack/barbican-api-69c489cbf4-npzvh" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.946418 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a760b6b-c03a-450c-a624-a918f6f28a8c-config-data\") pod \"barbican-api-69c489cbf4-npzvh\" (UID: \"9a760b6b-c03a-450c-a624-a918f6f28a8c\") " pod="openstack/barbican-api-69c489cbf4-npzvh" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.946926 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a760b6b-c03a-450c-a624-a918f6f28a8c-combined-ca-bundle\") pod \"barbican-api-69c489cbf4-npzvh\" (UID: \"9a760b6b-c03a-450c-a624-a918f6f28a8c\") " pod="openstack/barbican-api-69c489cbf4-npzvh" Jan 05 20:28:47 crc kubenswrapper[4754]: I0105 20:28:47.951637 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gmqx\" (UniqueName: \"kubernetes.io/projected/9a760b6b-c03a-450c-a624-a918f6f28a8c-kube-api-access-7gmqx\") pod \"barbican-api-69c489cbf4-npzvh\" (UID: \"9a760b6b-c03a-450c-a624-a918f6f28a8c\") " pod="openstack/barbican-api-69c489cbf4-npzvh" Jan 05 20:28:48 crc kubenswrapper[4754]: 
I0105 20:28:48.249719 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-69c489cbf4-npzvh"
Jan 05 20:28:48 crc kubenswrapper[4754]: I0105 20:28:48.534246 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-67844b756c-gvtvn"]
Jan 05 20:28:48 crc kubenswrapper[4754]: W0105 20:28:48.567743 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8d0d587_cbf6_4c67_9d08_297250f6c5e5.slice/crio-52838c58401635b82e8aa9525258d56a992ce0be0fc7336f552ae48e777a468e WatchSource:0}: Error finding container 52838c58401635b82e8aa9525258d56a992ce0be0fc7336f552ae48e777a468e: Status 404 returned error can't find the container with id 52838c58401635b82e8aa9525258d56a992ce0be0fc7336f552ae48e777a468e
Jan 05 20:28:48 crc kubenswrapper[4754]: I0105 20:28:48.988104 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-v7ttl"]
Jan 05 20:28:49 crc kubenswrapper[4754]: I0105 20:28:49.042384 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-665fb988d9-9wrmb"]
Jan 05 20:28:49 crc kubenswrapper[4754]: I0105 20:28:49.058654 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-9ddd6f8d6-rrvh5"]
Jan 05 20:28:49 crc kubenswrapper[4754]: I0105 20:28:49.077777 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7ffdc87456-jd4rc"]
Jan 05 20:28:49 crc kubenswrapper[4754]: W0105 20:28:49.257302 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a760b6b_c03a_450c_a624_a918f6f28a8c.slice/crio-a79101fa067c51c3840b0d804254daa4635c81cebfe58addb07e6b0eeb25e286 WatchSource:0}: Error finding container a79101fa067c51c3840b0d804254daa4635c81cebfe58addb07e6b0eeb25e286: Status 404 returned error can't find the container with id a79101fa067c51c3840b0d804254daa4635c81cebfe58addb07e6b0eeb25e286
Jan 05 20:28:49 crc kubenswrapper[4754]: I0105 20:28:49.293127 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-69c489cbf4-npzvh"]
Jan 05 20:28:49 crc kubenswrapper[4754]: I0105 20:28:49.643387 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-9ddd6f8d6-rrvh5" event={"ID":"39808c45-50a7-4712-92b4-8e962f2672d1","Type":"ContainerStarted","Data":"080924daf0d74f821cf05ad340fb9882e23519def6cf6a24a51d9d0e026420fa"}
Jan 05 20:28:49 crc kubenswrapper[4754]: I0105 20:28:49.670232 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-665fb988d9-9wrmb" event={"ID":"2f3094bb-a30e-433f-b167-9a753260191a","Type":"ContainerStarted","Data":"4e7372b53b4a8405693619cdaf467e1f0532d5471b2ce4d7bbbf9b75fea0f365"}
Jan 05 20:28:49 crc kubenswrapper[4754]: I0105 20:28:49.707492 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-69c489cbf4-npzvh" event={"ID":"9a760b6b-c03a-450c-a624-a918f6f28a8c","Type":"ContainerStarted","Data":"a79101fa067c51c3840b0d804254daa4635c81cebfe58addb07e6b0eeb25e286"}
Jan 05 20:28:49 crc kubenswrapper[4754]: I0105 20:28:49.730255 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-v7ttl" event={"ID":"8c33e0a9-6682-4462-bb72-e4d257a09450","Type":"ContainerStarted","Data":"8c2e8426f02b8cf8fcd5dd463caf0ade6ad6ff8677ae4d5587cc46d971e46d7c"}
Jan 05 20:28:49 crc kubenswrapper[4754]: I0105 20:28:49.740225 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7ffdc87456-jd4rc" event={"ID":"19c37996-a1c0-45b2-9ca0-0af72f831909","Type":"ContainerStarted","Data":"2dd2d2d8c21c893b6d00f436e4f96458af81893f69d6e9758d86b136faea2e4d"}
Jan 05 20:28:49 crc kubenswrapper[4754]: I0105 20:28:49.749499 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-67844b756c-gvtvn" event={"ID":"b8d0d587-cbf6-4c67-9d08-297250f6c5e5","Type":"ContainerStarted","Data":"ac677968dec4e9d4bafd40a1e2f16eab27f4f9a6b675d903e88369aa8ed3b447"}
Jan 05 20:28:49 crc kubenswrapper[4754]: I0105 20:28:49.749565 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-67844b756c-gvtvn" event={"ID":"b8d0d587-cbf6-4c67-9d08-297250f6c5e5","Type":"ContainerStarted","Data":"52838c58401635b82e8aa9525258d56a992ce0be0fc7336f552ae48e777a468e"}
Jan 05 20:28:49 crc kubenswrapper[4754]: I0105 20:28:49.750490 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-67844b756c-gvtvn"
Jan 05 20:28:49 crc kubenswrapper[4754]: I0105 20:28:49.793108 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-67844b756c-gvtvn" podStartSLOduration=3.793080022 podStartE2EDuration="3.793080022s" podCreationTimestamp="2026-01-05 20:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:28:49.782615488 +0000 UTC m=+1416.491799362" watchObservedRunningTime="2026-01-05 20:28:49.793080022 +0000 UTC m=+1416.502263896"
Jan 05 20:28:50 crc kubenswrapper[4754]: I0105 20:28:50.777988 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-69c489cbf4-npzvh" event={"ID":"9a760b6b-c03a-450c-a624-a918f6f28a8c","Type":"ContainerStarted","Data":"bc4304f30c418c0efb44a1f23d05f3e2b5cf007eedb2f1c729ec21770dfa03c1"}
Jan 05 20:28:50 crc kubenswrapper[4754]: I0105 20:28:50.794973 4754 generic.go:334] "Generic (PLEG): container finished" podID="8c33e0a9-6682-4462-bb72-e4d257a09450" containerID="1b86205aaa484de6ee83c4a1f114950826b27f79ddec447bf4c6712597dd319d" exitCode=0
Jan 05 20:28:50 crc kubenswrapper[4754]: I0105 20:28:50.795358 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-v7ttl" event={"ID":"8c33e0a9-6682-4462-bb72-e4d257a09450","Type":"ContainerDied","Data":"1b86205aaa484de6ee83c4a1f114950826b27f79ddec447bf4c6712597dd319d"}
Jan 05 20:28:50 crc kubenswrapper[4754]: I0105 20:28:50.816269 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7ffdc87456-jd4rc" event={"ID":"19c37996-a1c0-45b2-9ca0-0af72f831909","Type":"ContainerStarted","Data":"a8e5b36a9d18237e511cb3ff8b5c8462c4c96e59d348e7a96facfbd0e9af0cfa"}
Jan 05 20:28:51 crc kubenswrapper[4754]: I0105 20:28:51.032265 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7cbd467bc4-dt25q"]
Jan 05 20:28:51 crc kubenswrapper[4754]: I0105 20:28:51.038421 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7cbd467bc4-dt25q"
Jan 05 20:28:51 crc kubenswrapper[4754]: I0105 20:28:51.042776 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc"
Jan 05 20:28:51 crc kubenswrapper[4754]: I0105 20:28:51.043492 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc"
Jan 05 20:28:51 crc kubenswrapper[4754]: I0105 20:28:51.088693 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7cbd467bc4-dt25q"]
Jan 05 20:28:51 crc kubenswrapper[4754]: I0105 20:28:51.179535 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2516e3b9-fbb3-4341-ba30-837bc79225aa-config-data\") pod \"barbican-api-7cbd467bc4-dt25q\" (UID: \"2516e3b9-fbb3-4341-ba30-837bc79225aa\") " pod="openstack/barbican-api-7cbd467bc4-dt25q"
Jan 05 20:28:51 crc kubenswrapper[4754]: I0105 20:28:51.179626 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2516e3b9-fbb3-4341-ba30-837bc79225aa-logs\") pod \"barbican-api-7cbd467bc4-dt25q\" (UID: \"2516e3b9-fbb3-4341-ba30-837bc79225aa\") " pod="openstack/barbican-api-7cbd467bc4-dt25q"
Jan 05 20:28:51 crc kubenswrapper[4754]: I0105 20:28:51.179677 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2516e3b9-fbb3-4341-ba30-837bc79225aa-public-tls-certs\") pod \"barbican-api-7cbd467bc4-dt25q\" (UID: \"2516e3b9-fbb3-4341-ba30-837bc79225aa\") " pod="openstack/barbican-api-7cbd467bc4-dt25q"
Jan 05 20:28:51 crc kubenswrapper[4754]: I0105 20:28:51.179781 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2516e3b9-fbb3-4341-ba30-837bc79225aa-combined-ca-bundle\") pod \"barbican-api-7cbd467bc4-dt25q\" (UID: \"2516e3b9-fbb3-4341-ba30-837bc79225aa\") " pod="openstack/barbican-api-7cbd467bc4-dt25q"
Jan 05 20:28:51 crc kubenswrapper[4754]: I0105 20:28:51.179820 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqh5m\" (UniqueName: \"kubernetes.io/projected/2516e3b9-fbb3-4341-ba30-837bc79225aa-kube-api-access-qqh5m\") pod \"barbican-api-7cbd467bc4-dt25q\" (UID: \"2516e3b9-fbb3-4341-ba30-837bc79225aa\") " pod="openstack/barbican-api-7cbd467bc4-dt25q"
Jan 05 20:28:51 crc kubenswrapper[4754]: I0105 20:28:51.179888 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2516e3b9-fbb3-4341-ba30-837bc79225aa-config-data-custom\") pod \"barbican-api-7cbd467bc4-dt25q\" (UID: \"2516e3b9-fbb3-4341-ba30-837bc79225aa\") " pod="openstack/barbican-api-7cbd467bc4-dt25q"
Jan 05 20:28:51 crc kubenswrapper[4754]: I0105 20:28:51.179967 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2516e3b9-fbb3-4341-ba30-837bc79225aa-internal-tls-certs\") pod \"barbican-api-7cbd467bc4-dt25q\" (UID: \"2516e3b9-fbb3-4341-ba30-837bc79225aa\") " pod="openstack/barbican-api-7cbd467bc4-dt25q"
Jan 05 20:28:51 crc kubenswrapper[4754]: I0105 20:28:51.282837 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2516e3b9-fbb3-4341-ba30-837bc79225aa-combined-ca-bundle\") pod \"barbican-api-7cbd467bc4-dt25q\" (UID: \"2516e3b9-fbb3-4341-ba30-837bc79225aa\") " pod="openstack/barbican-api-7cbd467bc4-dt25q"
Jan 05 20:28:51 crc kubenswrapper[4754]: I0105 20:28:51.282925 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqh5m\" (UniqueName: \"kubernetes.io/projected/2516e3b9-fbb3-4341-ba30-837bc79225aa-kube-api-access-qqh5m\") pod \"barbican-api-7cbd467bc4-dt25q\" (UID: \"2516e3b9-fbb3-4341-ba30-837bc79225aa\") " pod="openstack/barbican-api-7cbd467bc4-dt25q"
Jan 05 20:28:51 crc kubenswrapper[4754]: I0105 20:28:51.283010 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2516e3b9-fbb3-4341-ba30-837bc79225aa-config-data-custom\") pod \"barbican-api-7cbd467bc4-dt25q\" (UID: \"2516e3b9-fbb3-4341-ba30-837bc79225aa\") " pod="openstack/barbican-api-7cbd467bc4-dt25q"
Jan 05 20:28:51 crc kubenswrapper[4754]: I0105 20:28:51.283082 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2516e3b9-fbb3-4341-ba30-837bc79225aa-internal-tls-certs\") pod \"barbican-api-7cbd467bc4-dt25q\" (UID: \"2516e3b9-fbb3-4341-ba30-837bc79225aa\") " pod="openstack/barbican-api-7cbd467bc4-dt25q"
Jan 05 20:28:51 crc kubenswrapper[4754]: I0105 20:28:51.283195 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2516e3b9-fbb3-4341-ba30-837bc79225aa-config-data\") pod \"barbican-api-7cbd467bc4-dt25q\" (UID: \"2516e3b9-fbb3-4341-ba30-837bc79225aa\") " pod="openstack/barbican-api-7cbd467bc4-dt25q"
Jan 05 20:28:51 crc kubenswrapper[4754]: I0105 20:28:51.283236 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2516e3b9-fbb3-4341-ba30-837bc79225aa-logs\") pod \"barbican-api-7cbd467bc4-dt25q\" (UID: \"2516e3b9-fbb3-4341-ba30-837bc79225aa\") " pod="openstack/barbican-api-7cbd467bc4-dt25q"
Jan 05 20:28:51 crc kubenswrapper[4754]: I0105 20:28:51.283272 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2516e3b9-fbb3-4341-ba30-837bc79225aa-public-tls-certs\") pod \"barbican-api-7cbd467bc4-dt25q\" (UID: \"2516e3b9-fbb3-4341-ba30-837bc79225aa\") " pod="openstack/barbican-api-7cbd467bc4-dt25q"
Jan 05 20:28:51 crc kubenswrapper[4754]: I0105 20:28:51.284589 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2516e3b9-fbb3-4341-ba30-837bc79225aa-logs\") pod \"barbican-api-7cbd467bc4-dt25q\" (UID: \"2516e3b9-fbb3-4341-ba30-837bc79225aa\") " pod="openstack/barbican-api-7cbd467bc4-dt25q"
Jan 05 20:28:51 crc kubenswrapper[4754]: I0105 20:28:51.292033 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2516e3b9-fbb3-4341-ba30-837bc79225aa-internal-tls-certs\") pod \"barbican-api-7cbd467bc4-dt25q\" (UID: \"2516e3b9-fbb3-4341-ba30-837bc79225aa\") " pod="openstack/barbican-api-7cbd467bc4-dt25q"
Jan 05 20:28:51 crc kubenswrapper[4754]: I0105 20:28:51.296250 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2516e3b9-fbb3-4341-ba30-837bc79225aa-combined-ca-bundle\") pod \"barbican-api-7cbd467bc4-dt25q\" (UID: \"2516e3b9-fbb3-4341-ba30-837bc79225aa\") " pod="openstack/barbican-api-7cbd467bc4-dt25q"
Jan 05 20:28:51 crc kubenswrapper[4754]: I0105 20:28:51.298477 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2516e3b9-fbb3-4341-ba30-837bc79225aa-public-tls-certs\") pod \"barbican-api-7cbd467bc4-dt25q\" (UID: \"2516e3b9-fbb3-4341-ba30-837bc79225aa\") " pod="openstack/barbican-api-7cbd467bc4-dt25q"
Jan 05 20:28:51 crc kubenswrapper[4754]: I0105 20:28:51.299853 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2516e3b9-fbb3-4341-ba30-837bc79225aa-config-data-custom\") pod \"barbican-api-7cbd467bc4-dt25q\" (UID: \"2516e3b9-fbb3-4341-ba30-837bc79225aa\") " pod="openstack/barbican-api-7cbd467bc4-dt25q"
Jan 05 20:28:51 crc kubenswrapper[4754]: I0105 20:28:51.301236 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2516e3b9-fbb3-4341-ba30-837bc79225aa-config-data\") pod \"barbican-api-7cbd467bc4-dt25q\" (UID: \"2516e3b9-fbb3-4341-ba30-837bc79225aa\") " pod="openstack/barbican-api-7cbd467bc4-dt25q"
Jan 05 20:28:51 crc kubenswrapper[4754]: I0105 20:28:51.302738 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqh5m\" (UniqueName: \"kubernetes.io/projected/2516e3b9-fbb3-4341-ba30-837bc79225aa-kube-api-access-qqh5m\") pod \"barbican-api-7cbd467bc4-dt25q\" (UID: \"2516e3b9-fbb3-4341-ba30-837bc79225aa\") " pod="openstack/barbican-api-7cbd467bc4-dt25q"
Jan 05 20:28:51 crc kubenswrapper[4754]: I0105 20:28:51.432645 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7cbd467bc4-dt25q"
Jan 05 20:28:51 crc kubenswrapper[4754]: I0105 20:28:51.839599 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7ffdc87456-jd4rc" event={"ID":"19c37996-a1c0-45b2-9ca0-0af72f831909","Type":"ContainerStarted","Data":"97d4fbad2e2076c8103452b07fbcca4b8f48e302e03f580b4af53fce5c541c23"}
Jan 05 20:28:51 crc kubenswrapper[4754]: I0105 20:28:51.839748 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7ffdc87456-jd4rc"
Jan 05 20:28:51 crc kubenswrapper[4754]: I0105 20:28:51.844881 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-p84n5" event={"ID":"c19555bb-f08e-4ff7-a6d3-26615858d3f3","Type":"ContainerStarted","Data":"51ccfeb802b35dbc949d2d9d994ce5e07e4400bfcd8904b5a9a5bd5a712601d2"}
Jan 05 20:28:51 crc kubenswrapper[4754]: I0105 20:28:51.848484 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-vpqrq" event={"ID":"3521f408-9c1b-440b-b7c1-fdc7058f9eb3","Type":"ContainerStarted","Data":"40f0c71f38136a569a074d31131ae837769b6614ed31098d2001a5fa7a9aea06"}
Jan 05 20:28:51 crc kubenswrapper[4754]: I0105 20:28:51.868658 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-69c489cbf4-npzvh" event={"ID":"9a760b6b-c03a-450c-a624-a918f6f28a8c","Type":"ContainerStarted","Data":"467ed58cd929a0332540b09c263286f673b80ba5249574b7caca9e1c38200f3e"}
Jan 05 20:28:51 crc kubenswrapper[4754]: I0105 20:28:51.868742 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-69c489cbf4-npzvh"
Jan 05 20:28:51 crc kubenswrapper[4754]: I0105 20:28:51.868839 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-69c489cbf4-npzvh"
Jan 05 20:28:51 crc kubenswrapper[4754]: I0105 20:28:51.871088 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7ffdc87456-jd4rc" podStartSLOduration=5.871067328 podStartE2EDuration="5.871067328s" podCreationTimestamp="2026-01-05 20:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:28:51.861716553 +0000 UTC m=+1418.570900427" watchObservedRunningTime="2026-01-05 20:28:51.871067328 +0000 UTC m=+1418.580251202"
Jan 05 20:28:51 crc kubenswrapper[4754]: I0105 20:28:51.880022 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-v7ttl" event={"ID":"8c33e0a9-6682-4462-bb72-e4d257a09450","Type":"ContainerStarted","Data":"8ddadd8434f720ca960ece5aadd7630b23d833289603abb05539b3588a004367"}
Jan 05 20:28:51 crc kubenswrapper[4754]: I0105 20:28:51.881038 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-688c87cc99-v7ttl"
Jan 05 20:28:51 crc kubenswrapper[4754]: I0105 20:28:51.911942 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-vpqrq" podStartSLOduration=4.205136385 podStartE2EDuration="56.911924138s" podCreationTimestamp="2026-01-05 20:27:55 +0000 UTC" firstStartedPulling="2026-01-05 20:27:57.676083027 +0000 UTC m=+1364.385266901" lastFinishedPulling="2026-01-05 20:28:50.38287077 +0000 UTC m=+1417.092054654" observedRunningTime="2026-01-05 20:28:51.882665592 +0000 UTC m=+1418.591849466" watchObservedRunningTime="2026-01-05 20:28:51.911924138 +0000 UTC m=+1418.621108012"
Jan 05 20:28:51 crc kubenswrapper[4754]: I0105 20:28:51.922645 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-p84n5" podStartSLOduration=4.718090213 podStartE2EDuration="55.922575017s" podCreationTimestamp="2026-01-05 20:27:56 +0000 UTC" firstStartedPulling="2026-01-05 20:27:57.971390732 +0000 UTC m=+1364.680574606" lastFinishedPulling="2026-01-05 20:28:49.175875536 +0000 UTC m=+1415.885059410" observedRunningTime="2026-01-05 20:28:51.899503503 +0000 UTC m=+1418.608687377" watchObservedRunningTime="2026-01-05 20:28:51.922575017 +0000 UTC m=+1418.631758891"
Jan 05 20:28:51 crc kubenswrapper[4754]: I0105 20:28:51.963622 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-688c87cc99-v7ttl" podStartSLOduration=4.963594372 podStartE2EDuration="4.963594372s" podCreationTimestamp="2026-01-05 20:28:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:28:51.920564185 +0000 UTC m=+1418.629748059" watchObservedRunningTime="2026-01-05 20:28:51.963594372 +0000 UTC m=+1418.672778246"
Jan 05 20:28:51 crc kubenswrapper[4754]: I0105 20:28:51.979201 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-69c489cbf4-npzvh" podStartSLOduration=4.9791811 podStartE2EDuration="4.9791811s" podCreationTimestamp="2026-01-05 20:28:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:28:51.937421856 +0000 UTC m=+1418.646605730" watchObservedRunningTime="2026-01-05 20:28:51.9791811 +0000 UTC m=+1418.688364974"
Jan 05 20:28:52 crc kubenswrapper[4754]: I0105 20:28:52.904435 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7ffdc87456-jd4rc"
Jan 05 20:28:53 crc kubenswrapper[4754]: I0105 20:28:53.418091 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7cbd467bc4-dt25q"]
Jan 05 20:28:53 crc kubenswrapper[4754]: W0105 20:28:53.420502 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2516e3b9_fbb3_4341_ba30_837bc79225aa.slice/crio-ea48d0eed9d9e4b28ddbdaaddcbeccf405e9235994bd8965e0a75cada5fb4673 WatchSource:0}: Error finding container ea48d0eed9d9e4b28ddbdaaddcbeccf405e9235994bd8965e0a75cada5fb4673: Status 404 returned error can't find the container with id ea48d0eed9d9e4b28ddbdaaddcbeccf405e9235994bd8965e0a75cada5fb4673
Jan 05 20:28:53 crc kubenswrapper[4754]: I0105 20:28:53.922309 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7cbd467bc4-dt25q" event={"ID":"2516e3b9-fbb3-4341-ba30-837bc79225aa","Type":"ContainerStarted","Data":"df896ed3450940b099297c2dc00fa6dbbbc605baed415bff76ca4c5c06448398"}
Jan 05 20:28:53 crc kubenswrapper[4754]: I0105 20:28:53.922741 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7cbd467bc4-dt25q" event={"ID":"2516e3b9-fbb3-4341-ba30-837bc79225aa","Type":"ContainerStarted","Data":"ea48d0eed9d9e4b28ddbdaaddcbeccf405e9235994bd8965e0a75cada5fb4673"}
Jan 05 20:28:53 crc kubenswrapper[4754]: I0105 20:28:53.930267 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-9ddd6f8d6-rrvh5" event={"ID":"39808c45-50a7-4712-92b4-8e962f2672d1","Type":"ContainerStarted","Data":"07e000330cdaa5363203e0ef46eda486d314748ab98a74483b09c1f098995255"}
Jan 05 20:28:53 crc kubenswrapper[4754]: I0105 20:28:53.930373 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-9ddd6f8d6-rrvh5" event={"ID":"39808c45-50a7-4712-92b4-8e962f2672d1","Type":"ContainerStarted","Data":"ea4e71a03c252736273769d8239835f047982edfd2bd93b86aeea3e0cba17bba"}
Jan 05 20:28:53 crc kubenswrapper[4754]: I0105 20:28:53.937668 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-665fb988d9-9wrmb" event={"ID":"2f3094bb-a30e-433f-b167-9a753260191a","Type":"ContainerStarted","Data":"a097489939c5b4c3f48d050debde871f7861285ea7d0ef75bd33fda12463527d"}
Jan 05 20:28:53 crc kubenswrapper[4754]: I0105 20:28:53.937734 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-665fb988d9-9wrmb" event={"ID":"2f3094bb-a30e-433f-b167-9a753260191a","Type":"ContainerStarted","Data":"fbf97fbb798fd64b0de1769b5a399f5d384102f2abb985734889750d4b4929da"}
Jan 05 20:28:53 crc kubenswrapper[4754]: I0105 20:28:53.958046 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-9ddd6f8d6-rrvh5" podStartSLOduration=4.249798754 podStartE2EDuration="7.958024808s" podCreationTimestamp="2026-01-05 20:28:46 +0000 UTC" firstStartedPulling="2026-01-05 20:28:49.178204257 +0000 UTC m=+1415.887388131" lastFinishedPulling="2026-01-05 20:28:52.886430311 +0000 UTC m=+1419.595614185" observedRunningTime="2026-01-05 20:28:53.951182359 +0000 UTC m=+1420.660366233" watchObservedRunningTime="2026-01-05 20:28:53.958024808 +0000 UTC m=+1420.667208682"
Jan 05 20:28:53 crc kubenswrapper[4754]: I0105 20:28:53.998227 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-665fb988d9-9wrmb" podStartSLOduration=4.278978699 podStartE2EDuration="7.998205891s" podCreationTimestamp="2026-01-05 20:28:46 +0000 UTC" firstStartedPulling="2026-01-05 20:28:49.166882031 +0000 UTC m=+1415.876065905" lastFinishedPulling="2026-01-05 20:28:52.886109223 +0000 UTC m=+1419.595293097" observedRunningTime="2026-01-05 20:28:53.986793722 +0000 UTC m=+1420.695977596" watchObservedRunningTime="2026-01-05 20:28:53.998205891 +0000 UTC m=+1420.707389765"
Jan 05 20:28:54 crc kubenswrapper[4754]: I0105 20:28:54.978522 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7cbd467bc4-dt25q" event={"ID":"2516e3b9-fbb3-4341-ba30-837bc79225aa","Type":"ContainerStarted","Data":"4a8e62e8e3bc0c2c6984a6ffe5b2ea93f0cdf8b6fde84221a21e2be6355b2319"}
Jan 05 20:28:54 crc kubenswrapper[4754]: I0105 20:28:54.979727 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7cbd467bc4-dt25q"
Jan 05 20:28:54 crc kubenswrapper[4754]: I0105 20:28:54.979763 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7cbd467bc4-dt25q"
Jan 05 20:28:55 crc kubenswrapper[4754]: I0105 20:28:55.037910 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7cbd467bc4-dt25q" podStartSLOduration=5.037875652 podStartE2EDuration="5.037875652s" podCreationTimestamp="2026-01-05 20:28:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:28:55.008277766 +0000 UTC m=+1421.717461640" watchObservedRunningTime="2026-01-05 20:28:55.037875652 +0000 UTC m=+1421.747059526"
Jan 05 20:28:57 crc kubenswrapper[4754]: I0105 20:28:57.717620 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-688c87cc99-v7ttl"
Jan 05 20:28:57 crc kubenswrapper[4754]: I0105 20:28:57.793712 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-wkf6w"]
Jan 05 20:28:57 crc kubenswrapper[4754]: I0105 20:28:57.794042 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ccc5c4795-wkf6w" podUID="69a9c7cb-a8f4-4099-a4b4-519cfca0a62e" containerName="dnsmasq-dns" containerID="cri-o://13dc15e223e55fee6944a7c78e56e5be146b36d9556913d5ec9a13b65fb81849" gracePeriod=10
Jan 05 20:28:58 crc kubenswrapper[4754]: I0105 20:28:58.061558 4754 generic.go:334] "Generic (PLEG): container finished" podID="69a9c7cb-a8f4-4099-a4b4-519cfca0a62e" containerID="13dc15e223e55fee6944a7c78e56e5be146b36d9556913d5ec9a13b65fb81849" exitCode=0
Jan 05 20:28:58 crc kubenswrapper[4754]: I0105 20:28:58.061659 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-wkf6w" event={"ID":"69a9c7cb-a8f4-4099-a4b4-519cfca0a62e","Type":"ContainerDied","Data":"13dc15e223e55fee6944a7c78e56e5be146b36d9556913d5ec9a13b65fb81849"}
Jan 05 20:28:58 crc kubenswrapper[4754]: I0105 20:28:58.075783 4754 generic.go:334] "Generic (PLEG): container finished" podID="3521f408-9c1b-440b-b7c1-fdc7058f9eb3" containerID="40f0c71f38136a569a074d31131ae837769b6614ed31098d2001a5fa7a9aea06" exitCode=0
Jan 05 20:28:58 crc kubenswrapper[4754]: I0105 20:28:58.075824 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-vpqrq" event={"ID":"3521f408-9c1b-440b-b7c1-fdc7058f9eb3","Type":"ContainerDied","Data":"40f0c71f38136a569a074d31131ae837769b6614ed31098d2001a5fa7a9aea06"}
Jan 05 20:28:58 crc kubenswrapper[4754]: I0105 20:28:58.128323 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5ccc5c4795-wkf6w" podUID="69a9c7cb-a8f4-4099-a4b4-519cfca0a62e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.193:5353: connect: connection refused"
Jan 05 20:29:00 crc kubenswrapper[4754]: I0105 20:29:00.107474 4754 generic.go:334] "Generic (PLEG): container finished" podID="c19555bb-f08e-4ff7-a6d3-26615858d3f3" containerID="51ccfeb802b35dbc949d2d9d994ce5e07e4400bfcd8904b5a9a5bd5a712601d2" exitCode=0
Jan 05 20:29:00 crc kubenswrapper[4754]: I0105 20:29:00.107590 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-p84n5" event={"ID":"c19555bb-f08e-4ff7-a6d3-26615858d3f3","Type":"ContainerDied","Data":"51ccfeb802b35dbc949d2d9d994ce5e07e4400bfcd8904b5a9a5bd5a712601d2"}
Jan 05 20:29:00 crc kubenswrapper[4754]: I0105 20:29:00.152132 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-69c489cbf4-npzvh"
Jan 05 20:29:00 crc kubenswrapper[4754]: I0105 20:29:00.186728 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-69c489cbf4-npzvh"
Jan 05 20:29:02 crc kubenswrapper[4754]: I0105 20:29:02.166305 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-p84n5" event={"ID":"c19555bb-f08e-4ff7-a6d3-26615858d3f3","Type":"ContainerDied","Data":"a960a1483a9f14c3dfcbf39837324c9d685b79a098ad9b37d907f916c78902fe"}
Jan 05 20:29:02 crc kubenswrapper[4754]: I0105 20:29:02.167099 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a960a1483a9f14c3dfcbf39837324c9d685b79a098ad9b37d907f916c78902fe"
Jan 05 20:29:02 crc kubenswrapper[4754]: I0105 20:29:02.191235 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-vpqrq" event={"ID":"3521f408-9c1b-440b-b7c1-fdc7058f9eb3","Type":"ContainerDied","Data":"55ec97cef70a75619b4f69ebccd1a66cf38981d41d5d7119e5bf09cd4f27f7d1"}
Jan 05 20:29:02 crc kubenswrapper[4754]: I0105 20:29:02.191283 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55ec97cef70a75619b4f69ebccd1a66cf38981d41d5d7119e5bf09cd4f27f7d1"
Jan 05 20:29:02 crc kubenswrapper[4754]: I0105 20:29:02.213126 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-vpqrq"
Jan 05 20:29:02 crc kubenswrapper[4754]: I0105 20:29:02.248377 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-p84n5"
Jan 05 20:29:02 crc kubenswrapper[4754]: I0105 20:29:02.319381 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3521f408-9c1b-440b-b7c1-fdc7058f9eb3-combined-ca-bundle\") pod \"3521f408-9c1b-440b-b7c1-fdc7058f9eb3\" (UID: \"3521f408-9c1b-440b-b7c1-fdc7058f9eb3\") "
Jan 05 20:29:02 crc kubenswrapper[4754]: I0105 20:29:02.319506 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3521f408-9c1b-440b-b7c1-fdc7058f9eb3-config-data\") pod \"3521f408-9c1b-440b-b7c1-fdc7058f9eb3\" (UID: \"3521f408-9c1b-440b-b7c1-fdc7058f9eb3\") "
Jan 05 20:29:02 crc kubenswrapper[4754]: I0105 20:29:02.319682 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hp24s\" (UniqueName: \"kubernetes.io/projected/3521f408-9c1b-440b-b7c1-fdc7058f9eb3-kube-api-access-hp24s\") pod \"3521f408-9c1b-440b-b7c1-fdc7058f9eb3\" (UID: \"3521f408-9c1b-440b-b7c1-fdc7058f9eb3\") "
Jan 05 20:29:02 crc kubenswrapper[4754]: I0105 20:29:02.349156 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3521f408-9c1b-440b-b7c1-fdc7058f9eb3-kube-api-access-hp24s" (OuterVolumeSpecName: "kube-api-access-hp24s") pod "3521f408-9c1b-440b-b7c1-fdc7058f9eb3" (UID: "3521f408-9c1b-440b-b7c1-fdc7058f9eb3"). InnerVolumeSpecName "kube-api-access-hp24s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 20:29:02 crc kubenswrapper[4754]: I0105 20:29:02.374409 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-wkf6w"
Jan 05 20:29:02 crc kubenswrapper[4754]: I0105 20:29:02.421986 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3521f408-9c1b-440b-b7c1-fdc7058f9eb3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3521f408-9c1b-440b-b7c1-fdc7058f9eb3" (UID: "3521f408-9c1b-440b-b7c1-fdc7058f9eb3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 20:29:02 crc kubenswrapper[4754]: I0105 20:29:02.427670 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c19555bb-f08e-4ff7-a6d3-26615858d3f3-scripts\") pod \"c19555bb-f08e-4ff7-a6d3-26615858d3f3\" (UID: \"c19555bb-f08e-4ff7-a6d3-26615858d3f3\") "
Jan 05 20:29:02 crc kubenswrapper[4754]: I0105 20:29:02.427819 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c19555bb-f08e-4ff7-a6d3-26615858d3f3-config-data\") pod \"c19555bb-f08e-4ff7-a6d3-26615858d3f3\" (UID: \"c19555bb-f08e-4ff7-a6d3-26615858d3f3\") "
Jan 05 20:29:02 crc kubenswrapper[4754]: I0105 20:29:02.428168 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c19555bb-f08e-4ff7-a6d3-26615858d3f3-db-sync-config-data\") pod \"c19555bb-f08e-4ff7-a6d3-26615858d3f3\" (UID: \"c19555bb-f08e-4ff7-a6d3-26615858d3f3\") "
Jan 05 20:29:02 crc kubenswrapper[4754]: I0105 20:29:02.428513 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c19555bb-f08e-4ff7-a6d3-26615858d3f3-etc-machine-id\") pod \"c19555bb-f08e-4ff7-a6d3-26615858d3f3\" (UID: \"c19555bb-f08e-4ff7-a6d3-26615858d3f3\") "
Jan 05 20:29:02 crc kubenswrapper[4754]: I0105 20:29:02.428639 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stlb6\" (UniqueName: \"kubernetes.io/projected/c19555bb-f08e-4ff7-a6d3-26615858d3f3-kube-api-access-stlb6\") pod \"c19555bb-f08e-4ff7-a6d3-26615858d3f3\" (UID: \"c19555bb-f08e-4ff7-a6d3-26615858d3f3\") "
Jan 05 20:29:02 crc kubenswrapper[4754]: I0105 20:29:02.428737 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c19555bb-f08e-4ff7-a6d3-26615858d3f3-combined-ca-bundle\") pod \"c19555bb-f08e-4ff7-a6d3-26615858d3f3\" (UID: \"c19555bb-f08e-4ff7-a6d3-26615858d3f3\") "
Jan 05 20:29:02 crc kubenswrapper[4754]: I0105 20:29:02.429245 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3521f408-9c1b-440b-b7c1-fdc7058f9eb3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 05 20:29:02 crc kubenswrapper[4754]: I0105 20:29:02.429925 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hp24s\" (UniqueName: \"kubernetes.io/projected/3521f408-9c1b-440b-b7c1-fdc7058f9eb3-kube-api-access-hp24s\") on node \"crc\" DevicePath \"\""
Jan 05 20:29:02 crc kubenswrapper[4754]: I0105 20:29:02.430585 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c19555bb-f08e-4ff7-a6d3-26615858d3f3-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c19555bb-f08e-4ff7-a6d3-26615858d3f3" (UID: "c19555bb-f08e-4ff7-a6d3-26615858d3f3"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 05 20:29:02 crc kubenswrapper[4754]: I0105 20:29:02.435084 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c19555bb-f08e-4ff7-a6d3-26615858d3f3-scripts" (OuterVolumeSpecName: "scripts") pod "c19555bb-f08e-4ff7-a6d3-26615858d3f3" (UID: "c19555bb-f08e-4ff7-a6d3-26615858d3f3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 20:29:02 crc kubenswrapper[4754]: I0105 20:29:02.439602 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c19555bb-f08e-4ff7-a6d3-26615858d3f3-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c19555bb-f08e-4ff7-a6d3-26615858d3f3" (UID: "c19555bb-f08e-4ff7-a6d3-26615858d3f3"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 20:29:02 crc kubenswrapper[4754]: I0105 20:29:02.440874 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c19555bb-f08e-4ff7-a6d3-26615858d3f3-kube-api-access-stlb6" (OuterVolumeSpecName: "kube-api-access-stlb6") pod "c19555bb-f08e-4ff7-a6d3-26615858d3f3" (UID: "c19555bb-f08e-4ff7-a6d3-26615858d3f3"). InnerVolumeSpecName "kube-api-access-stlb6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 20:29:02 crc kubenswrapper[4754]: E0105 20:29:02.446315 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="e4349961-956a-4a53-a1a5-11fcffcbd0f7"
Jan 05 20:29:02 crc kubenswrapper[4754]: I0105 20:29:02.485717 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c19555bb-f08e-4ff7-a6d3-26615858d3f3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c19555bb-f08e-4ff7-a6d3-26615858d3f3" (UID: "c19555bb-f08e-4ff7-a6d3-26615858d3f3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 20:29:02 crc kubenswrapper[4754]: I0105 20:29:02.511219 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c19555bb-f08e-4ff7-a6d3-26615858d3f3-config-data" (OuterVolumeSpecName: "config-data") pod "c19555bb-f08e-4ff7-a6d3-26615858d3f3" (UID: "c19555bb-f08e-4ff7-a6d3-26615858d3f3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 20:29:02 crc kubenswrapper[4754]: I0105 20:29:02.530974 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/69a9c7cb-a8f4-4099-a4b4-519cfca0a62e-dns-svc\") pod \"69a9c7cb-a8f4-4099-a4b4-519cfca0a62e\" (UID: \"69a9c7cb-a8f4-4099-a4b4-519cfca0a62e\") "
Jan 05 20:29:02 crc kubenswrapper[4754]: I0105 20:29:02.531103 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/69a9c7cb-a8f4-4099-a4b4-519cfca0a62e-ovsdbserver-sb\") pod \"69a9c7cb-a8f4-4099-a4b4-519cfca0a62e\" (UID: \"69a9c7cb-a8f4-4099-a4b4-519cfca0a62e\") "
Jan 05 20:29:02 crc kubenswrapper[4754]: I0105 20:29:02.531217 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/69a9c7cb-a8f4-4099-a4b4-519cfca0a62e-ovsdbserver-nb\") pod \"69a9c7cb-a8f4-4099-a4b4-519cfca0a62e\" (UID: \"69a9c7cb-a8f4-4099-a4b4-519cfca0a62e\") "
Jan 05 20:29:02 crc kubenswrapper[4754]: I0105 20:29:02.531267 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69a9c7cb-a8f4-4099-a4b4-519cfca0a62e-config\") pod \"69a9c7cb-a8f4-4099-a4b4-519cfca0a62e\" (UID: \"69a9c7cb-a8f4-4099-a4b4-519cfca0a62e\") "
Jan 05 20:29:02 crc kubenswrapper[4754]: I0105 20:29:02.531284 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume
started for volume \"kube-api-access-p8mt6\" (UniqueName: \"kubernetes.io/projected/69a9c7cb-a8f4-4099-a4b4-519cfca0a62e-kube-api-access-p8mt6\") pod \"69a9c7cb-a8f4-4099-a4b4-519cfca0a62e\" (UID: \"69a9c7cb-a8f4-4099-a4b4-519cfca0a62e\") " Jan 05 20:29:02 crc kubenswrapper[4754]: I0105 20:29:02.531374 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/69a9c7cb-a8f4-4099-a4b4-519cfca0a62e-dns-swift-storage-0\") pod \"69a9c7cb-a8f4-4099-a4b4-519cfca0a62e\" (UID: \"69a9c7cb-a8f4-4099-a4b4-519cfca0a62e\") " Jan 05 20:29:02 crc kubenswrapper[4754]: I0105 20:29:02.531821 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c19555bb-f08e-4ff7-a6d3-26615858d3f3-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:02 crc kubenswrapper[4754]: I0105 20:29:02.532201 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c19555bb-f08e-4ff7-a6d3-26615858d3f3-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:02 crc kubenswrapper[4754]: I0105 20:29:02.532217 4754 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c19555bb-f08e-4ff7-a6d3-26615858d3f3-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:02 crc kubenswrapper[4754]: I0105 20:29:02.532227 4754 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c19555bb-f08e-4ff7-a6d3-26615858d3f3-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:02 crc kubenswrapper[4754]: I0105 20:29:02.532236 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stlb6\" (UniqueName: \"kubernetes.io/projected/c19555bb-f08e-4ff7-a6d3-26615858d3f3-kube-api-access-stlb6\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:02 crc kubenswrapper[4754]: I0105 
20:29:02.532245 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c19555bb-f08e-4ff7-a6d3-26615858d3f3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:02 crc kubenswrapper[4754]: I0105 20:29:02.543439 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69a9c7cb-a8f4-4099-a4b4-519cfca0a62e-kube-api-access-p8mt6" (OuterVolumeSpecName: "kube-api-access-p8mt6") pod "69a9c7cb-a8f4-4099-a4b4-519cfca0a62e" (UID: "69a9c7cb-a8f4-4099-a4b4-519cfca0a62e"). InnerVolumeSpecName "kube-api-access-p8mt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:29:02 crc kubenswrapper[4754]: I0105 20:29:02.550392 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3521f408-9c1b-440b-b7c1-fdc7058f9eb3-config-data" (OuterVolumeSpecName: "config-data") pod "3521f408-9c1b-440b-b7c1-fdc7058f9eb3" (UID: "3521f408-9c1b-440b-b7c1-fdc7058f9eb3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:29:02 crc kubenswrapper[4754]: I0105 20:29:02.599299 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69a9c7cb-a8f4-4099-a4b4-519cfca0a62e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "69a9c7cb-a8f4-4099-a4b4-519cfca0a62e" (UID: "69a9c7cb-a8f4-4099-a4b4-519cfca0a62e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:29:02 crc kubenswrapper[4754]: I0105 20:29:02.622320 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69a9c7cb-a8f4-4099-a4b4-519cfca0a62e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "69a9c7cb-a8f4-4099-a4b4-519cfca0a62e" (UID: "69a9c7cb-a8f4-4099-a4b4-519cfca0a62e"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:29:02 crc kubenswrapper[4754]: I0105 20:29:02.623036 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69a9c7cb-a8f4-4099-a4b4-519cfca0a62e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "69a9c7cb-a8f4-4099-a4b4-519cfca0a62e" (UID: "69a9c7cb-a8f4-4099-a4b4-519cfca0a62e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:29:02 crc kubenswrapper[4754]: I0105 20:29:02.630145 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69a9c7cb-a8f4-4099-a4b4-519cfca0a62e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "69a9c7cb-a8f4-4099-a4b4-519cfca0a62e" (UID: "69a9c7cb-a8f4-4099-a4b4-519cfca0a62e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:29:02 crc kubenswrapper[4754]: I0105 20:29:02.637771 4754 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/69a9c7cb-a8f4-4099-a4b4-519cfca0a62e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:02 crc kubenswrapper[4754]: I0105 20:29:02.637818 4754 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/69a9c7cb-a8f4-4099-a4b4-519cfca0a62e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:02 crc kubenswrapper[4754]: I0105 20:29:02.637831 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8mt6\" (UniqueName: \"kubernetes.io/projected/69a9c7cb-a8f4-4099-a4b4-519cfca0a62e-kube-api-access-p8mt6\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:02 crc kubenswrapper[4754]: I0105 20:29:02.637845 4754 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/69a9c7cb-a8f4-4099-a4b4-519cfca0a62e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:02 crc kubenswrapper[4754]: I0105 20:29:02.637905 4754 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/69a9c7cb-a8f4-4099-a4b4-519cfca0a62e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:02 crc kubenswrapper[4754]: I0105 20:29:02.637924 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3521f408-9c1b-440b-b7c1-fdc7058f9eb3-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:02 crc kubenswrapper[4754]: I0105 20:29:02.639308 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69a9c7cb-a8f4-4099-a4b4-519cfca0a62e-config" (OuterVolumeSpecName: "config") pod "69a9c7cb-a8f4-4099-a4b4-519cfca0a62e" (UID: "69a9c7cb-a8f4-4099-a4b4-519cfca0a62e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:29:02 crc kubenswrapper[4754]: I0105 20:29:02.756057 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69a9c7cb-a8f4-4099-a4b4-519cfca0a62e-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:02 crc kubenswrapper[4754]: I0105 20:29:02.983789 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7cbd467bc4-dt25q" Jan 05 20:29:03 crc kubenswrapper[4754]: I0105 20:29:03.205611 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4349961-956a-4a53-a1a5-11fcffcbd0f7","Type":"ContainerStarted","Data":"dea3b16d5f1fb2450d7f47e6642bfba65e0758c7033ed6e5c03e0c0359ac7bd7"} Jan 05 20:29:03 crc kubenswrapper[4754]: I0105 20:29:03.206043 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e4349961-956a-4a53-a1a5-11fcffcbd0f7" 
containerName="ceilometer-notification-agent" containerID="cri-o://c077c60356d4c8453a24a113c03002f3ce4ea7be083f9569f55ee21f68e27fff" gracePeriod=30 Jan 05 20:29:03 crc kubenswrapper[4754]: I0105 20:29:03.206391 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 05 20:29:03 crc kubenswrapper[4754]: I0105 20:29:03.206674 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e4349961-956a-4a53-a1a5-11fcffcbd0f7" containerName="proxy-httpd" containerID="cri-o://dea3b16d5f1fb2450d7f47e6642bfba65e0758c7033ed6e5c03e0c0359ac7bd7" gracePeriod=30 Jan 05 20:29:03 crc kubenswrapper[4754]: I0105 20:29:03.206720 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e4349961-956a-4a53-a1a5-11fcffcbd0f7" containerName="sg-core" containerID="cri-o://9150c878eceee0002616ea2ef8e18b6d0417064010d528d3ae38c02b7146a5ec" gracePeriod=30 Jan 05 20:29:03 crc kubenswrapper[4754]: I0105 20:29:03.216419 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-wkf6w" Jan 05 20:29:03 crc kubenswrapper[4754]: I0105 20:29:03.216683 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-p84n5" Jan 05 20:29:03 crc kubenswrapper[4754]: I0105 20:29:03.216705 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-wkf6w" event={"ID":"69a9c7cb-a8f4-4099-a4b4-519cfca0a62e","Type":"ContainerDied","Data":"fe13a2e7f38e65930426e0f328136f5e2179323d81aaa6e7791de8bef88aa713"} Jan 05 20:29:03 crc kubenswrapper[4754]: I0105 20:29:03.216735 4754 scope.go:117] "RemoveContainer" containerID="13dc15e223e55fee6944a7c78e56e5be146b36d9556913d5ec9a13b65fb81849" Jan 05 20:29:03 crc kubenswrapper[4754]: I0105 20:29:03.217050 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-vpqrq" Jan 05 20:29:03 crc kubenswrapper[4754]: I0105 20:29:03.269964 4754 scope.go:117] "RemoveContainer" containerID="2b0e827bee21c87ad5e40036f0fd948987a6b1282d54bfa02de73460e271d81d" Jan 05 20:29:03 crc kubenswrapper[4754]: I0105 20:29:03.343481 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7656f8454b-v7vj9" Jan 05 20:29:03 crc kubenswrapper[4754]: I0105 20:29:03.350771 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-wkf6w"] Jan 05 20:29:03 crc kubenswrapper[4754]: I0105 20:29:03.364690 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-wkf6w"] Jan 05 20:29:03 crc kubenswrapper[4754]: I0105 20:29:03.649530 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69a9c7cb-a8f4-4099-a4b4-519cfca0a62e" path="/var/lib/kubelet/pods/69a9c7cb-a8f4-4099-a4b4-519cfca0a62e/volumes" Jan 05 20:29:03 crc kubenswrapper[4754]: I0105 20:29:03.653868 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 05 20:29:03 crc kubenswrapper[4754]: E0105 20:29:03.654659 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69a9c7cb-a8f4-4099-a4b4-519cfca0a62e" containerName="dnsmasq-dns" Jan 05 20:29:03 crc kubenswrapper[4754]: I0105 20:29:03.654676 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="69a9c7cb-a8f4-4099-a4b4-519cfca0a62e" containerName="dnsmasq-dns" Jan 05 20:29:03 crc kubenswrapper[4754]: E0105 20:29:03.654693 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3521f408-9c1b-440b-b7c1-fdc7058f9eb3" containerName="heat-db-sync" Jan 05 20:29:03 crc kubenswrapper[4754]: I0105 20:29:03.654700 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="3521f408-9c1b-440b-b7c1-fdc7058f9eb3" containerName="heat-db-sync" Jan 05 20:29:03 crc kubenswrapper[4754]: E0105 20:29:03.654720 4754 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c19555bb-f08e-4ff7-a6d3-26615858d3f3" containerName="cinder-db-sync" Jan 05 20:29:03 crc kubenswrapper[4754]: I0105 20:29:03.654727 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="c19555bb-f08e-4ff7-a6d3-26615858d3f3" containerName="cinder-db-sync" Jan 05 20:29:03 crc kubenswrapper[4754]: E0105 20:29:03.654745 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69a9c7cb-a8f4-4099-a4b4-519cfca0a62e" containerName="init" Jan 05 20:29:03 crc kubenswrapper[4754]: I0105 20:29:03.654753 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="69a9c7cb-a8f4-4099-a4b4-519cfca0a62e" containerName="init" Jan 05 20:29:03 crc kubenswrapper[4754]: I0105 20:29:03.655024 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="c19555bb-f08e-4ff7-a6d3-26615858d3f3" containerName="cinder-db-sync" Jan 05 20:29:03 crc kubenswrapper[4754]: I0105 20:29:03.655053 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="69a9c7cb-a8f4-4099-a4b4-519cfca0a62e" containerName="dnsmasq-dns" Jan 05 20:29:03 crc kubenswrapper[4754]: I0105 20:29:03.655071 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="3521f408-9c1b-440b-b7c1-fdc7058f9eb3" containerName="heat-db-sync" Jan 05 20:29:03 crc kubenswrapper[4754]: I0105 20:29:03.656862 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 05 20:29:03 crc kubenswrapper[4754]: I0105 20:29:03.683508 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 05 20:29:03 crc kubenswrapper[4754]: I0105 20:29:03.684590 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-x49xt" Jan 05 20:29:03 crc kubenswrapper[4754]: I0105 20:29:03.684710 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 05 20:29:03 crc kubenswrapper[4754]: I0105 20:29:03.684827 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 05 20:29:03 crc kubenswrapper[4754]: I0105 20:29:03.805473 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a5623850-560b-42ca-9b3e-2e690fb5e7af-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a5623850-560b-42ca-9b3e-2e690fb5e7af\") " pod="openstack/cinder-scheduler-0" Jan 05 20:29:03 crc kubenswrapper[4754]: I0105 20:29:03.805533 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5623850-560b-42ca-9b3e-2e690fb5e7af-config-data\") pod \"cinder-scheduler-0\" (UID: \"a5623850-560b-42ca-9b3e-2e690fb5e7af\") " pod="openstack/cinder-scheduler-0" Jan 05 20:29:03 crc kubenswrapper[4754]: I0105 20:29:03.805634 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vc8g\" (UniqueName: \"kubernetes.io/projected/a5623850-560b-42ca-9b3e-2e690fb5e7af-kube-api-access-9vc8g\") pod \"cinder-scheduler-0\" (UID: \"a5623850-560b-42ca-9b3e-2e690fb5e7af\") " pod="openstack/cinder-scheduler-0" Jan 05 20:29:03 crc kubenswrapper[4754]: I0105 20:29:03.805666 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5623850-560b-42ca-9b3e-2e690fb5e7af-scripts\") pod \"cinder-scheduler-0\" (UID: \"a5623850-560b-42ca-9b3e-2e690fb5e7af\") " pod="openstack/cinder-scheduler-0" Jan 05 20:29:03 crc kubenswrapper[4754]: I0105 20:29:03.808908 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a5623850-560b-42ca-9b3e-2e690fb5e7af-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a5623850-560b-42ca-9b3e-2e690fb5e7af\") " pod="openstack/cinder-scheduler-0" Jan 05 20:29:03 crc kubenswrapper[4754]: I0105 20:29:03.808950 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5623850-560b-42ca-9b3e-2e690fb5e7af-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a5623850-560b-42ca-9b3e-2e690fb5e7af\") " pod="openstack/cinder-scheduler-0" Jan 05 20:29:03 crc kubenswrapper[4754]: I0105 20:29:03.808736 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 05 20:29:03 crc kubenswrapper[4754]: I0105 20:29:03.913704 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5623850-560b-42ca-9b3e-2e690fb5e7af-scripts\") pod \"cinder-scheduler-0\" (UID: \"a5623850-560b-42ca-9b3e-2e690fb5e7af\") " pod="openstack/cinder-scheduler-0" Jan 05 20:29:03 crc kubenswrapper[4754]: I0105 20:29:03.913762 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a5623850-560b-42ca-9b3e-2e690fb5e7af-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a5623850-560b-42ca-9b3e-2e690fb5e7af\") " pod="openstack/cinder-scheduler-0" Jan 05 20:29:03 crc kubenswrapper[4754]: I0105 20:29:03.913810 
4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5623850-560b-42ca-9b3e-2e690fb5e7af-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a5623850-560b-42ca-9b3e-2e690fb5e7af\") " pod="openstack/cinder-scheduler-0" Jan 05 20:29:03 crc kubenswrapper[4754]: I0105 20:29:03.913976 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a5623850-560b-42ca-9b3e-2e690fb5e7af-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a5623850-560b-42ca-9b3e-2e690fb5e7af\") " pod="openstack/cinder-scheduler-0" Jan 05 20:29:03 crc kubenswrapper[4754]: I0105 20:29:03.914010 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5623850-560b-42ca-9b3e-2e690fb5e7af-config-data\") pod \"cinder-scheduler-0\" (UID: \"a5623850-560b-42ca-9b3e-2e690fb5e7af\") " pod="openstack/cinder-scheduler-0" Jan 05 20:29:03 crc kubenswrapper[4754]: I0105 20:29:03.914077 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vc8g\" (UniqueName: \"kubernetes.io/projected/a5623850-560b-42ca-9b3e-2e690fb5e7af-kube-api-access-9vc8g\") pod \"cinder-scheduler-0\" (UID: \"a5623850-560b-42ca-9b3e-2e690fb5e7af\") " pod="openstack/cinder-scheduler-0" Jan 05 20:29:03 crc kubenswrapper[4754]: I0105 20:29:03.915079 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a5623850-560b-42ca-9b3e-2e690fb5e7af-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a5623850-560b-42ca-9b3e-2e690fb5e7af\") " pod="openstack/cinder-scheduler-0" Jan 05 20:29:03 crc kubenswrapper[4754]: I0105 20:29:03.938467 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-wwjkm"] Jan 05 20:29:03 crc kubenswrapper[4754]: I0105 
20:29:03.953221 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5623850-560b-42ca-9b3e-2e690fb5e7af-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a5623850-560b-42ca-9b3e-2e690fb5e7af\") " pod="openstack/cinder-scheduler-0" Jan 05 20:29:03 crc kubenswrapper[4754]: I0105 20:29:03.957894 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a5623850-560b-42ca-9b3e-2e690fb5e7af-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a5623850-560b-42ca-9b3e-2e690fb5e7af\") " pod="openstack/cinder-scheduler-0" Jan 05 20:29:03 crc kubenswrapper[4754]: I0105 20:29:03.964003 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5623850-560b-42ca-9b3e-2e690fb5e7af-scripts\") pod \"cinder-scheduler-0\" (UID: \"a5623850-560b-42ca-9b3e-2e690fb5e7af\") " pod="openstack/cinder-scheduler-0" Jan 05 20:29:03 crc kubenswrapper[4754]: I0105 20:29:03.970763 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vc8g\" (UniqueName: \"kubernetes.io/projected/a5623850-560b-42ca-9b3e-2e690fb5e7af-kube-api-access-9vc8g\") pod \"cinder-scheduler-0\" (UID: \"a5623850-560b-42ca-9b3e-2e690fb5e7af\") " pod="openstack/cinder-scheduler-0" Jan 05 20:29:04 crc kubenswrapper[4754]: I0105 20:29:04.006803 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-wwjkm"] Jan 05 20:29:04 crc kubenswrapper[4754]: I0105 20:29:04.007002 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-wwjkm" Jan 05 20:29:04 crc kubenswrapper[4754]: I0105 20:29:04.044770 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5623850-560b-42ca-9b3e-2e690fb5e7af-config-data\") pod \"cinder-scheduler-0\" (UID: \"a5623850-560b-42ca-9b3e-2e690fb5e7af\") " pod="openstack/cinder-scheduler-0" Jan 05 20:29:04 crc kubenswrapper[4754]: I0105 20:29:04.128198 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 05 20:29:04 crc kubenswrapper[4754]: I0105 20:29:04.334674 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b0677b2-c5d5-4a04-9f26-52aa89506809-config\") pod \"dnsmasq-dns-6bb4fc677f-wwjkm\" (UID: \"7b0677b2-c5d5-4a04-9f26-52aa89506809\") " pod="openstack/dnsmasq-dns-6bb4fc677f-wwjkm" Jan 05 20:29:04 crc kubenswrapper[4754]: I0105 20:29:04.335247 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7b0677b2-c5d5-4a04-9f26-52aa89506809-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-wwjkm\" (UID: \"7b0677b2-c5d5-4a04-9f26-52aa89506809\") " pod="openstack/dnsmasq-dns-6bb4fc677f-wwjkm" Jan 05 20:29:04 crc kubenswrapper[4754]: I0105 20:29:04.335641 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7b0677b2-c5d5-4a04-9f26-52aa89506809-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-wwjkm\" (UID: \"7b0677b2-c5d5-4a04-9f26-52aa89506809\") " pod="openstack/dnsmasq-dns-6bb4fc677f-wwjkm" Jan 05 20:29:04 crc kubenswrapper[4754]: I0105 20:29:04.335676 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/7b0677b2-c5d5-4a04-9f26-52aa89506809-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-wwjkm\" (UID: \"7b0677b2-c5d5-4a04-9f26-52aa89506809\") " pod="openstack/dnsmasq-dns-6bb4fc677f-wwjkm" Jan 05 20:29:04 crc kubenswrapper[4754]: I0105 20:29:04.335877 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l72x\" (UniqueName: \"kubernetes.io/projected/7b0677b2-c5d5-4a04-9f26-52aa89506809-kube-api-access-7l72x\") pod \"dnsmasq-dns-6bb4fc677f-wwjkm\" (UID: \"7b0677b2-c5d5-4a04-9f26-52aa89506809\") " pod="openstack/dnsmasq-dns-6bb4fc677f-wwjkm" Jan 05 20:29:04 crc kubenswrapper[4754]: I0105 20:29:04.349873 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7b0677b2-c5d5-4a04-9f26-52aa89506809-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-wwjkm\" (UID: \"7b0677b2-c5d5-4a04-9f26-52aa89506809\") " pod="openstack/dnsmasq-dns-6bb4fc677f-wwjkm" Jan 05 20:29:04 crc kubenswrapper[4754]: I0105 20:29:04.402236 4754 generic.go:334] "Generic (PLEG): container finished" podID="e4349961-956a-4a53-a1a5-11fcffcbd0f7" containerID="dea3b16d5f1fb2450d7f47e6642bfba65e0758c7033ed6e5c03e0c0359ac7bd7" exitCode=0 Jan 05 20:29:04 crc kubenswrapper[4754]: I0105 20:29:04.402306 4754 generic.go:334] "Generic (PLEG): container finished" podID="e4349961-956a-4a53-a1a5-11fcffcbd0f7" containerID="9150c878eceee0002616ea2ef8e18b6d0417064010d528d3ae38c02b7146a5ec" exitCode=2 Jan 05 20:29:04 crc kubenswrapper[4754]: I0105 20:29:04.402379 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4349961-956a-4a53-a1a5-11fcffcbd0f7","Type":"ContainerDied","Data":"dea3b16d5f1fb2450d7f47e6642bfba65e0758c7033ed6e5c03e0c0359ac7bd7"} Jan 05 20:29:04 crc kubenswrapper[4754]: I0105 20:29:04.402422 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"e4349961-956a-4a53-a1a5-11fcffcbd0f7","Type":"ContainerDied","Data":"9150c878eceee0002616ea2ef8e18b6d0417064010d528d3ae38c02b7146a5ec"} Jan 05 20:29:04 crc kubenswrapper[4754]: I0105 20:29:04.469248 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7b0677b2-c5d5-4a04-9f26-52aa89506809-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-wwjkm\" (UID: \"7b0677b2-c5d5-4a04-9f26-52aa89506809\") " pod="openstack/dnsmasq-dns-6bb4fc677f-wwjkm" Jan 05 20:29:04 crc kubenswrapper[4754]: I0105 20:29:04.469320 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7b0677b2-c5d5-4a04-9f26-52aa89506809-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-wwjkm\" (UID: \"7b0677b2-c5d5-4a04-9f26-52aa89506809\") " pod="openstack/dnsmasq-dns-6bb4fc677f-wwjkm" Jan 05 20:29:04 crc kubenswrapper[4754]: I0105 20:29:04.469558 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7l72x\" (UniqueName: \"kubernetes.io/projected/7b0677b2-c5d5-4a04-9f26-52aa89506809-kube-api-access-7l72x\") pod \"dnsmasq-dns-6bb4fc677f-wwjkm\" (UID: \"7b0677b2-c5d5-4a04-9f26-52aa89506809\") " pod="openstack/dnsmasq-dns-6bb4fc677f-wwjkm" Jan 05 20:29:04 crc kubenswrapper[4754]: I0105 20:29:04.469841 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7b0677b2-c5d5-4a04-9f26-52aa89506809-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-wwjkm\" (UID: \"7b0677b2-c5d5-4a04-9f26-52aa89506809\") " pod="openstack/dnsmasq-dns-6bb4fc677f-wwjkm" Jan 05 20:29:04 crc kubenswrapper[4754]: I0105 20:29:04.469926 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b0677b2-c5d5-4a04-9f26-52aa89506809-config\") pod \"dnsmasq-dns-6bb4fc677f-wwjkm\" (UID: 
\"7b0677b2-c5d5-4a04-9f26-52aa89506809\") " pod="openstack/dnsmasq-dns-6bb4fc677f-wwjkm" Jan 05 20:29:04 crc kubenswrapper[4754]: I0105 20:29:04.469959 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7b0677b2-c5d5-4a04-9f26-52aa89506809-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-wwjkm\" (UID: \"7b0677b2-c5d5-4a04-9f26-52aa89506809\") " pod="openstack/dnsmasq-dns-6bb4fc677f-wwjkm" Jan 05 20:29:04 crc kubenswrapper[4754]: I0105 20:29:04.472157 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7b0677b2-c5d5-4a04-9f26-52aa89506809-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-wwjkm\" (UID: \"7b0677b2-c5d5-4a04-9f26-52aa89506809\") " pod="openstack/dnsmasq-dns-6bb4fc677f-wwjkm" Jan 05 20:29:04 crc kubenswrapper[4754]: I0105 20:29:04.472742 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7b0677b2-c5d5-4a04-9f26-52aa89506809-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-wwjkm\" (UID: \"7b0677b2-c5d5-4a04-9f26-52aa89506809\") " pod="openstack/dnsmasq-dns-6bb4fc677f-wwjkm" Jan 05 20:29:04 crc kubenswrapper[4754]: I0105 20:29:04.473263 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7b0677b2-c5d5-4a04-9f26-52aa89506809-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-wwjkm\" (UID: \"7b0677b2-c5d5-4a04-9f26-52aa89506809\") " pod="openstack/dnsmasq-dns-6bb4fc677f-wwjkm" Jan 05 20:29:04 crc kubenswrapper[4754]: I0105 20:29:04.474273 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7b0677b2-c5d5-4a04-9f26-52aa89506809-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-wwjkm\" (UID: \"7b0677b2-c5d5-4a04-9f26-52aa89506809\") " 
pod="openstack/dnsmasq-dns-6bb4fc677f-wwjkm" Jan 05 20:29:04 crc kubenswrapper[4754]: I0105 20:29:04.474827 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b0677b2-c5d5-4a04-9f26-52aa89506809-config\") pod \"dnsmasq-dns-6bb4fc677f-wwjkm\" (UID: \"7b0677b2-c5d5-4a04-9f26-52aa89506809\") " pod="openstack/dnsmasq-dns-6bb4fc677f-wwjkm" Jan 05 20:29:04 crc kubenswrapper[4754]: I0105 20:29:04.488976 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 05 20:29:04 crc kubenswrapper[4754]: I0105 20:29:04.500337 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 05 20:29:04 crc kubenswrapper[4754]: I0105 20:29:04.523919 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 05 20:29:04 crc kubenswrapper[4754]: I0105 20:29:04.548268 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 05 20:29:04 crc kubenswrapper[4754]: I0105 20:29:04.549909 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7l72x\" (UniqueName: \"kubernetes.io/projected/7b0677b2-c5d5-4a04-9f26-52aa89506809-kube-api-access-7l72x\") pod \"dnsmasq-dns-6bb4fc677f-wwjkm\" (UID: \"7b0677b2-c5d5-4a04-9f26-52aa89506809\") " pod="openstack/dnsmasq-dns-6bb4fc677f-wwjkm" Jan 05 20:29:04 crc kubenswrapper[4754]: I0105 20:29:04.682992 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/083b5c43-4bff-4782-98cb-8cd81687300d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"083b5c43-4bff-4782-98cb-8cd81687300d\") " pod="openstack/cinder-api-0" Jan 05 20:29:04 crc kubenswrapper[4754]: I0105 20:29:04.695420 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-wwjkm" Jan 05 20:29:04 crc kubenswrapper[4754]: I0105 20:29:04.696925 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/083b5c43-4bff-4782-98cb-8cd81687300d-logs\") pod \"cinder-api-0\" (UID: \"083b5c43-4bff-4782-98cb-8cd81687300d\") " pod="openstack/cinder-api-0" Jan 05 20:29:04 crc kubenswrapper[4754]: I0105 20:29:04.704158 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/083b5c43-4bff-4782-98cb-8cd81687300d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"083b5c43-4bff-4782-98cb-8cd81687300d\") " pod="openstack/cinder-api-0" Jan 05 20:29:04 crc kubenswrapper[4754]: I0105 20:29:04.704439 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/083b5c43-4bff-4782-98cb-8cd81687300d-config-data\") pod \"cinder-api-0\" (UID: \"083b5c43-4bff-4782-98cb-8cd81687300d\") " pod="openstack/cinder-api-0" Jan 05 20:29:04 crc kubenswrapper[4754]: I0105 20:29:04.704628 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9zm7\" (UniqueName: \"kubernetes.io/projected/083b5c43-4bff-4782-98cb-8cd81687300d-kube-api-access-t9zm7\") pod \"cinder-api-0\" (UID: \"083b5c43-4bff-4782-98cb-8cd81687300d\") " pod="openstack/cinder-api-0" Jan 05 20:29:04 crc kubenswrapper[4754]: I0105 20:29:04.704770 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/083b5c43-4bff-4782-98cb-8cd81687300d-config-data-custom\") pod \"cinder-api-0\" (UID: \"083b5c43-4bff-4782-98cb-8cd81687300d\") " pod="openstack/cinder-api-0" Jan 05 20:29:04 crc kubenswrapper[4754]: I0105 
20:29:04.704938 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/083b5c43-4bff-4782-98cb-8cd81687300d-scripts\") pod \"cinder-api-0\" (UID: \"083b5c43-4bff-4782-98cb-8cd81687300d\") " pod="openstack/cinder-api-0" Jan 05 20:29:04 crc kubenswrapper[4754]: I0105 20:29:04.811070 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/083b5c43-4bff-4782-98cb-8cd81687300d-scripts\") pod \"cinder-api-0\" (UID: \"083b5c43-4bff-4782-98cb-8cd81687300d\") " pod="openstack/cinder-api-0" Jan 05 20:29:04 crc kubenswrapper[4754]: I0105 20:29:04.811228 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/083b5c43-4bff-4782-98cb-8cd81687300d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"083b5c43-4bff-4782-98cb-8cd81687300d\") " pod="openstack/cinder-api-0" Jan 05 20:29:04 crc kubenswrapper[4754]: I0105 20:29:04.811258 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/083b5c43-4bff-4782-98cb-8cd81687300d-logs\") pod \"cinder-api-0\" (UID: \"083b5c43-4bff-4782-98cb-8cd81687300d\") " pod="openstack/cinder-api-0" Jan 05 20:29:04 crc kubenswrapper[4754]: I0105 20:29:04.811308 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/083b5c43-4bff-4782-98cb-8cd81687300d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"083b5c43-4bff-4782-98cb-8cd81687300d\") " pod="openstack/cinder-api-0" Jan 05 20:29:04 crc kubenswrapper[4754]: I0105 20:29:04.811341 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/083b5c43-4bff-4782-98cb-8cd81687300d-config-data\") pod \"cinder-api-0\" (UID: 
\"083b5c43-4bff-4782-98cb-8cd81687300d\") " pod="openstack/cinder-api-0" Jan 05 20:29:04 crc kubenswrapper[4754]: I0105 20:29:04.811392 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9zm7\" (UniqueName: \"kubernetes.io/projected/083b5c43-4bff-4782-98cb-8cd81687300d-kube-api-access-t9zm7\") pod \"cinder-api-0\" (UID: \"083b5c43-4bff-4782-98cb-8cd81687300d\") " pod="openstack/cinder-api-0" Jan 05 20:29:04 crc kubenswrapper[4754]: I0105 20:29:04.811427 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/083b5c43-4bff-4782-98cb-8cd81687300d-config-data-custom\") pod \"cinder-api-0\" (UID: \"083b5c43-4bff-4782-98cb-8cd81687300d\") " pod="openstack/cinder-api-0" Jan 05 20:29:04 crc kubenswrapper[4754]: I0105 20:29:04.816427 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/083b5c43-4bff-4782-98cb-8cd81687300d-logs\") pod \"cinder-api-0\" (UID: \"083b5c43-4bff-4782-98cb-8cd81687300d\") " pod="openstack/cinder-api-0" Jan 05 20:29:04 crc kubenswrapper[4754]: I0105 20:29:04.826831 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/083b5c43-4bff-4782-98cb-8cd81687300d-config-data\") pod \"cinder-api-0\" (UID: \"083b5c43-4bff-4782-98cb-8cd81687300d\") " pod="openstack/cinder-api-0" Jan 05 20:29:04 crc kubenswrapper[4754]: I0105 20:29:04.826915 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/083b5c43-4bff-4782-98cb-8cd81687300d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"083b5c43-4bff-4782-98cb-8cd81687300d\") " pod="openstack/cinder-api-0" Jan 05 20:29:04 crc kubenswrapper[4754]: I0105 20:29:04.830898 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/083b5c43-4bff-4782-98cb-8cd81687300d-scripts\") pod \"cinder-api-0\" (UID: \"083b5c43-4bff-4782-98cb-8cd81687300d\") " pod="openstack/cinder-api-0" Jan 05 20:29:04 crc kubenswrapper[4754]: I0105 20:29:04.832578 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/083b5c43-4bff-4782-98cb-8cd81687300d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"083b5c43-4bff-4782-98cb-8cd81687300d\") " pod="openstack/cinder-api-0" Jan 05 20:29:04 crc kubenswrapper[4754]: I0105 20:29:04.836586 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/083b5c43-4bff-4782-98cb-8cd81687300d-config-data-custom\") pod \"cinder-api-0\" (UID: \"083b5c43-4bff-4782-98cb-8cd81687300d\") " pod="openstack/cinder-api-0" Jan 05 20:29:04 crc kubenswrapper[4754]: I0105 20:29:04.864799 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9zm7\" (UniqueName: \"kubernetes.io/projected/083b5c43-4bff-4782-98cb-8cd81687300d-kube-api-access-t9zm7\") pod \"cinder-api-0\" (UID: \"083b5c43-4bff-4782-98cb-8cd81687300d\") " pod="openstack/cinder-api-0" Jan 05 20:29:04 crc kubenswrapper[4754]: I0105 20:29:04.873333 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 05 20:29:05 crc kubenswrapper[4754]: I0105 20:29:05.159206 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 05 20:29:05 crc kubenswrapper[4754]: I0105 20:29:05.166250 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7cbd467bc4-dt25q" Jan 05 20:29:05 crc kubenswrapper[4754]: I0105 20:29:05.430417 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-69c489cbf4-npzvh"] Jan 05 20:29:05 crc kubenswrapper[4754]: I0105 20:29:05.432064 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-69c489cbf4-npzvh" podUID="9a760b6b-c03a-450c-a624-a918f6f28a8c" containerName="barbican-api-log" containerID="cri-o://bc4304f30c418c0efb44a1f23d05f3e2b5cf007eedb2f1c729ec21770dfa03c1" gracePeriod=30 Jan 05 20:29:05 crc kubenswrapper[4754]: I0105 20:29:05.432095 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-69c489cbf4-npzvh" podUID="9a760b6b-c03a-450c-a624-a918f6f28a8c" containerName="barbican-api" containerID="cri-o://467ed58cd929a0332540b09c263286f673b80ba5249574b7caca9e1c38200f3e" gracePeriod=30 Jan 05 20:29:05 crc kubenswrapper[4754]: I0105 20:29:05.451725 4754 generic.go:334] "Generic (PLEG): container finished" podID="e4349961-956a-4a53-a1a5-11fcffcbd0f7" containerID="c077c60356d4c8453a24a113c03002f3ce4ea7be083f9569f55ee21f68e27fff" exitCode=0 Jan 05 20:29:05 crc kubenswrapper[4754]: I0105 20:29:05.451828 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4349961-956a-4a53-a1a5-11fcffcbd0f7","Type":"ContainerDied","Data":"c077c60356d4c8453a24a113c03002f3ce4ea7be083f9569f55ee21f68e27fff"} Jan 05 20:29:05 crc kubenswrapper[4754]: I0105 20:29:05.454856 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"a5623850-560b-42ca-9b3e-2e690fb5e7af","Type":"ContainerStarted","Data":"3a91d992ba23b4ce6be7fb70bc79976355b25f59560e2b5d32591b2b539831c5"} Jan 05 20:29:05 crc kubenswrapper[4754]: I0105 20:29:05.470098 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-69c489cbf4-npzvh" podUID="9a760b6b-c03a-450c-a624-a918f6f28a8c" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.201:9311/healthcheck\": EOF" Jan 05 20:29:05 crc kubenswrapper[4754]: I0105 20:29:05.531264 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-77dc856cfc-xjgrw" Jan 05 20:29:05 crc kubenswrapper[4754]: I0105 20:29:05.681897 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7656f8454b-v7vj9"] Jan 05 20:29:05 crc kubenswrapper[4754]: I0105 20:29:05.682846 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7656f8454b-v7vj9" podUID="d754693c-f3ef-4b36-b827-f88a11c1b76a" containerName="neutron-api" containerID="cri-o://3a936e2e853b1adc051eae7f3c475e66763ab184e105d8d9115e794f4d48e515" gracePeriod=30 Jan 05 20:29:05 crc kubenswrapper[4754]: I0105 20:29:05.683254 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7656f8454b-v7vj9" podUID="d754693c-f3ef-4b36-b827-f88a11c1b76a" containerName="neutron-httpd" containerID="cri-o://5d518490397feb11906f343e48d9d4351c2229f2328a401e73075b7968823bca" gracePeriod=30 Jan 05 20:29:05 crc kubenswrapper[4754]: I0105 20:29:05.851983 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 05 20:29:05 crc kubenswrapper[4754]: I0105 20:29:05.969450 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4349961-956a-4a53-a1a5-11fcffcbd0f7-log-httpd\") pod \"e4349961-956a-4a53-a1a5-11fcffcbd0f7\" (UID: \"e4349961-956a-4a53-a1a5-11fcffcbd0f7\") " Jan 05 20:29:05 crc kubenswrapper[4754]: I0105 20:29:05.969936 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4349961-956a-4a53-a1a5-11fcffcbd0f7-run-httpd\") pod \"e4349961-956a-4a53-a1a5-11fcffcbd0f7\" (UID: \"e4349961-956a-4a53-a1a5-11fcffcbd0f7\") " Jan 05 20:29:05 crc kubenswrapper[4754]: I0105 20:29:05.970035 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4349961-956a-4a53-a1a5-11fcffcbd0f7-scripts\") pod \"e4349961-956a-4a53-a1a5-11fcffcbd0f7\" (UID: \"e4349961-956a-4a53-a1a5-11fcffcbd0f7\") " Jan 05 20:29:05 crc kubenswrapper[4754]: I0105 20:29:05.970082 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4349961-956a-4a53-a1a5-11fcffcbd0f7-combined-ca-bundle\") pod \"e4349961-956a-4a53-a1a5-11fcffcbd0f7\" (UID: \"e4349961-956a-4a53-a1a5-11fcffcbd0f7\") " Jan 05 20:29:05 crc kubenswrapper[4754]: I0105 20:29:05.970111 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgt4t\" (UniqueName: \"kubernetes.io/projected/e4349961-956a-4a53-a1a5-11fcffcbd0f7-kube-api-access-hgt4t\") pod \"e4349961-956a-4a53-a1a5-11fcffcbd0f7\" (UID: \"e4349961-956a-4a53-a1a5-11fcffcbd0f7\") " Jan 05 20:29:05 crc kubenswrapper[4754]: I0105 20:29:05.970165 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/e4349961-956a-4a53-a1a5-11fcffcbd0f7-sg-core-conf-yaml\") pod \"e4349961-956a-4a53-a1a5-11fcffcbd0f7\" (UID: \"e4349961-956a-4a53-a1a5-11fcffcbd0f7\") " Jan 05 20:29:05 crc kubenswrapper[4754]: I0105 20:29:05.970268 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4349961-956a-4a53-a1a5-11fcffcbd0f7-config-data\") pod \"e4349961-956a-4a53-a1a5-11fcffcbd0f7\" (UID: \"e4349961-956a-4a53-a1a5-11fcffcbd0f7\") " Jan 05 20:29:05 crc kubenswrapper[4754]: I0105 20:29:05.970492 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4349961-956a-4a53-a1a5-11fcffcbd0f7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e4349961-956a-4a53-a1a5-11fcffcbd0f7" (UID: "e4349961-956a-4a53-a1a5-11fcffcbd0f7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:29:05 crc kubenswrapper[4754]: I0105 20:29:05.971139 4754 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4349961-956a-4a53-a1a5-11fcffcbd0f7-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:05 crc kubenswrapper[4754]: I0105 20:29:05.974858 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4349961-956a-4a53-a1a5-11fcffcbd0f7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e4349961-956a-4a53-a1a5-11fcffcbd0f7" (UID: "e4349961-956a-4a53-a1a5-11fcffcbd0f7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:29:05 crc kubenswrapper[4754]: I0105 20:29:05.983938 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4349961-956a-4a53-a1a5-11fcffcbd0f7-kube-api-access-hgt4t" (OuterVolumeSpecName: "kube-api-access-hgt4t") pod "e4349961-956a-4a53-a1a5-11fcffcbd0f7" (UID: "e4349961-956a-4a53-a1a5-11fcffcbd0f7"). 
InnerVolumeSpecName "kube-api-access-hgt4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:29:05 crc kubenswrapper[4754]: I0105 20:29:05.986919 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4349961-956a-4a53-a1a5-11fcffcbd0f7-scripts" (OuterVolumeSpecName: "scripts") pod "e4349961-956a-4a53-a1a5-11fcffcbd0f7" (UID: "e4349961-956a-4a53-a1a5-11fcffcbd0f7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:29:06 crc kubenswrapper[4754]: I0105 20:29:06.021155 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4349961-956a-4a53-a1a5-11fcffcbd0f7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e4349961-956a-4a53-a1a5-11fcffcbd0f7" (UID: "e4349961-956a-4a53-a1a5-11fcffcbd0f7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:29:06 crc kubenswrapper[4754]: I0105 20:29:06.082540 4754 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e4349961-956a-4a53-a1a5-11fcffcbd0f7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:06 crc kubenswrapper[4754]: I0105 20:29:06.082564 4754 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4349961-956a-4a53-a1a5-11fcffcbd0f7-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:06 crc kubenswrapper[4754]: I0105 20:29:06.082573 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4349961-956a-4a53-a1a5-11fcffcbd0f7-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:06 crc kubenswrapper[4754]: I0105 20:29:06.082582 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgt4t\" (UniqueName: \"kubernetes.io/projected/e4349961-956a-4a53-a1a5-11fcffcbd0f7-kube-api-access-hgt4t\") on node 
\"crc\" DevicePath \"\"" Jan 05 20:29:06 crc kubenswrapper[4754]: I0105 20:29:06.185452 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4349961-956a-4a53-a1a5-11fcffcbd0f7-config-data" (OuterVolumeSpecName: "config-data") pod "e4349961-956a-4a53-a1a5-11fcffcbd0f7" (UID: "e4349961-956a-4a53-a1a5-11fcffcbd0f7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:29:06 crc kubenswrapper[4754]: I0105 20:29:06.212840 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4349961-956a-4a53-a1a5-11fcffcbd0f7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e4349961-956a-4a53-a1a5-11fcffcbd0f7" (UID: "e4349961-956a-4a53-a1a5-11fcffcbd0f7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:29:06 crc kubenswrapper[4754]: I0105 20:29:06.218064 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 05 20:29:06 crc kubenswrapper[4754]: I0105 20:29:06.285454 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4349961-956a-4a53-a1a5-11fcffcbd0f7-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:06 crc kubenswrapper[4754]: I0105 20:29:06.285958 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4349961-956a-4a53-a1a5-11fcffcbd0f7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:06 crc kubenswrapper[4754]: I0105 20:29:06.375744 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-wwjkm"] Jan 05 20:29:06 crc kubenswrapper[4754]: W0105 20:29:06.446543 4754 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b0677b2_c5d5_4a04_9f26_52aa89506809.slice/crio-dda609464664dcc76b2df134bf99d6e3da8d564f47731fd5f2e43da89a3f1981 WatchSource:0}: Error finding container dda609464664dcc76b2df134bf99d6e3da8d564f47731fd5f2e43da89a3f1981: Status 404 returned error can't find the container with id dda609464664dcc76b2df134bf99d6e3da8d564f47731fd5f2e43da89a3f1981 Jan 05 20:29:06 crc kubenswrapper[4754]: I0105 20:29:06.507814 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-wwjkm" event={"ID":"7b0677b2-c5d5-4a04-9f26-52aa89506809","Type":"ContainerStarted","Data":"dda609464664dcc76b2df134bf99d6e3da8d564f47731fd5f2e43da89a3f1981"} Jan 05 20:29:06 crc kubenswrapper[4754]: I0105 20:29:06.604406 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 05 20:29:06 crc kubenswrapper[4754]: I0105 20:29:06.610534 4754 generic.go:334] "Generic (PLEG): container finished" podID="d754693c-f3ef-4b36-b827-f88a11c1b76a" containerID="5d518490397feb11906f343e48d9d4351c2229f2328a401e73075b7968823bca" exitCode=0 Jan 05 20:29:06 crc kubenswrapper[4754]: I0105 20:29:06.610646 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7656f8454b-v7vj9" event={"ID":"d754693c-f3ef-4b36-b827-f88a11c1b76a","Type":"ContainerDied","Data":"5d518490397feb11906f343e48d9d4351c2229f2328a401e73075b7968823bca"} Jan 05 20:29:06 crc kubenswrapper[4754]: I0105 20:29:06.642428 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4349961-956a-4a53-a1a5-11fcffcbd0f7","Type":"ContainerDied","Data":"f163ff263ca383ebdc86f6df5786374aa3cb58f7ea900bd6537cbb547ab7a088"} Jan 05 20:29:06 crc kubenswrapper[4754]: I0105 20:29:06.642852 4754 scope.go:117] "RemoveContainer" containerID="dea3b16d5f1fb2450d7f47e6642bfba65e0758c7033ed6e5c03e0c0359ac7bd7" Jan 05 20:29:06 crc kubenswrapper[4754]: I0105 20:29:06.643031 4754 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 05 20:29:06 crc kubenswrapper[4754]: I0105 20:29:06.668645 4754 generic.go:334] "Generic (PLEG): container finished" podID="9a760b6b-c03a-450c-a624-a918f6f28a8c" containerID="bc4304f30c418c0efb44a1f23d05f3e2b5cf007eedb2f1c729ec21770dfa03c1" exitCode=143 Jan 05 20:29:06 crc kubenswrapper[4754]: I0105 20:29:06.668798 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-69c489cbf4-npzvh" event={"ID":"9a760b6b-c03a-450c-a624-a918f6f28a8c","Type":"ContainerDied","Data":"bc4304f30c418c0efb44a1f23d05f3e2b5cf007eedb2f1c729ec21770dfa03c1"} Jan 05 20:29:06 crc kubenswrapper[4754]: I0105 20:29:06.804728 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:29:06 crc kubenswrapper[4754]: I0105 20:29:06.840388 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:29:06 crc kubenswrapper[4754]: I0105 20:29:06.862879 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:29:06 crc kubenswrapper[4754]: E0105 20:29:06.864070 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4349961-956a-4a53-a1a5-11fcffcbd0f7" containerName="proxy-httpd" Jan 05 20:29:06 crc kubenswrapper[4754]: I0105 20:29:06.864128 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4349961-956a-4a53-a1a5-11fcffcbd0f7" containerName="proxy-httpd" Jan 05 20:29:06 crc kubenswrapper[4754]: E0105 20:29:06.864158 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4349961-956a-4a53-a1a5-11fcffcbd0f7" containerName="sg-core" Jan 05 20:29:06 crc kubenswrapper[4754]: I0105 20:29:06.864165 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4349961-956a-4a53-a1a5-11fcffcbd0f7" containerName="sg-core" Jan 05 20:29:06 crc kubenswrapper[4754]: E0105 20:29:06.864194 4754 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e4349961-956a-4a53-a1a5-11fcffcbd0f7" containerName="ceilometer-notification-agent" Jan 05 20:29:06 crc kubenswrapper[4754]: I0105 20:29:06.864200 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4349961-956a-4a53-a1a5-11fcffcbd0f7" containerName="ceilometer-notification-agent" Jan 05 20:29:06 crc kubenswrapper[4754]: I0105 20:29:06.865566 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4349961-956a-4a53-a1a5-11fcffcbd0f7" containerName="ceilometer-notification-agent" Jan 05 20:29:06 crc kubenswrapper[4754]: I0105 20:29:06.865605 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4349961-956a-4a53-a1a5-11fcffcbd0f7" containerName="proxy-httpd" Jan 05 20:29:06 crc kubenswrapper[4754]: I0105 20:29:06.865641 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4349961-956a-4a53-a1a5-11fcffcbd0f7" containerName="sg-core" Jan 05 20:29:06 crc kubenswrapper[4754]: I0105 20:29:06.882467 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 05 20:29:06 crc kubenswrapper[4754]: I0105 20:29:06.888133 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 05 20:29:06 crc kubenswrapper[4754]: I0105 20:29:06.888436 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 05 20:29:06 crc kubenswrapper[4754]: I0105 20:29:06.893201 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:29:06 crc kubenswrapper[4754]: I0105 20:29:06.898941 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/84bba751-7ae7-4f46-9673-fb30b2ba2496-log-httpd\") pod \"ceilometer-0\" (UID: \"84bba751-7ae7-4f46-9673-fb30b2ba2496\") " pod="openstack/ceilometer-0" Jan 05 20:29:06 crc kubenswrapper[4754]: I0105 20:29:06.899009 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84bba751-7ae7-4f46-9673-fb30b2ba2496-config-data\") pod \"ceilometer-0\" (UID: \"84bba751-7ae7-4f46-9673-fb30b2ba2496\") " pod="openstack/ceilometer-0" Jan 05 20:29:06 crc kubenswrapper[4754]: I0105 20:29:06.899069 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84bba751-7ae7-4f46-9673-fb30b2ba2496-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"84bba751-7ae7-4f46-9673-fb30b2ba2496\") " pod="openstack/ceilometer-0" Jan 05 20:29:06 crc kubenswrapper[4754]: I0105 20:29:06.899128 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84bba751-7ae7-4f46-9673-fb30b2ba2496-scripts\") pod \"ceilometer-0\" (UID: \"84bba751-7ae7-4f46-9673-fb30b2ba2496\") " 
pod="openstack/ceilometer-0" Jan 05 20:29:06 crc kubenswrapper[4754]: I0105 20:29:06.899159 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/84bba751-7ae7-4f46-9673-fb30b2ba2496-run-httpd\") pod \"ceilometer-0\" (UID: \"84bba751-7ae7-4f46-9673-fb30b2ba2496\") " pod="openstack/ceilometer-0" Jan 05 20:29:06 crc kubenswrapper[4754]: I0105 20:29:06.899200 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/84bba751-7ae7-4f46-9673-fb30b2ba2496-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"84bba751-7ae7-4f46-9673-fb30b2ba2496\") " pod="openstack/ceilometer-0" Jan 05 20:29:06 crc kubenswrapper[4754]: I0105 20:29:06.899242 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwjwm\" (UniqueName: \"kubernetes.io/projected/84bba751-7ae7-4f46-9673-fb30b2ba2496-kube-api-access-qwjwm\") pod \"ceilometer-0\" (UID: \"84bba751-7ae7-4f46-9673-fb30b2ba2496\") " pod="openstack/ceilometer-0" Jan 05 20:29:06 crc kubenswrapper[4754]: I0105 20:29:06.988617 4754 scope.go:117] "RemoveContainer" containerID="9150c878eceee0002616ea2ef8e18b6d0417064010d528d3ae38c02b7146a5ec" Jan 05 20:29:07 crc kubenswrapper[4754]: I0105 20:29:07.003341 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwjwm\" (UniqueName: \"kubernetes.io/projected/84bba751-7ae7-4f46-9673-fb30b2ba2496-kube-api-access-qwjwm\") pod \"ceilometer-0\" (UID: \"84bba751-7ae7-4f46-9673-fb30b2ba2496\") " pod="openstack/ceilometer-0" Jan 05 20:29:07 crc kubenswrapper[4754]: I0105 20:29:07.003479 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/84bba751-7ae7-4f46-9673-fb30b2ba2496-log-httpd\") pod \"ceilometer-0\" (UID: 
\"84bba751-7ae7-4f46-9673-fb30b2ba2496\") " pod="openstack/ceilometer-0" Jan 05 20:29:07 crc kubenswrapper[4754]: I0105 20:29:07.003498 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84bba751-7ae7-4f46-9673-fb30b2ba2496-config-data\") pod \"ceilometer-0\" (UID: \"84bba751-7ae7-4f46-9673-fb30b2ba2496\") " pod="openstack/ceilometer-0" Jan 05 20:29:07 crc kubenswrapper[4754]: I0105 20:29:07.003556 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84bba751-7ae7-4f46-9673-fb30b2ba2496-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"84bba751-7ae7-4f46-9673-fb30b2ba2496\") " pod="openstack/ceilometer-0" Jan 05 20:29:07 crc kubenswrapper[4754]: I0105 20:29:07.003598 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84bba751-7ae7-4f46-9673-fb30b2ba2496-scripts\") pod \"ceilometer-0\" (UID: \"84bba751-7ae7-4f46-9673-fb30b2ba2496\") " pod="openstack/ceilometer-0" Jan 05 20:29:07 crc kubenswrapper[4754]: I0105 20:29:07.003644 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/84bba751-7ae7-4f46-9673-fb30b2ba2496-run-httpd\") pod \"ceilometer-0\" (UID: \"84bba751-7ae7-4f46-9673-fb30b2ba2496\") " pod="openstack/ceilometer-0" Jan 05 20:29:07 crc kubenswrapper[4754]: I0105 20:29:07.003674 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/84bba751-7ae7-4f46-9673-fb30b2ba2496-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"84bba751-7ae7-4f46-9673-fb30b2ba2496\") " pod="openstack/ceilometer-0" Jan 05 20:29:07 crc kubenswrapper[4754]: I0105 20:29:07.012784 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/84bba751-7ae7-4f46-9673-fb30b2ba2496-log-httpd\") pod \"ceilometer-0\" (UID: \"84bba751-7ae7-4f46-9673-fb30b2ba2496\") " pod="openstack/ceilometer-0" Jan 05 20:29:07 crc kubenswrapper[4754]: I0105 20:29:07.013042 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/84bba751-7ae7-4f46-9673-fb30b2ba2496-run-httpd\") pod \"ceilometer-0\" (UID: \"84bba751-7ae7-4f46-9673-fb30b2ba2496\") " pod="openstack/ceilometer-0" Jan 05 20:29:07 crc kubenswrapper[4754]: I0105 20:29:07.017091 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84bba751-7ae7-4f46-9673-fb30b2ba2496-scripts\") pod \"ceilometer-0\" (UID: \"84bba751-7ae7-4f46-9673-fb30b2ba2496\") " pod="openstack/ceilometer-0" Jan 05 20:29:07 crc kubenswrapper[4754]: I0105 20:29:07.046949 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/84bba751-7ae7-4f46-9673-fb30b2ba2496-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"84bba751-7ae7-4f46-9673-fb30b2ba2496\") " pod="openstack/ceilometer-0" Jan 05 20:29:07 crc kubenswrapper[4754]: I0105 20:29:07.052541 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84bba751-7ae7-4f46-9673-fb30b2ba2496-config-data\") pod \"ceilometer-0\" (UID: \"84bba751-7ae7-4f46-9673-fb30b2ba2496\") " pod="openstack/ceilometer-0" Jan 05 20:29:07 crc kubenswrapper[4754]: I0105 20:29:07.065798 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwjwm\" (UniqueName: \"kubernetes.io/projected/84bba751-7ae7-4f46-9673-fb30b2ba2496-kube-api-access-qwjwm\") pod \"ceilometer-0\" (UID: \"84bba751-7ae7-4f46-9673-fb30b2ba2496\") " pod="openstack/ceilometer-0" Jan 05 20:29:07 crc kubenswrapper[4754]: I0105 20:29:07.066212 4754 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84bba751-7ae7-4f46-9673-fb30b2ba2496-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"84bba751-7ae7-4f46-9673-fb30b2ba2496\") " pod="openstack/ceilometer-0" Jan 05 20:29:07 crc kubenswrapper[4754]: I0105 20:29:07.225820 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 05 20:29:07 crc kubenswrapper[4754]: I0105 20:29:07.227094 4754 scope.go:117] "RemoveContainer" containerID="c077c60356d4c8453a24a113c03002f3ce4ea7be083f9569f55ee21f68e27fff" Jan 05 20:29:07 crc kubenswrapper[4754]: I0105 20:29:07.645571 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4349961-956a-4a53-a1a5-11fcffcbd0f7" path="/var/lib/kubelet/pods/e4349961-956a-4a53-a1a5-11fcffcbd0f7/volumes" Jan 05 20:29:07 crc kubenswrapper[4754]: I0105 20:29:07.718720 4754 generic.go:334] "Generic (PLEG): container finished" podID="7b0677b2-c5d5-4a04-9f26-52aa89506809" containerID="a9feb030b3639beeaa4dc42d0f8da3f0baf135bd2be117acdb1421fe661b2bf2" exitCode=0 Jan 05 20:29:07 crc kubenswrapper[4754]: I0105 20:29:07.718877 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-wwjkm" event={"ID":"7b0677b2-c5d5-4a04-9f26-52aa89506809","Type":"ContainerDied","Data":"a9feb030b3639beeaa4dc42d0f8da3f0baf135bd2be117acdb1421fe661b2bf2"} Jan 05 20:29:07 crc kubenswrapper[4754]: I0105 20:29:07.723100 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"083b5c43-4bff-4782-98cb-8cd81687300d","Type":"ContainerStarted","Data":"b62355cf46c106b9973407b2a0b493c17b1cca579ce2fa6e6588061bad3ead81"} Jan 05 20:29:08 crc kubenswrapper[4754]: I0105 20:29:08.105808 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:29:08 crc kubenswrapper[4754]: W0105 20:29:08.110149 4754 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84bba751_7ae7_4f46_9673_fb30b2ba2496.slice/crio-2fd48ee4c8e9e08034c2cd4af5ae03ccd126f43eb84a6a26481256bd9d58781a WatchSource:0}: Error finding container 2fd48ee4c8e9e08034c2cd4af5ae03ccd126f43eb84a6a26481256bd9d58781a: Status 404 returned error can't find the container with id 2fd48ee4c8e9e08034c2cd4af5ae03ccd126f43eb84a6a26481256bd9d58781a Jan 05 20:29:08 crc kubenswrapper[4754]: I0105 20:29:08.749216 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"84bba751-7ae7-4f46-9673-fb30b2ba2496","Type":"ContainerStarted","Data":"2fd48ee4c8e9e08034c2cd4af5ae03ccd126f43eb84a6a26481256bd9d58781a"} Jan 05 20:29:08 crc kubenswrapper[4754]: I0105 20:29:08.760997 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-wwjkm" event={"ID":"7b0677b2-c5d5-4a04-9f26-52aa89506809","Type":"ContainerStarted","Data":"6387ce543464fc9a38093c2bfa92d61efb626b513cd874b141d1b49161e82863"} Jan 05 20:29:08 crc kubenswrapper[4754]: I0105 20:29:08.761430 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bb4fc677f-wwjkm" Jan 05 20:29:08 crc kubenswrapper[4754]: I0105 20:29:08.771723 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"083b5c43-4bff-4782-98cb-8cd81687300d","Type":"ContainerStarted","Data":"75806887aa0b89ddfa8eaf536ca8c55606e4e7fe67640c6784f001cc5fbe75e4"} Jan 05 20:29:08 crc kubenswrapper[4754]: I0105 20:29:08.791234 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bb4fc677f-wwjkm" podStartSLOduration=5.7912121039999995 podStartE2EDuration="5.791212104s" podCreationTimestamp="2026-01-05 20:29:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:29:08.778984573 +0000 UTC m=+1435.488168447" 
watchObservedRunningTime="2026-01-05 20:29:08.791212104 +0000 UTC m=+1435.500395978" Jan 05 20:29:08 crc kubenswrapper[4754]: I0105 20:29:08.796618 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a5623850-560b-42ca-9b3e-2e690fb5e7af","Type":"ContainerStarted","Data":"d6b389a035b18c412981cbbb856938ae8ba46f708d5d96509e2f60ed6f743d23"} Jan 05 20:29:09 crc kubenswrapper[4754]: I0105 20:29:09.707320 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7656f8454b-v7vj9" Jan 05 20:29:09 crc kubenswrapper[4754]: I0105 20:29:09.720709 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d754693c-f3ef-4b36-b827-f88a11c1b76a-ovndb-tls-certs\") pod \"d754693c-f3ef-4b36-b827-f88a11c1b76a\" (UID: \"d754693c-f3ef-4b36-b827-f88a11c1b76a\") " Jan 05 20:29:09 crc kubenswrapper[4754]: I0105 20:29:09.720799 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d754693c-f3ef-4b36-b827-f88a11c1b76a-httpd-config\") pod \"d754693c-f3ef-4b36-b827-f88a11c1b76a\" (UID: \"d754693c-f3ef-4b36-b827-f88a11c1b76a\") " Jan 05 20:29:09 crc kubenswrapper[4754]: I0105 20:29:09.720851 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpqbq\" (UniqueName: \"kubernetes.io/projected/d754693c-f3ef-4b36-b827-f88a11c1b76a-kube-api-access-rpqbq\") pod \"d754693c-f3ef-4b36-b827-f88a11c1b76a\" (UID: \"d754693c-f3ef-4b36-b827-f88a11c1b76a\") " Jan 05 20:29:09 crc kubenswrapper[4754]: I0105 20:29:09.720882 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d754693c-f3ef-4b36-b827-f88a11c1b76a-config\") pod \"d754693c-f3ef-4b36-b827-f88a11c1b76a\" (UID: \"d754693c-f3ef-4b36-b827-f88a11c1b76a\") " Jan 05 20:29:09 crc 
kubenswrapper[4754]: I0105 20:29:09.720924 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d754693c-f3ef-4b36-b827-f88a11c1b76a-combined-ca-bundle\") pod \"d754693c-f3ef-4b36-b827-f88a11c1b76a\" (UID: \"d754693c-f3ef-4b36-b827-f88a11c1b76a\") " Jan 05 20:29:09 crc kubenswrapper[4754]: I0105 20:29:09.729785 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d754693c-f3ef-4b36-b827-f88a11c1b76a-kube-api-access-rpqbq" (OuterVolumeSpecName: "kube-api-access-rpqbq") pod "d754693c-f3ef-4b36-b827-f88a11c1b76a" (UID: "d754693c-f3ef-4b36-b827-f88a11c1b76a"). InnerVolumeSpecName "kube-api-access-rpqbq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:29:09 crc kubenswrapper[4754]: I0105 20:29:09.741817 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d754693c-f3ef-4b36-b827-f88a11c1b76a-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "d754693c-f3ef-4b36-b827-f88a11c1b76a" (UID: "d754693c-f3ef-4b36-b827-f88a11c1b76a"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:29:09 crc kubenswrapper[4754]: I0105 20:29:09.817024 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d754693c-f3ef-4b36-b827-f88a11c1b76a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d754693c-f3ef-4b36-b827-f88a11c1b76a" (UID: "d754693c-f3ef-4b36-b827-f88a11c1b76a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:29:09 crc kubenswrapper[4754]: I0105 20:29:09.823360 4754 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d754693c-f3ef-4b36-b827-f88a11c1b76a-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:09 crc kubenswrapper[4754]: I0105 20:29:09.823390 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpqbq\" (UniqueName: \"kubernetes.io/projected/d754693c-f3ef-4b36-b827-f88a11c1b76a-kube-api-access-rpqbq\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:09 crc kubenswrapper[4754]: I0105 20:29:09.823403 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d754693c-f3ef-4b36-b827-f88a11c1b76a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:09 crc kubenswrapper[4754]: I0105 20:29:09.827861 4754 generic.go:334] "Generic (PLEG): container finished" podID="d754693c-f3ef-4b36-b827-f88a11c1b76a" containerID="3a936e2e853b1adc051eae7f3c475e66763ab184e105d8d9115e794f4d48e515" exitCode=0 Jan 05 20:29:09 crc kubenswrapper[4754]: I0105 20:29:09.828002 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7656f8454b-v7vj9" event={"ID":"d754693c-f3ef-4b36-b827-f88a11c1b76a","Type":"ContainerDied","Data":"3a936e2e853b1adc051eae7f3c475e66763ab184e105d8d9115e794f4d48e515"} Jan 05 20:29:09 crc kubenswrapper[4754]: I0105 20:29:09.828084 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7656f8454b-v7vj9" event={"ID":"d754693c-f3ef-4b36-b827-f88a11c1b76a","Type":"ContainerDied","Data":"daae67a095df97118b314bf757e5e6c0395694dec3e2edf02e902c90058c068a"} Jan 05 20:29:09 crc kubenswrapper[4754]: I0105 20:29:09.828149 4754 scope.go:117] "RemoveContainer" containerID="5d518490397feb11906f343e48d9d4351c2229f2328a401e73075b7968823bca" Jan 05 20:29:09 crc kubenswrapper[4754]: I0105 20:29:09.828336 4754 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7656f8454b-v7vj9" Jan 05 20:29:09 crc kubenswrapper[4754]: I0105 20:29:09.849225 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"083b5c43-4bff-4782-98cb-8cd81687300d","Type":"ContainerStarted","Data":"b4bfcdb7ea1bf296770f91c6697b71f02db8ea213590d8232c58c996e57f9690"} Jan 05 20:29:09 crc kubenswrapper[4754]: I0105 20:29:09.849326 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="083b5c43-4bff-4782-98cb-8cd81687300d" containerName="cinder-api-log" containerID="cri-o://75806887aa0b89ddfa8eaf536ca8c55606e4e7fe67640c6784f001cc5fbe75e4" gracePeriod=30 Jan 05 20:29:09 crc kubenswrapper[4754]: I0105 20:29:09.849378 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="083b5c43-4bff-4782-98cb-8cd81687300d" containerName="cinder-api" containerID="cri-o://b4bfcdb7ea1bf296770f91c6697b71f02db8ea213590d8232c58c996e57f9690" gracePeriod=30 Jan 05 20:29:09 crc kubenswrapper[4754]: I0105 20:29:09.849349 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 05 20:29:09 crc kubenswrapper[4754]: I0105 20:29:09.855273 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a5623850-560b-42ca-9b3e-2e690fb5e7af","Type":"ContainerStarted","Data":"b519839458d1267c46eab42c7cac9e2222c60ab04de850b61b4b50b0594a3cad"} Jan 05 20:29:09 crc kubenswrapper[4754]: I0105 20:29:09.863056 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d754693c-f3ef-4b36-b827-f88a11c1b76a-config" (OuterVolumeSpecName: "config") pod "d754693c-f3ef-4b36-b827-f88a11c1b76a" (UID: "d754693c-f3ef-4b36-b827-f88a11c1b76a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:29:09 crc kubenswrapper[4754]: I0105 20:29:09.877930 4754 scope.go:117] "RemoveContainer" containerID="3a936e2e853b1adc051eae7f3c475e66763ab184e105d8d9115e794f4d48e515" Jan 05 20:29:09 crc kubenswrapper[4754]: I0105 20:29:09.879021 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"84bba751-7ae7-4f46-9673-fb30b2ba2496","Type":"ContainerStarted","Data":"544f40ea5f9ad7b659e8e81028db30af1084c6548cef6985dd30eba18d50bb78"} Jan 05 20:29:09 crc kubenswrapper[4754]: I0105 20:29:09.879053 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"84bba751-7ae7-4f46-9673-fb30b2ba2496","Type":"ContainerStarted","Data":"91af205e7b3afcd2a6ee643bed05e8d5be3784292b57eebfc57b9aec2ecd7270"} Jan 05 20:29:09 crc kubenswrapper[4754]: I0105 20:29:09.879785 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d754693c-f3ef-4b36-b827-f88a11c1b76a-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "d754693c-f3ef-4b36-b827-f88a11c1b76a" (UID: "d754693c-f3ef-4b36-b827-f88a11c1b76a"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:29:09 crc kubenswrapper[4754]: I0105 20:29:09.880767 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.880748941 podStartE2EDuration="6.880748941s" podCreationTimestamp="2026-01-05 20:29:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:29:09.871480388 +0000 UTC m=+1436.580664272" watchObservedRunningTime="2026-01-05 20:29:09.880748941 +0000 UTC m=+1436.589932815" Jan 05 20:29:09 crc kubenswrapper[4754]: I0105 20:29:09.908214 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.690504765 podStartE2EDuration="6.908191028s" podCreationTimestamp="2026-01-05 20:29:03 +0000 UTC" firstStartedPulling="2026-01-05 20:29:05.324778122 +0000 UTC m=+1432.033961986" lastFinishedPulling="2026-01-05 20:29:06.542464375 +0000 UTC m=+1433.251648249" observedRunningTime="2026-01-05 20:29:09.8926073 +0000 UTC m=+1436.601791174" watchObservedRunningTime="2026-01-05 20:29:09.908191028 +0000 UTC m=+1436.617374902" Jan 05 20:29:09 crc kubenswrapper[4754]: I0105 20:29:09.913940 4754 scope.go:117] "RemoveContainer" containerID="5d518490397feb11906f343e48d9d4351c2229f2328a401e73075b7968823bca" Jan 05 20:29:09 crc kubenswrapper[4754]: E0105 20:29:09.916665 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d518490397feb11906f343e48d9d4351c2229f2328a401e73075b7968823bca\": container with ID starting with 5d518490397feb11906f343e48d9d4351c2229f2328a401e73075b7968823bca not found: ID does not exist" containerID="5d518490397feb11906f343e48d9d4351c2229f2328a401e73075b7968823bca" Jan 05 20:29:09 crc kubenswrapper[4754]: I0105 20:29:09.916773 4754 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5d518490397feb11906f343e48d9d4351c2229f2328a401e73075b7968823bca"} err="failed to get container status \"5d518490397feb11906f343e48d9d4351c2229f2328a401e73075b7968823bca\": rpc error: code = NotFound desc = could not find container \"5d518490397feb11906f343e48d9d4351c2229f2328a401e73075b7968823bca\": container with ID starting with 5d518490397feb11906f343e48d9d4351c2229f2328a401e73075b7968823bca not found: ID does not exist" Jan 05 20:29:09 crc kubenswrapper[4754]: I0105 20:29:09.916865 4754 scope.go:117] "RemoveContainer" containerID="3a936e2e853b1adc051eae7f3c475e66763ab184e105d8d9115e794f4d48e515" Jan 05 20:29:09 crc kubenswrapper[4754]: E0105 20:29:09.917264 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a936e2e853b1adc051eae7f3c475e66763ab184e105d8d9115e794f4d48e515\": container with ID starting with 3a936e2e853b1adc051eae7f3c475e66763ab184e105d8d9115e794f4d48e515 not found: ID does not exist" containerID="3a936e2e853b1adc051eae7f3c475e66763ab184e105d8d9115e794f4d48e515" Jan 05 20:29:09 crc kubenswrapper[4754]: I0105 20:29:09.917319 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a936e2e853b1adc051eae7f3c475e66763ab184e105d8d9115e794f4d48e515"} err="failed to get container status \"3a936e2e853b1adc051eae7f3c475e66763ab184e105d8d9115e794f4d48e515\": rpc error: code = NotFound desc = could not find container \"3a936e2e853b1adc051eae7f3c475e66763ab184e105d8d9115e794f4d48e515\": container with ID starting with 3a936e2e853b1adc051eae7f3c475e66763ab184e105d8d9115e794f4d48e515 not found: ID does not exist" Jan 05 20:29:09 crc kubenswrapper[4754]: I0105 20:29:09.925799 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/d754693c-f3ef-4b36-b827-f88a11c1b76a-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:09 crc kubenswrapper[4754]: I0105 
20:29:09.925966 4754 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d754693c-f3ef-4b36-b827-f88a11c1b76a-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:10 crc kubenswrapper[4754]: I0105 20:29:10.261516 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7656f8454b-v7vj9"] Jan 05 20:29:10 crc kubenswrapper[4754]: I0105 20:29:10.325844 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7656f8454b-v7vj9"] Jan 05 20:29:10 crc kubenswrapper[4754]: I0105 20:29:10.506933 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 05 20:29:10 crc kubenswrapper[4754]: I0105 20:29:10.684787 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/083b5c43-4bff-4782-98cb-8cd81687300d-etc-machine-id\") pod \"083b5c43-4bff-4782-98cb-8cd81687300d\" (UID: \"083b5c43-4bff-4782-98cb-8cd81687300d\") " Jan 05 20:29:10 crc kubenswrapper[4754]: I0105 20:29:10.685028 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/083b5c43-4bff-4782-98cb-8cd81687300d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "083b5c43-4bff-4782-98cb-8cd81687300d" (UID: "083b5c43-4bff-4782-98cb-8cd81687300d"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 20:29:10 crc kubenswrapper[4754]: I0105 20:29:10.685113 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9zm7\" (UniqueName: \"kubernetes.io/projected/083b5c43-4bff-4782-98cb-8cd81687300d-kube-api-access-t9zm7\") pod \"083b5c43-4bff-4782-98cb-8cd81687300d\" (UID: \"083b5c43-4bff-4782-98cb-8cd81687300d\") " Jan 05 20:29:10 crc kubenswrapper[4754]: I0105 20:29:10.685150 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/083b5c43-4bff-4782-98cb-8cd81687300d-logs\") pod \"083b5c43-4bff-4782-98cb-8cd81687300d\" (UID: \"083b5c43-4bff-4782-98cb-8cd81687300d\") " Jan 05 20:29:10 crc kubenswrapper[4754]: I0105 20:29:10.685330 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/083b5c43-4bff-4782-98cb-8cd81687300d-config-data-custom\") pod \"083b5c43-4bff-4782-98cb-8cd81687300d\" (UID: \"083b5c43-4bff-4782-98cb-8cd81687300d\") " Jan 05 20:29:10 crc kubenswrapper[4754]: I0105 20:29:10.685385 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/083b5c43-4bff-4782-98cb-8cd81687300d-scripts\") pod \"083b5c43-4bff-4782-98cb-8cd81687300d\" (UID: \"083b5c43-4bff-4782-98cb-8cd81687300d\") " Jan 05 20:29:10 crc kubenswrapper[4754]: I0105 20:29:10.685419 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/083b5c43-4bff-4782-98cb-8cd81687300d-config-data\") pod \"083b5c43-4bff-4782-98cb-8cd81687300d\" (UID: \"083b5c43-4bff-4782-98cb-8cd81687300d\") " Jan 05 20:29:10 crc kubenswrapper[4754]: I0105 20:29:10.685438 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/083b5c43-4bff-4782-98cb-8cd81687300d-combined-ca-bundle\") pod \"083b5c43-4bff-4782-98cb-8cd81687300d\" (UID: \"083b5c43-4bff-4782-98cb-8cd81687300d\") " Jan 05 20:29:10 crc kubenswrapper[4754]: I0105 20:29:10.685946 4754 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/083b5c43-4bff-4782-98cb-8cd81687300d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:10 crc kubenswrapper[4754]: I0105 20:29:10.686692 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/083b5c43-4bff-4782-98cb-8cd81687300d-logs" (OuterVolumeSpecName: "logs") pod "083b5c43-4bff-4782-98cb-8cd81687300d" (UID: "083b5c43-4bff-4782-98cb-8cd81687300d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:29:10 crc kubenswrapper[4754]: I0105 20:29:10.693005 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/083b5c43-4bff-4782-98cb-8cd81687300d-scripts" (OuterVolumeSpecName: "scripts") pod "083b5c43-4bff-4782-98cb-8cd81687300d" (UID: "083b5c43-4bff-4782-98cb-8cd81687300d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:29:10 crc kubenswrapper[4754]: I0105 20:29:10.693494 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/083b5c43-4bff-4782-98cb-8cd81687300d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "083b5c43-4bff-4782-98cb-8cd81687300d" (UID: "083b5c43-4bff-4782-98cb-8cd81687300d"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:29:10 crc kubenswrapper[4754]: I0105 20:29:10.694653 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/083b5c43-4bff-4782-98cb-8cd81687300d-kube-api-access-t9zm7" (OuterVolumeSpecName: "kube-api-access-t9zm7") pod "083b5c43-4bff-4782-98cb-8cd81687300d" (UID: "083b5c43-4bff-4782-98cb-8cd81687300d"). InnerVolumeSpecName "kube-api-access-t9zm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:29:10 crc kubenswrapper[4754]: I0105 20:29:10.719486 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/083b5c43-4bff-4782-98cb-8cd81687300d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "083b5c43-4bff-4782-98cb-8cd81687300d" (UID: "083b5c43-4bff-4782-98cb-8cd81687300d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:29:10 crc kubenswrapper[4754]: I0105 20:29:10.760561 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/083b5c43-4bff-4782-98cb-8cd81687300d-config-data" (OuterVolumeSpecName: "config-data") pod "083b5c43-4bff-4782-98cb-8cd81687300d" (UID: "083b5c43-4bff-4782-98cb-8cd81687300d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:29:10 crc kubenswrapper[4754]: I0105 20:29:10.789225 4754 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/083b5c43-4bff-4782-98cb-8cd81687300d-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:10 crc kubenswrapper[4754]: I0105 20:29:10.789276 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/083b5c43-4bff-4782-98cb-8cd81687300d-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:10 crc kubenswrapper[4754]: I0105 20:29:10.789308 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/083b5c43-4bff-4782-98cb-8cd81687300d-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:10 crc kubenswrapper[4754]: I0105 20:29:10.789322 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/083b5c43-4bff-4782-98cb-8cd81687300d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:10 crc kubenswrapper[4754]: I0105 20:29:10.789334 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9zm7\" (UniqueName: \"kubernetes.io/projected/083b5c43-4bff-4782-98cb-8cd81687300d-kube-api-access-t9zm7\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:10 crc kubenswrapper[4754]: I0105 20:29:10.789352 4754 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/083b5c43-4bff-4782-98cb-8cd81687300d-logs\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:10 crc kubenswrapper[4754]: I0105 20:29:10.896716 4754 generic.go:334] "Generic (PLEG): container finished" podID="083b5c43-4bff-4782-98cb-8cd81687300d" containerID="b4bfcdb7ea1bf296770f91c6697b71f02db8ea213590d8232c58c996e57f9690" exitCode=0 Jan 05 20:29:10 crc kubenswrapper[4754]: I0105 20:29:10.896758 4754 generic.go:334] "Generic (PLEG): 
container finished" podID="083b5c43-4bff-4782-98cb-8cd81687300d" containerID="75806887aa0b89ddfa8eaf536ca8c55606e4e7fe67640c6784f001cc5fbe75e4" exitCode=143 Jan 05 20:29:10 crc kubenswrapper[4754]: I0105 20:29:10.896811 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"083b5c43-4bff-4782-98cb-8cd81687300d","Type":"ContainerDied","Data":"b4bfcdb7ea1bf296770f91c6697b71f02db8ea213590d8232c58c996e57f9690"} Jan 05 20:29:10 crc kubenswrapper[4754]: I0105 20:29:10.896844 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"083b5c43-4bff-4782-98cb-8cd81687300d","Type":"ContainerDied","Data":"75806887aa0b89ddfa8eaf536ca8c55606e4e7fe67640c6784f001cc5fbe75e4"} Jan 05 20:29:10 crc kubenswrapper[4754]: I0105 20:29:10.896860 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"083b5c43-4bff-4782-98cb-8cd81687300d","Type":"ContainerDied","Data":"b62355cf46c106b9973407b2a0b493c17b1cca579ce2fa6e6588061bad3ead81"} Jan 05 20:29:10 crc kubenswrapper[4754]: I0105 20:29:10.896880 4754 scope.go:117] "RemoveContainer" containerID="b4bfcdb7ea1bf296770f91c6697b71f02db8ea213590d8232c58c996e57f9690" Jan 05 20:29:10 crc kubenswrapper[4754]: I0105 20:29:10.897091 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 05 20:29:10 crc kubenswrapper[4754]: I0105 20:29:10.903359 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"84bba751-7ae7-4f46-9673-fb30b2ba2496","Type":"ContainerStarted","Data":"018398e57dd76b95b1a3f87039759cf34ff5365027844eb9826ca3c4a196548a"} Jan 05 20:29:10 crc kubenswrapper[4754]: I0105 20:29:10.923170 4754 scope.go:117] "RemoveContainer" containerID="75806887aa0b89ddfa8eaf536ca8c55606e4e7fe67640c6784f001cc5fbe75e4" Jan 05 20:29:10 crc kubenswrapper[4754]: I0105 20:29:10.951528 4754 scope.go:117] "RemoveContainer" containerID="b4bfcdb7ea1bf296770f91c6697b71f02db8ea213590d8232c58c996e57f9690" Jan 05 20:29:10 crc kubenswrapper[4754]: E0105 20:29:10.953436 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4bfcdb7ea1bf296770f91c6697b71f02db8ea213590d8232c58c996e57f9690\": container with ID starting with b4bfcdb7ea1bf296770f91c6697b71f02db8ea213590d8232c58c996e57f9690 not found: ID does not exist" containerID="b4bfcdb7ea1bf296770f91c6697b71f02db8ea213590d8232c58c996e57f9690" Jan 05 20:29:10 crc kubenswrapper[4754]: I0105 20:29:10.953484 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4bfcdb7ea1bf296770f91c6697b71f02db8ea213590d8232c58c996e57f9690"} err="failed to get container status \"b4bfcdb7ea1bf296770f91c6697b71f02db8ea213590d8232c58c996e57f9690\": rpc error: code = NotFound desc = could not find container \"b4bfcdb7ea1bf296770f91c6697b71f02db8ea213590d8232c58c996e57f9690\": container with ID starting with b4bfcdb7ea1bf296770f91c6697b71f02db8ea213590d8232c58c996e57f9690 not found: ID does not exist" Jan 05 20:29:10 crc kubenswrapper[4754]: I0105 20:29:10.953515 4754 scope.go:117] "RemoveContainer" containerID="75806887aa0b89ddfa8eaf536ca8c55606e4e7fe67640c6784f001cc5fbe75e4" Jan 05 20:29:10 crc kubenswrapper[4754]: E0105 
20:29:10.953948 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75806887aa0b89ddfa8eaf536ca8c55606e4e7fe67640c6784f001cc5fbe75e4\": container with ID starting with 75806887aa0b89ddfa8eaf536ca8c55606e4e7fe67640c6784f001cc5fbe75e4 not found: ID does not exist" containerID="75806887aa0b89ddfa8eaf536ca8c55606e4e7fe67640c6784f001cc5fbe75e4" Jan 05 20:29:10 crc kubenswrapper[4754]: I0105 20:29:10.953976 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75806887aa0b89ddfa8eaf536ca8c55606e4e7fe67640c6784f001cc5fbe75e4"} err="failed to get container status \"75806887aa0b89ddfa8eaf536ca8c55606e4e7fe67640c6784f001cc5fbe75e4\": rpc error: code = NotFound desc = could not find container \"75806887aa0b89ddfa8eaf536ca8c55606e4e7fe67640c6784f001cc5fbe75e4\": container with ID starting with 75806887aa0b89ddfa8eaf536ca8c55606e4e7fe67640c6784f001cc5fbe75e4 not found: ID does not exist" Jan 05 20:29:10 crc kubenswrapper[4754]: I0105 20:29:10.954025 4754 scope.go:117] "RemoveContainer" containerID="b4bfcdb7ea1bf296770f91c6697b71f02db8ea213590d8232c58c996e57f9690" Jan 05 20:29:10 crc kubenswrapper[4754]: I0105 20:29:10.955888 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4bfcdb7ea1bf296770f91c6697b71f02db8ea213590d8232c58c996e57f9690"} err="failed to get container status \"b4bfcdb7ea1bf296770f91c6697b71f02db8ea213590d8232c58c996e57f9690\": rpc error: code = NotFound desc = could not find container \"b4bfcdb7ea1bf296770f91c6697b71f02db8ea213590d8232c58c996e57f9690\": container with ID starting with b4bfcdb7ea1bf296770f91c6697b71f02db8ea213590d8232c58c996e57f9690 not found: ID does not exist" Jan 05 20:29:10 crc kubenswrapper[4754]: I0105 20:29:10.956009 4754 scope.go:117] "RemoveContainer" containerID="75806887aa0b89ddfa8eaf536ca8c55606e4e7fe67640c6784f001cc5fbe75e4" Jan 05 20:29:10 crc 
kubenswrapper[4754]: I0105 20:29:10.956576 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75806887aa0b89ddfa8eaf536ca8c55606e4e7fe67640c6784f001cc5fbe75e4"} err="failed to get container status \"75806887aa0b89ddfa8eaf536ca8c55606e4e7fe67640c6784f001cc5fbe75e4\": rpc error: code = NotFound desc = could not find container \"75806887aa0b89ddfa8eaf536ca8c55606e4e7fe67640c6784f001cc5fbe75e4\": container with ID starting with 75806887aa0b89ddfa8eaf536ca8c55606e4e7fe67640c6784f001cc5fbe75e4 not found: ID does not exist" Jan 05 20:29:10 crc kubenswrapper[4754]: I0105 20:29:10.956649 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 05 20:29:10 crc kubenswrapper[4754]: I0105 20:29:10.959536 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-69c489cbf4-npzvh" podUID="9a760b6b-c03a-450c-a624-a918f6f28a8c" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.201:9311/healthcheck\": read tcp 10.217.0.2:42866->10.217.0.201:9311: read: connection reset by peer" Jan 05 20:29:10 crc kubenswrapper[4754]: I0105 20:29:10.959553 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-69c489cbf4-npzvh" podUID="9a760b6b-c03a-450c-a624-a918f6f28a8c" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.201:9311/healthcheck\": read tcp 10.217.0.2:42864->10.217.0.201:9311: read: connection reset by peer" Jan 05 20:29:10 crc kubenswrapper[4754]: I0105 20:29:10.968684 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 05 20:29:10 crc kubenswrapper[4754]: I0105 20:29:10.992030 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 05 20:29:10 crc kubenswrapper[4754]: E0105 20:29:10.995367 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d754693c-f3ef-4b36-b827-f88a11c1b76a" 
containerName="neutron-api" Jan 05 20:29:10 crc kubenswrapper[4754]: I0105 20:29:10.995394 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="d754693c-f3ef-4b36-b827-f88a11c1b76a" containerName="neutron-api" Jan 05 20:29:10 crc kubenswrapper[4754]: E0105 20:29:10.995424 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d754693c-f3ef-4b36-b827-f88a11c1b76a" containerName="neutron-httpd" Jan 05 20:29:10 crc kubenswrapper[4754]: I0105 20:29:10.995433 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="d754693c-f3ef-4b36-b827-f88a11c1b76a" containerName="neutron-httpd" Jan 05 20:29:10 crc kubenswrapper[4754]: E0105 20:29:10.995447 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="083b5c43-4bff-4782-98cb-8cd81687300d" containerName="cinder-api-log" Jan 05 20:29:10 crc kubenswrapper[4754]: I0105 20:29:10.995457 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="083b5c43-4bff-4782-98cb-8cd81687300d" containerName="cinder-api-log" Jan 05 20:29:10 crc kubenswrapper[4754]: E0105 20:29:10.995488 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="083b5c43-4bff-4782-98cb-8cd81687300d" containerName="cinder-api" Jan 05 20:29:10 crc kubenswrapper[4754]: I0105 20:29:10.995496 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="083b5c43-4bff-4782-98cb-8cd81687300d" containerName="cinder-api" Jan 05 20:29:10 crc kubenswrapper[4754]: I0105 20:29:10.995745 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="d754693c-f3ef-4b36-b827-f88a11c1b76a" containerName="neutron-api" Jan 05 20:29:10 crc kubenswrapper[4754]: I0105 20:29:10.995765 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="083b5c43-4bff-4782-98cb-8cd81687300d" containerName="cinder-api" Jan 05 20:29:10 crc kubenswrapper[4754]: I0105 20:29:10.995790 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="d754693c-f3ef-4b36-b827-f88a11c1b76a" containerName="neutron-httpd" Jan 05 
20:29:10 crc kubenswrapper[4754]: I0105 20:29:10.995815 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="083b5c43-4bff-4782-98cb-8cd81687300d" containerName="cinder-api-log" Jan 05 20:29:10 crc kubenswrapper[4754]: I0105 20:29:10.997605 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 05 20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.000371 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 05 20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.000599 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 05 20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.001959 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 05 20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.010603 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 05 20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.096433 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/34d9b453-e1eb-49a6-883e-690d792a9922-etc-machine-id\") pod \"cinder-api-0\" (UID: \"34d9b453-e1eb-49a6-883e-690d792a9922\") " pod="openstack/cinder-api-0" Jan 05 20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.096805 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34d9b453-e1eb-49a6-883e-690d792a9922-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"34d9b453-e1eb-49a6-883e-690d792a9922\") " pod="openstack/cinder-api-0" Jan 05 20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.096855 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/34d9b453-e1eb-49a6-883e-690d792a9922-config-data\") pod \"cinder-api-0\" (UID: \"34d9b453-e1eb-49a6-883e-690d792a9922\") " pod="openstack/cinder-api-0" Jan 05 20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.096893 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34d9b453-e1eb-49a6-883e-690d792a9922-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"34d9b453-e1eb-49a6-883e-690d792a9922\") " pod="openstack/cinder-api-0" Jan 05 20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.097076 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34d9b453-e1eb-49a6-883e-690d792a9922-logs\") pod \"cinder-api-0\" (UID: \"34d9b453-e1eb-49a6-883e-690d792a9922\") " pod="openstack/cinder-api-0" Jan 05 20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.097129 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34d9b453-e1eb-49a6-883e-690d792a9922-public-tls-certs\") pod \"cinder-api-0\" (UID: \"34d9b453-e1eb-49a6-883e-690d792a9922\") " pod="openstack/cinder-api-0" Jan 05 20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.097258 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6597\" (UniqueName: \"kubernetes.io/projected/34d9b453-e1eb-49a6-883e-690d792a9922-kube-api-access-d6597\") pod \"cinder-api-0\" (UID: \"34d9b453-e1eb-49a6-883e-690d792a9922\") " pod="openstack/cinder-api-0" Jan 05 20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.097372 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/34d9b453-e1eb-49a6-883e-690d792a9922-config-data-custom\") pod \"cinder-api-0\" (UID: 
\"34d9b453-e1eb-49a6-883e-690d792a9922\") " pod="openstack/cinder-api-0" Jan 05 20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.097417 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34d9b453-e1eb-49a6-883e-690d792a9922-scripts\") pod \"cinder-api-0\" (UID: \"34d9b453-e1eb-49a6-883e-690d792a9922\") " pod="openstack/cinder-api-0" Jan 05 20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.200439 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34d9b453-e1eb-49a6-883e-690d792a9922-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"34d9b453-e1eb-49a6-883e-690d792a9922\") " pod="openstack/cinder-api-0" Jan 05 20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.200545 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34d9b453-e1eb-49a6-883e-690d792a9922-logs\") pod \"cinder-api-0\" (UID: \"34d9b453-e1eb-49a6-883e-690d792a9922\") " pod="openstack/cinder-api-0" Jan 05 20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.200573 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34d9b453-e1eb-49a6-883e-690d792a9922-public-tls-certs\") pod \"cinder-api-0\" (UID: \"34d9b453-e1eb-49a6-883e-690d792a9922\") " pod="openstack/cinder-api-0" Jan 05 20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.200630 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6597\" (UniqueName: \"kubernetes.io/projected/34d9b453-e1eb-49a6-883e-690d792a9922-kube-api-access-d6597\") pod \"cinder-api-0\" (UID: \"34d9b453-e1eb-49a6-883e-690d792a9922\") " pod="openstack/cinder-api-0" Jan 05 20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.200669 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/34d9b453-e1eb-49a6-883e-690d792a9922-config-data-custom\") pod \"cinder-api-0\" (UID: \"34d9b453-e1eb-49a6-883e-690d792a9922\") " pod="openstack/cinder-api-0" Jan 05 20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.200696 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34d9b453-e1eb-49a6-883e-690d792a9922-scripts\") pod \"cinder-api-0\" (UID: \"34d9b453-e1eb-49a6-883e-690d792a9922\") " pod="openstack/cinder-api-0" Jan 05 20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.200757 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/34d9b453-e1eb-49a6-883e-690d792a9922-etc-machine-id\") pod \"cinder-api-0\" (UID: \"34d9b453-e1eb-49a6-883e-690d792a9922\") " pod="openstack/cinder-api-0" Jan 05 20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.200801 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34d9b453-e1eb-49a6-883e-690d792a9922-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"34d9b453-e1eb-49a6-883e-690d792a9922\") " pod="openstack/cinder-api-0" Jan 05 20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.200850 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34d9b453-e1eb-49a6-883e-690d792a9922-config-data\") pod \"cinder-api-0\" (UID: \"34d9b453-e1eb-49a6-883e-690d792a9922\") " pod="openstack/cinder-api-0" Jan 05 20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.201270 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/34d9b453-e1eb-49a6-883e-690d792a9922-etc-machine-id\") pod \"cinder-api-0\" (UID: \"34d9b453-e1eb-49a6-883e-690d792a9922\") " pod="openstack/cinder-api-0" Jan 05 20:29:11 
crc kubenswrapper[4754]: I0105 20:29:11.202774 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34d9b453-e1eb-49a6-883e-690d792a9922-logs\") pod \"cinder-api-0\" (UID: \"34d9b453-e1eb-49a6-883e-690d792a9922\") " pod="openstack/cinder-api-0" Jan 05 20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.205213 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34d9b453-e1eb-49a6-883e-690d792a9922-scripts\") pod \"cinder-api-0\" (UID: \"34d9b453-e1eb-49a6-883e-690d792a9922\") " pod="openstack/cinder-api-0" Jan 05 20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.205882 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/34d9b453-e1eb-49a6-883e-690d792a9922-config-data-custom\") pod \"cinder-api-0\" (UID: \"34d9b453-e1eb-49a6-883e-690d792a9922\") " pod="openstack/cinder-api-0" Jan 05 20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.206514 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34d9b453-e1eb-49a6-883e-690d792a9922-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"34d9b453-e1eb-49a6-883e-690d792a9922\") " pod="openstack/cinder-api-0" Jan 05 20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.208025 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34d9b453-e1eb-49a6-883e-690d792a9922-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"34d9b453-e1eb-49a6-883e-690d792a9922\") " pod="openstack/cinder-api-0" Jan 05 20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.213780 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34d9b453-e1eb-49a6-883e-690d792a9922-public-tls-certs\") pod \"cinder-api-0\" (UID: 
\"34d9b453-e1eb-49a6-883e-690d792a9922\") " pod="openstack/cinder-api-0" Jan 05 20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.220118 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34d9b453-e1eb-49a6-883e-690d792a9922-config-data\") pod \"cinder-api-0\" (UID: \"34d9b453-e1eb-49a6-883e-690d792a9922\") " pod="openstack/cinder-api-0" Jan 05 20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.221257 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6597\" (UniqueName: \"kubernetes.io/projected/34d9b453-e1eb-49a6-883e-690d792a9922-kube-api-access-d6597\") pod \"cinder-api-0\" (UID: \"34d9b453-e1eb-49a6-883e-690d792a9922\") " pod="openstack/cinder-api-0" Jan 05 20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.458016 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 05 20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.498033 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-69c489cbf4-npzvh" Jan 05 20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.607702 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9a760b6b-c03a-450c-a624-a918f6f28a8c-config-data-custom\") pod \"9a760b6b-c03a-450c-a624-a918f6f28a8c\" (UID: \"9a760b6b-c03a-450c-a624-a918f6f28a8c\") " Jan 05 20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.608174 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gmqx\" (UniqueName: \"kubernetes.io/projected/9a760b6b-c03a-450c-a624-a918f6f28a8c-kube-api-access-7gmqx\") pod \"9a760b6b-c03a-450c-a624-a918f6f28a8c\" (UID: \"9a760b6b-c03a-450c-a624-a918f6f28a8c\") " Jan 05 20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.608257 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a760b6b-c03a-450c-a624-a918f6f28a8c-logs\") pod \"9a760b6b-c03a-450c-a624-a918f6f28a8c\" (UID: \"9a760b6b-c03a-450c-a624-a918f6f28a8c\") " Jan 05 20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.608281 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a760b6b-c03a-450c-a624-a918f6f28a8c-combined-ca-bundle\") pod \"9a760b6b-c03a-450c-a624-a918f6f28a8c\" (UID: \"9a760b6b-c03a-450c-a624-a918f6f28a8c\") " Jan 05 20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.608503 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a760b6b-c03a-450c-a624-a918f6f28a8c-config-data\") pod \"9a760b6b-c03a-450c-a624-a918f6f28a8c\" (UID: \"9a760b6b-c03a-450c-a624-a918f6f28a8c\") " Jan 05 20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.608697 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="083b5c43-4bff-4782-98cb-8cd81687300d" path="/var/lib/kubelet/pods/083b5c43-4bff-4782-98cb-8cd81687300d/volumes" Jan 05 20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.609029 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a760b6b-c03a-450c-a624-a918f6f28a8c-logs" (OuterVolumeSpecName: "logs") pod "9a760b6b-c03a-450c-a624-a918f6f28a8c" (UID: "9a760b6b-c03a-450c-a624-a918f6f28a8c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.609586 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d754693c-f3ef-4b36-b827-f88a11c1b76a" path="/var/lib/kubelet/pods/d754693c-f3ef-4b36-b827-f88a11c1b76a/volumes" Jan 05 20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.614981 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a760b6b-c03a-450c-a624-a918f6f28a8c-kube-api-access-7gmqx" (OuterVolumeSpecName: "kube-api-access-7gmqx") pod "9a760b6b-c03a-450c-a624-a918f6f28a8c" (UID: "9a760b6b-c03a-450c-a624-a918f6f28a8c"). InnerVolumeSpecName "kube-api-access-7gmqx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.614991 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a760b6b-c03a-450c-a624-a918f6f28a8c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9a760b6b-c03a-450c-a624-a918f6f28a8c" (UID: "9a760b6b-c03a-450c-a624-a918f6f28a8c"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.644488 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a760b6b-c03a-450c-a624-a918f6f28a8c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9a760b6b-c03a-450c-a624-a918f6f28a8c" (UID: "9a760b6b-c03a-450c-a624-a918f6f28a8c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.697500 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a760b6b-c03a-450c-a624-a918f6f28a8c-config-data" (OuterVolumeSpecName: "config-data") pod "9a760b6b-c03a-450c-a624-a918f6f28a8c" (UID: "9a760b6b-c03a-450c-a624-a918f6f28a8c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.711657 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a760b6b-c03a-450c-a624-a918f6f28a8c-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.711691 4754 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9a760b6b-c03a-450c-a624-a918f6f28a8c-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.711809 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gmqx\" (UniqueName: \"kubernetes.io/projected/9a760b6b-c03a-450c-a624-a918f6f28a8c-kube-api-access-7gmqx\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.712143 4754 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a760b6b-c03a-450c-a624-a918f6f28a8c-logs\") on node \"crc\" DevicePath \"\"" Jan 05 
20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.712193 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a760b6b-c03a-450c-a624-a918f6f28a8c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.918965 4754 generic.go:334] "Generic (PLEG): container finished" podID="9a760b6b-c03a-450c-a624-a918f6f28a8c" containerID="467ed58cd929a0332540b09c263286f673b80ba5249574b7caca9e1c38200f3e" exitCode=0 Jan 05 20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.919057 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-69c489cbf4-npzvh" Jan 05 20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.919067 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-69c489cbf4-npzvh" event={"ID":"9a760b6b-c03a-450c-a624-a918f6f28a8c","Type":"ContainerDied","Data":"467ed58cd929a0332540b09c263286f673b80ba5249574b7caca9e1c38200f3e"} Jan 05 20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.920960 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-69c489cbf4-npzvh" event={"ID":"9a760b6b-c03a-450c-a624-a918f6f28a8c","Type":"ContainerDied","Data":"a79101fa067c51c3840b0d804254daa4635c81cebfe58addb07e6b0eeb25e286"} Jan 05 20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.921000 4754 scope.go:117] "RemoveContainer" containerID="467ed58cd929a0332540b09c263286f673b80ba5249574b7caca9e1c38200f3e" Jan 05 20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.954232 4754 scope.go:117] "RemoveContainer" containerID="bc4304f30c418c0efb44a1f23d05f3e2b5cf007eedb2f1c729ec21770dfa03c1" Jan 05 20:29:11 crc kubenswrapper[4754]: I0105 20:29:11.983900 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 05 20:29:12 crc kubenswrapper[4754]: I0105 20:29:12.143269 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/barbican-api-69c489cbf4-npzvh"] Jan 05 20:29:12 crc kubenswrapper[4754]: I0105 20:29:12.158710 4754 scope.go:117] "RemoveContainer" containerID="467ed58cd929a0332540b09c263286f673b80ba5249574b7caca9e1c38200f3e" Jan 05 20:29:12 crc kubenswrapper[4754]: E0105 20:29:12.159124 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"467ed58cd929a0332540b09c263286f673b80ba5249574b7caca9e1c38200f3e\": container with ID starting with 467ed58cd929a0332540b09c263286f673b80ba5249574b7caca9e1c38200f3e not found: ID does not exist" containerID="467ed58cd929a0332540b09c263286f673b80ba5249574b7caca9e1c38200f3e" Jan 05 20:29:12 crc kubenswrapper[4754]: I0105 20:29:12.159183 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"467ed58cd929a0332540b09c263286f673b80ba5249574b7caca9e1c38200f3e"} err="failed to get container status \"467ed58cd929a0332540b09c263286f673b80ba5249574b7caca9e1c38200f3e\": rpc error: code = NotFound desc = could not find container \"467ed58cd929a0332540b09c263286f673b80ba5249574b7caca9e1c38200f3e\": container with ID starting with 467ed58cd929a0332540b09c263286f673b80ba5249574b7caca9e1c38200f3e not found: ID does not exist" Jan 05 20:29:12 crc kubenswrapper[4754]: I0105 20:29:12.159231 4754 scope.go:117] "RemoveContainer" containerID="bc4304f30c418c0efb44a1f23d05f3e2b5cf007eedb2f1c729ec21770dfa03c1" Jan 05 20:29:12 crc kubenswrapper[4754]: E0105 20:29:12.160230 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc4304f30c418c0efb44a1f23d05f3e2b5cf007eedb2f1c729ec21770dfa03c1\": container with ID starting with bc4304f30c418c0efb44a1f23d05f3e2b5cf007eedb2f1c729ec21770dfa03c1 not found: ID does not exist" containerID="bc4304f30c418c0efb44a1f23d05f3e2b5cf007eedb2f1c729ec21770dfa03c1" Jan 05 20:29:12 crc kubenswrapper[4754]: I0105 20:29:12.160320 4754 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc4304f30c418c0efb44a1f23d05f3e2b5cf007eedb2f1c729ec21770dfa03c1"} err="failed to get container status \"bc4304f30c418c0efb44a1f23d05f3e2b5cf007eedb2f1c729ec21770dfa03c1\": rpc error: code = NotFound desc = could not find container \"bc4304f30c418c0efb44a1f23d05f3e2b5cf007eedb2f1c729ec21770dfa03c1\": container with ID starting with bc4304f30c418c0efb44a1f23d05f3e2b5cf007eedb2f1c729ec21770dfa03c1 not found: ID does not exist" Jan 05 20:29:12 crc kubenswrapper[4754]: I0105 20:29:12.163198 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-69c489cbf4-npzvh"] Jan 05 20:29:12 crc kubenswrapper[4754]: I0105 20:29:12.942771 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"34d9b453-e1eb-49a6-883e-690d792a9922","Type":"ContainerStarted","Data":"370fcc3591b45b9df72713148407cc492b4a6137c783e8e3d7cced72ea94b28b"} Jan 05 20:29:12 crc kubenswrapper[4754]: I0105 20:29:12.943213 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"34d9b453-e1eb-49a6-883e-690d792a9922","Type":"ContainerStarted","Data":"71c195607ff0b5fdd7ac62d6f3dcaa1991ee532ce8c316a21210e8140d16ef9f"} Jan 05 20:29:12 crc kubenswrapper[4754]: I0105 20:29:12.947655 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"84bba751-7ae7-4f46-9673-fb30b2ba2496","Type":"ContainerStarted","Data":"687fafa7cd79de691a485d8eaa980dd12b8f1c241eb506c55ef2fa560f073c42"} Jan 05 20:29:12 crc kubenswrapper[4754]: I0105 20:29:12.948064 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 05 20:29:12 crc kubenswrapper[4754]: I0105 20:29:12.976024 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.34200739 podStartE2EDuration="6.97600446s" 
podCreationTimestamp="2026-01-05 20:29:06 +0000 UTC" firstStartedPulling="2026-01-05 20:29:08.112501657 +0000 UTC m=+1434.821685531" lastFinishedPulling="2026-01-05 20:29:11.746498737 +0000 UTC m=+1438.455682601" observedRunningTime="2026-01-05 20:29:12.973161546 +0000 UTC m=+1439.682345430" watchObservedRunningTime="2026-01-05 20:29:12.97600446 +0000 UTC m=+1439.685188344" Jan 05 20:29:13 crc kubenswrapper[4754]: I0105 20:29:13.622065 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a760b6b-c03a-450c-a624-a918f6f28a8c" path="/var/lib/kubelet/pods/9a760b6b-c03a-450c-a624-a918f6f28a8c/volumes" Jan 05 20:29:13 crc kubenswrapper[4754]: I0105 20:29:13.982028 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"34d9b453-e1eb-49a6-883e-690d792a9922","Type":"ContainerStarted","Data":"12efbfb275f293d0c74f1d1bfb3a4618e9216fb7c9edebc24b9d5e21a04051cc"} Jan 05 20:29:13 crc kubenswrapper[4754]: I0105 20:29:13.982344 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 05 20:29:14 crc kubenswrapper[4754]: I0105 20:29:14.014696 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.014672544 podStartE2EDuration="4.014672544s" podCreationTimestamp="2026-01-05 20:29:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:29:14.000282897 +0000 UTC m=+1440.709466781" watchObservedRunningTime="2026-01-05 20:29:14.014672544 +0000 UTC m=+1440.723856428" Jan 05 20:29:14 crc kubenswrapper[4754]: I0105 20:29:14.130640 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 05 20:29:14 crc kubenswrapper[4754]: I0105 20:29:14.529779 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 05 
20:29:14 crc kubenswrapper[4754]: I0105 20:29:14.699145 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bb4fc677f-wwjkm" Jan 05 20:29:14 crc kubenswrapper[4754]: I0105 20:29:14.833403 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-v7ttl"] Jan 05 20:29:14 crc kubenswrapper[4754]: I0105 20:29:14.834468 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-688c87cc99-v7ttl" podUID="8c33e0a9-6682-4462-bb72-e4d257a09450" containerName="dnsmasq-dns" containerID="cri-o://8ddadd8434f720ca960ece5aadd7630b23d833289603abb05539b3588a004367" gracePeriod=10 Jan 05 20:29:15 crc kubenswrapper[4754]: I0105 20:29:15.052701 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 05 20:29:16 crc kubenswrapper[4754]: I0105 20:29:16.017222 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="a5623850-560b-42ca-9b3e-2e690fb5e7af" containerName="probe" containerID="cri-o://b519839458d1267c46eab42c7cac9e2222c60ab04de850b61b4b50b0594a3cad" gracePeriod=30 Jan 05 20:29:16 crc kubenswrapper[4754]: I0105 20:29:16.018466 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="a5623850-560b-42ca-9b3e-2e690fb5e7af" containerName="cinder-scheduler" containerID="cri-o://d6b389a035b18c412981cbbb856938ae8ba46f708d5d96509e2f60ed6f743d23" gracePeriod=30 Jan 05 20:29:17 crc kubenswrapper[4754]: I0105 20:29:17.717822 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-688c87cc99-v7ttl" podUID="8c33e0a9-6682-4462-bb72-e4d257a09450" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.200:5353: connect: connection refused" Jan 05 20:29:18 crc kubenswrapper[4754]: I0105 20:29:18.040641 4754 generic.go:334] "Generic (PLEG): container finished" 
podID="a5623850-560b-42ca-9b3e-2e690fb5e7af" containerID="b519839458d1267c46eab42c7cac9e2222c60ab04de850b61b4b50b0594a3cad" exitCode=0 Jan 05 20:29:18 crc kubenswrapper[4754]: I0105 20:29:18.040680 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a5623850-560b-42ca-9b3e-2e690fb5e7af","Type":"ContainerDied","Data":"b519839458d1267c46eab42c7cac9e2222c60ab04de850b61b4b50b0594a3cad"} Jan 05 20:29:18 crc kubenswrapper[4754]: I0105 20:29:18.043146 4754 generic.go:334] "Generic (PLEG): container finished" podID="8c33e0a9-6682-4462-bb72-e4d257a09450" containerID="8ddadd8434f720ca960ece5aadd7630b23d833289603abb05539b3588a004367" exitCode=0 Jan 05 20:29:18 crc kubenswrapper[4754]: I0105 20:29:18.043186 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-v7ttl" event={"ID":"8c33e0a9-6682-4462-bb72-e4d257a09450","Type":"ContainerDied","Data":"8ddadd8434f720ca960ece5aadd7630b23d833289603abb05539b3588a004367"} Jan 05 20:29:19 crc kubenswrapper[4754]: I0105 20:29:19.027440 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7ffdc87456-jd4rc" Jan 05 20:29:19 crc kubenswrapper[4754]: I0105 20:29:19.060033 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7ffdc87456-jd4rc" Jan 05 20:29:19 crc kubenswrapper[4754]: I0105 20:29:19.187869 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-v7ttl" Jan 05 20:29:19 crc kubenswrapper[4754]: I0105 20:29:19.330490 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c33e0a9-6682-4462-bb72-e4d257a09450-config\") pod \"8c33e0a9-6682-4462-bb72-e4d257a09450\" (UID: \"8c33e0a9-6682-4462-bb72-e4d257a09450\") " Jan 05 20:29:19 crc kubenswrapper[4754]: I0105 20:29:19.330542 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c33e0a9-6682-4462-bb72-e4d257a09450-dns-svc\") pod \"8c33e0a9-6682-4462-bb72-e4d257a09450\" (UID: \"8c33e0a9-6682-4462-bb72-e4d257a09450\") " Jan 05 20:29:19 crc kubenswrapper[4754]: I0105 20:29:19.330579 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c33e0a9-6682-4462-bb72-e4d257a09450-ovsdbserver-sb\") pod \"8c33e0a9-6682-4462-bb72-e4d257a09450\" (UID: \"8c33e0a9-6682-4462-bb72-e4d257a09450\") " Jan 05 20:29:19 crc kubenswrapper[4754]: I0105 20:29:19.330645 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c33e0a9-6682-4462-bb72-e4d257a09450-dns-swift-storage-0\") pod \"8c33e0a9-6682-4462-bb72-e4d257a09450\" (UID: \"8c33e0a9-6682-4462-bb72-e4d257a09450\") " Jan 05 20:29:19 crc kubenswrapper[4754]: I0105 20:29:19.330771 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fltjp\" (UniqueName: \"kubernetes.io/projected/8c33e0a9-6682-4462-bb72-e4d257a09450-kube-api-access-fltjp\") pod \"8c33e0a9-6682-4462-bb72-e4d257a09450\" (UID: \"8c33e0a9-6682-4462-bb72-e4d257a09450\") " Jan 05 20:29:19 crc kubenswrapper[4754]: I0105 20:29:19.330811 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/8c33e0a9-6682-4462-bb72-e4d257a09450-ovsdbserver-nb\") pod \"8c33e0a9-6682-4462-bb72-e4d257a09450\" (UID: \"8c33e0a9-6682-4462-bb72-e4d257a09450\") " Jan 05 20:29:19 crc kubenswrapper[4754]: I0105 20:29:19.351953 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c33e0a9-6682-4462-bb72-e4d257a09450-kube-api-access-fltjp" (OuterVolumeSpecName: "kube-api-access-fltjp") pod "8c33e0a9-6682-4462-bb72-e4d257a09450" (UID: "8c33e0a9-6682-4462-bb72-e4d257a09450"). InnerVolumeSpecName "kube-api-access-fltjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:29:19 crc kubenswrapper[4754]: I0105 20:29:19.407337 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c33e0a9-6682-4462-bb72-e4d257a09450-config" (OuterVolumeSpecName: "config") pod "8c33e0a9-6682-4462-bb72-e4d257a09450" (UID: "8c33e0a9-6682-4462-bb72-e4d257a09450"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:29:19 crc kubenswrapper[4754]: I0105 20:29:19.433072 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c33e0a9-6682-4462-bb72-e4d257a09450-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:19 crc kubenswrapper[4754]: I0105 20:29:19.433104 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fltjp\" (UniqueName: \"kubernetes.io/projected/8c33e0a9-6682-4462-bb72-e4d257a09450-kube-api-access-fltjp\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:19 crc kubenswrapper[4754]: I0105 20:29:19.439281 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c33e0a9-6682-4462-bb72-e4d257a09450-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8c33e0a9-6682-4462-bb72-e4d257a09450" (UID: "8c33e0a9-6682-4462-bb72-e4d257a09450"). 
InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:29:19 crc kubenswrapper[4754]: I0105 20:29:19.467631 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c33e0a9-6682-4462-bb72-e4d257a09450-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8c33e0a9-6682-4462-bb72-e4d257a09450" (UID: "8c33e0a9-6682-4462-bb72-e4d257a09450"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:29:19 crc kubenswrapper[4754]: I0105 20:29:19.469952 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c33e0a9-6682-4462-bb72-e4d257a09450-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8c33e0a9-6682-4462-bb72-e4d257a09450" (UID: "8c33e0a9-6682-4462-bb72-e4d257a09450"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:29:19 crc kubenswrapper[4754]: I0105 20:29:19.484276 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c33e0a9-6682-4462-bb72-e4d257a09450-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8c33e0a9-6682-4462-bb72-e4d257a09450" (UID: "8c33e0a9-6682-4462-bb72-e4d257a09450"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:29:19 crc kubenswrapper[4754]: I0105 20:29:19.535308 4754 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c33e0a9-6682-4462-bb72-e4d257a09450-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:19 crc kubenswrapper[4754]: I0105 20:29:19.535560 4754 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c33e0a9-6682-4462-bb72-e4d257a09450-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:19 crc kubenswrapper[4754]: I0105 20:29:19.535579 4754 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c33e0a9-6682-4462-bb72-e4d257a09450-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:19 crc kubenswrapper[4754]: I0105 20:29:19.535592 4754 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c33e0a9-6682-4462-bb72-e4d257a09450-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:19 crc kubenswrapper[4754]: I0105 20:29:19.879101 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-67844b756c-gvtvn" Jan 05 20:29:20 crc kubenswrapper[4754]: I0105 20:29:20.075801 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-v7ttl" event={"ID":"8c33e0a9-6682-4462-bb72-e4d257a09450","Type":"ContainerDied","Data":"8c2e8426f02b8cf8fcd5dd463caf0ade6ad6ff8677ae4d5587cc46d971e46d7c"} Jan 05 20:29:20 crc kubenswrapper[4754]: I0105 20:29:20.075859 4754 scope.go:117] "RemoveContainer" containerID="8ddadd8434f720ca960ece5aadd7630b23d833289603abb05539b3588a004367" Jan 05 20:29:20 crc kubenswrapper[4754]: I0105 20:29:20.076045 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-v7ttl" Jan 05 20:29:20 crc kubenswrapper[4754]: I0105 20:29:20.112708 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-v7ttl"] Jan 05 20:29:20 crc kubenswrapper[4754]: I0105 20:29:20.126909 4754 scope.go:117] "RemoveContainer" containerID="1b86205aaa484de6ee83c4a1f114950826b27f79ddec447bf4c6712597dd319d" Jan 05 20:29:20 crc kubenswrapper[4754]: I0105 20:29:20.137579 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-v7ttl"] Jan 05 20:29:21 crc kubenswrapper[4754]: I0105 20:29:21.106628 4754 generic.go:334] "Generic (PLEG): container finished" podID="a5623850-560b-42ca-9b3e-2e690fb5e7af" containerID="d6b389a035b18c412981cbbb856938ae8ba46f708d5d96509e2f60ed6f743d23" exitCode=0 Jan 05 20:29:21 crc kubenswrapper[4754]: I0105 20:29:21.106759 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a5623850-560b-42ca-9b3e-2e690fb5e7af","Type":"ContainerDied","Data":"d6b389a035b18c412981cbbb856938ae8ba46f708d5d96509e2f60ed6f743d23"} Jan 05 20:29:21 crc kubenswrapper[4754]: I0105 20:29:21.282150 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 05 20:29:21 crc kubenswrapper[4754]: E0105 20:29:21.283532 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c33e0a9-6682-4462-bb72-e4d257a09450" containerName="init" Jan 05 20:29:21 crc kubenswrapper[4754]: I0105 20:29:21.283583 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c33e0a9-6682-4462-bb72-e4d257a09450" containerName="init" Jan 05 20:29:21 crc kubenswrapper[4754]: E0105 20:29:21.283606 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a760b6b-c03a-450c-a624-a918f6f28a8c" containerName="barbican-api" Jan 05 20:29:21 crc kubenswrapper[4754]: I0105 20:29:21.283614 4754 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9a760b6b-c03a-450c-a624-a918f6f28a8c" containerName="barbican-api" Jan 05 20:29:21 crc kubenswrapper[4754]: E0105 20:29:21.283640 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a760b6b-c03a-450c-a624-a918f6f28a8c" containerName="barbican-api-log" Jan 05 20:29:21 crc kubenswrapper[4754]: I0105 20:29:21.283647 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a760b6b-c03a-450c-a624-a918f6f28a8c" containerName="barbican-api-log" Jan 05 20:29:21 crc kubenswrapper[4754]: E0105 20:29:21.283690 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c33e0a9-6682-4462-bb72-e4d257a09450" containerName="dnsmasq-dns" Jan 05 20:29:21 crc kubenswrapper[4754]: I0105 20:29:21.283697 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c33e0a9-6682-4462-bb72-e4d257a09450" containerName="dnsmasq-dns" Jan 05 20:29:21 crc kubenswrapper[4754]: I0105 20:29:21.284067 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c33e0a9-6682-4462-bb72-e4d257a09450" containerName="dnsmasq-dns" Jan 05 20:29:21 crc kubenswrapper[4754]: I0105 20:29:21.284089 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a760b6b-c03a-450c-a624-a918f6f28a8c" containerName="barbican-api" Jan 05 20:29:21 crc kubenswrapper[4754]: I0105 20:29:21.284107 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a760b6b-c03a-450c-a624-a918f6f28a8c" containerName="barbican-api-log" Jan 05 20:29:21 crc kubenswrapper[4754]: I0105 20:29:21.285173 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 05 20:29:21 crc kubenswrapper[4754]: I0105 20:29:21.288924 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-skk44" Jan 05 20:29:21 crc kubenswrapper[4754]: I0105 20:29:21.289142 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 05 20:29:21 crc kubenswrapper[4754]: I0105 20:29:21.289331 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 05 20:29:21 crc kubenswrapper[4754]: I0105 20:29:21.306117 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 05 20:29:21 crc kubenswrapper[4754]: I0105 20:29:21.387869 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/91975736-d23e-479b-bd43-68b9b1b3e450-openstack-config-secret\") pod \"openstackclient\" (UID: \"91975736-d23e-479b-bd43-68b9b1b3e450\") " pod="openstack/openstackclient" Jan 05 20:29:21 crc kubenswrapper[4754]: I0105 20:29:21.388040 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/91975736-d23e-479b-bd43-68b9b1b3e450-openstack-config\") pod \"openstackclient\" (UID: \"91975736-d23e-479b-bd43-68b9b1b3e450\") " pod="openstack/openstackclient" Jan 05 20:29:21 crc kubenswrapper[4754]: I0105 20:29:21.388158 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91975736-d23e-479b-bd43-68b9b1b3e450-combined-ca-bundle\") pod \"openstackclient\" (UID: \"91975736-d23e-479b-bd43-68b9b1b3e450\") " pod="openstack/openstackclient" Jan 05 20:29:21 crc kubenswrapper[4754]: I0105 20:29:21.388252 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c72n2\" (UniqueName: \"kubernetes.io/projected/91975736-d23e-479b-bd43-68b9b1b3e450-kube-api-access-c72n2\") pod \"openstackclient\" (UID: \"91975736-d23e-479b-bd43-68b9b1b3e450\") " pod="openstack/openstackclient" Jan 05 20:29:21 crc kubenswrapper[4754]: I0105 20:29:21.500441 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/91975736-d23e-479b-bd43-68b9b1b3e450-openstack-config-secret\") pod \"openstackclient\" (UID: \"91975736-d23e-479b-bd43-68b9b1b3e450\") " pod="openstack/openstackclient" Jan 05 20:29:21 crc kubenswrapper[4754]: I0105 20:29:21.500520 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/91975736-d23e-479b-bd43-68b9b1b3e450-openstack-config\") pod \"openstackclient\" (UID: \"91975736-d23e-479b-bd43-68b9b1b3e450\") " pod="openstack/openstackclient" Jan 05 20:29:21 crc kubenswrapper[4754]: I0105 20:29:21.500600 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91975736-d23e-479b-bd43-68b9b1b3e450-combined-ca-bundle\") pod \"openstackclient\" (UID: \"91975736-d23e-479b-bd43-68b9b1b3e450\") " pod="openstack/openstackclient" Jan 05 20:29:21 crc kubenswrapper[4754]: I0105 20:29:21.500654 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c72n2\" (UniqueName: \"kubernetes.io/projected/91975736-d23e-479b-bd43-68b9b1b3e450-kube-api-access-c72n2\") pod \"openstackclient\" (UID: \"91975736-d23e-479b-bd43-68b9b1b3e450\") " pod="openstack/openstackclient" Jan 05 20:29:21 crc kubenswrapper[4754]: I0105 20:29:21.503130 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/91975736-d23e-479b-bd43-68b9b1b3e450-openstack-config\") pod \"openstackclient\" (UID: \"91975736-d23e-479b-bd43-68b9b1b3e450\") " pod="openstack/openstackclient" Jan 05 20:29:21 crc kubenswrapper[4754]: I0105 20:29:21.508051 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91975736-d23e-479b-bd43-68b9b1b3e450-combined-ca-bundle\") pod \"openstackclient\" (UID: \"91975736-d23e-479b-bd43-68b9b1b3e450\") " pod="openstack/openstackclient" Jan 05 20:29:21 crc kubenswrapper[4754]: I0105 20:29:21.544400 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c72n2\" (UniqueName: \"kubernetes.io/projected/91975736-d23e-479b-bd43-68b9b1b3e450-kube-api-access-c72n2\") pod \"openstackclient\" (UID: \"91975736-d23e-479b-bd43-68b9b1b3e450\") " pod="openstack/openstackclient" Jan 05 20:29:21 crc kubenswrapper[4754]: I0105 20:29:21.546131 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/91975736-d23e-479b-bd43-68b9b1b3e450-openstack-config-secret\") pod \"openstackclient\" (UID: \"91975736-d23e-479b-bd43-68b9b1b3e450\") " pod="openstack/openstackclient" Jan 05 20:29:21 crc kubenswrapper[4754]: I0105 20:29:21.606926 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c33e0a9-6682-4462-bb72-e4d257a09450" path="/var/lib/kubelet/pods/8c33e0a9-6682-4462-bb72-e4d257a09450/volumes" Jan 05 20:29:21 crc kubenswrapper[4754]: I0105 20:29:21.618385 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 05 20:29:21 crc kubenswrapper[4754]: I0105 20:29:21.757724 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 05 20:29:21 crc kubenswrapper[4754]: I0105 20:29:21.914660 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vc8g\" (UniqueName: \"kubernetes.io/projected/a5623850-560b-42ca-9b3e-2e690fb5e7af-kube-api-access-9vc8g\") pod \"a5623850-560b-42ca-9b3e-2e690fb5e7af\" (UID: \"a5623850-560b-42ca-9b3e-2e690fb5e7af\") " Jan 05 20:29:21 crc kubenswrapper[4754]: I0105 20:29:21.914738 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5623850-560b-42ca-9b3e-2e690fb5e7af-combined-ca-bundle\") pod \"a5623850-560b-42ca-9b3e-2e690fb5e7af\" (UID: \"a5623850-560b-42ca-9b3e-2e690fb5e7af\") " Jan 05 20:29:21 crc kubenswrapper[4754]: I0105 20:29:21.914846 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5623850-560b-42ca-9b3e-2e690fb5e7af-config-data\") pod \"a5623850-560b-42ca-9b3e-2e690fb5e7af\" (UID: \"a5623850-560b-42ca-9b3e-2e690fb5e7af\") " Jan 05 20:29:21 crc kubenswrapper[4754]: I0105 20:29:21.914900 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a5623850-560b-42ca-9b3e-2e690fb5e7af-config-data-custom\") pod \"a5623850-560b-42ca-9b3e-2e690fb5e7af\" (UID: \"a5623850-560b-42ca-9b3e-2e690fb5e7af\") " Jan 05 20:29:21 crc kubenswrapper[4754]: I0105 20:29:21.914924 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a5623850-560b-42ca-9b3e-2e690fb5e7af-etc-machine-id\") pod \"a5623850-560b-42ca-9b3e-2e690fb5e7af\" (UID: \"a5623850-560b-42ca-9b3e-2e690fb5e7af\") " Jan 05 20:29:21 crc kubenswrapper[4754]: I0105 20:29:21.915006 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/a5623850-560b-42ca-9b3e-2e690fb5e7af-scripts\") pod \"a5623850-560b-42ca-9b3e-2e690fb5e7af\" (UID: \"a5623850-560b-42ca-9b3e-2e690fb5e7af\") " Jan 05 20:29:21 crc kubenswrapper[4754]: I0105 20:29:21.915134 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5623850-560b-42ca-9b3e-2e690fb5e7af-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a5623850-560b-42ca-9b3e-2e690fb5e7af" (UID: "a5623850-560b-42ca-9b3e-2e690fb5e7af"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 20:29:21 crc kubenswrapper[4754]: I0105 20:29:21.915898 4754 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a5623850-560b-42ca-9b3e-2e690fb5e7af-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:21 crc kubenswrapper[4754]: I0105 20:29:21.922390 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5623850-560b-42ca-9b3e-2e690fb5e7af-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a5623850-560b-42ca-9b3e-2e690fb5e7af" (UID: "a5623850-560b-42ca-9b3e-2e690fb5e7af"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:29:21 crc kubenswrapper[4754]: I0105 20:29:21.924211 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5623850-560b-42ca-9b3e-2e690fb5e7af-scripts" (OuterVolumeSpecName: "scripts") pod "a5623850-560b-42ca-9b3e-2e690fb5e7af" (UID: "a5623850-560b-42ca-9b3e-2e690fb5e7af"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:29:21 crc kubenswrapper[4754]: I0105 20:29:21.941556 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5623850-560b-42ca-9b3e-2e690fb5e7af-kube-api-access-9vc8g" (OuterVolumeSpecName: "kube-api-access-9vc8g") pod "a5623850-560b-42ca-9b3e-2e690fb5e7af" (UID: "a5623850-560b-42ca-9b3e-2e690fb5e7af"). InnerVolumeSpecName "kube-api-access-9vc8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:29:22 crc kubenswrapper[4754]: I0105 20:29:22.009417 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5623850-560b-42ca-9b3e-2e690fb5e7af-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a5623850-560b-42ca-9b3e-2e690fb5e7af" (UID: "a5623850-560b-42ca-9b3e-2e690fb5e7af"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:29:22 crc kubenswrapper[4754]: I0105 20:29:22.018791 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vc8g\" (UniqueName: \"kubernetes.io/projected/a5623850-560b-42ca-9b3e-2e690fb5e7af-kube-api-access-9vc8g\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:22 crc kubenswrapper[4754]: I0105 20:29:22.018825 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5623850-560b-42ca-9b3e-2e690fb5e7af-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:22 crc kubenswrapper[4754]: I0105 20:29:22.018833 4754 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a5623850-560b-42ca-9b3e-2e690fb5e7af-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:22 crc kubenswrapper[4754]: I0105 20:29:22.018843 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/a5623850-560b-42ca-9b3e-2e690fb5e7af-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:22 crc kubenswrapper[4754]: I0105 20:29:22.042593 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5623850-560b-42ca-9b3e-2e690fb5e7af-config-data" (OuterVolumeSpecName: "config-data") pod "a5623850-560b-42ca-9b3e-2e690fb5e7af" (UID: "a5623850-560b-42ca-9b3e-2e690fb5e7af"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:29:22 crc kubenswrapper[4754]: I0105 20:29:22.127278 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a5623850-560b-42ca-9b3e-2e690fb5e7af","Type":"ContainerDied","Data":"3a91d992ba23b4ce6be7fb70bc79976355b25f59560e2b5d32591b2b539831c5"} Jan 05 20:29:22 crc kubenswrapper[4754]: I0105 20:29:22.127359 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 05 20:29:22 crc kubenswrapper[4754]: I0105 20:29:22.127383 4754 scope.go:117] "RemoveContainer" containerID="b519839458d1267c46eab42c7cac9e2222c60ab04de850b61b4b50b0594a3cad" Jan 05 20:29:22 crc kubenswrapper[4754]: I0105 20:29:22.131707 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5623850-560b-42ca-9b3e-2e690fb5e7af-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:22 crc kubenswrapper[4754]: I0105 20:29:22.163018 4754 scope.go:117] "RemoveContainer" containerID="d6b389a035b18c412981cbbb856938ae8ba46f708d5d96509e2f60ed6f743d23" Jan 05 20:29:22 crc kubenswrapper[4754]: I0105 20:29:22.171233 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 05 20:29:22 crc kubenswrapper[4754]: I0105 20:29:22.200003 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 05 20:29:22 crc kubenswrapper[4754]: I0105 
20:29:22.215666 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 05 20:29:22 crc kubenswrapper[4754]: E0105 20:29:22.216216 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5623850-560b-42ca-9b3e-2e690fb5e7af" containerName="cinder-scheduler" Jan 05 20:29:22 crc kubenswrapper[4754]: I0105 20:29:22.216232 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5623850-560b-42ca-9b3e-2e690fb5e7af" containerName="cinder-scheduler" Jan 05 20:29:22 crc kubenswrapper[4754]: E0105 20:29:22.216267 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5623850-560b-42ca-9b3e-2e690fb5e7af" containerName="probe" Jan 05 20:29:22 crc kubenswrapper[4754]: I0105 20:29:22.216273 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5623850-560b-42ca-9b3e-2e690fb5e7af" containerName="probe" Jan 05 20:29:22 crc kubenswrapper[4754]: I0105 20:29:22.216528 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5623850-560b-42ca-9b3e-2e690fb5e7af" containerName="probe" Jan 05 20:29:22 crc kubenswrapper[4754]: I0105 20:29:22.216538 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5623850-560b-42ca-9b3e-2e690fb5e7af" containerName="cinder-scheduler" Jan 05 20:29:22 crc kubenswrapper[4754]: I0105 20:29:22.218560 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 05 20:29:22 crc kubenswrapper[4754]: I0105 20:29:22.224922 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 05 20:29:22 crc kubenswrapper[4754]: I0105 20:29:22.248779 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 05 20:29:22 crc kubenswrapper[4754]: I0105 20:29:22.277459 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 05 20:29:22 crc kubenswrapper[4754]: I0105 20:29:22.341924 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d71b2d3-db78-4e52-af70-e5108d39502b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2d71b2d3-db78-4e52-af70-e5108d39502b\") " pod="openstack/cinder-scheduler-0" Jan 05 20:29:22 crc kubenswrapper[4754]: I0105 20:29:22.342005 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d71b2d3-db78-4e52-af70-e5108d39502b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2d71b2d3-db78-4e52-af70-e5108d39502b\") " pod="openstack/cinder-scheduler-0" Jan 05 20:29:22 crc kubenswrapper[4754]: I0105 20:29:22.342354 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d71b2d3-db78-4e52-af70-e5108d39502b-config-data\") pod \"cinder-scheduler-0\" (UID: \"2d71b2d3-db78-4e52-af70-e5108d39502b\") " pod="openstack/cinder-scheduler-0" Jan 05 20:29:22 crc kubenswrapper[4754]: I0105 20:29:22.342532 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2d71b2d3-db78-4e52-af70-e5108d39502b-etc-machine-id\") pod 
\"cinder-scheduler-0\" (UID: \"2d71b2d3-db78-4e52-af70-e5108d39502b\") " pod="openstack/cinder-scheduler-0" Jan 05 20:29:22 crc kubenswrapper[4754]: I0105 20:29:22.342631 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rq5f\" (UniqueName: \"kubernetes.io/projected/2d71b2d3-db78-4e52-af70-e5108d39502b-kube-api-access-9rq5f\") pod \"cinder-scheduler-0\" (UID: \"2d71b2d3-db78-4e52-af70-e5108d39502b\") " pod="openstack/cinder-scheduler-0" Jan 05 20:29:22 crc kubenswrapper[4754]: I0105 20:29:22.342797 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d71b2d3-db78-4e52-af70-e5108d39502b-scripts\") pod \"cinder-scheduler-0\" (UID: \"2d71b2d3-db78-4e52-af70-e5108d39502b\") " pod="openstack/cinder-scheduler-0" Jan 05 20:29:22 crc kubenswrapper[4754]: I0105 20:29:22.445578 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d71b2d3-db78-4e52-af70-e5108d39502b-scripts\") pod \"cinder-scheduler-0\" (UID: \"2d71b2d3-db78-4e52-af70-e5108d39502b\") " pod="openstack/cinder-scheduler-0" Jan 05 20:29:22 crc kubenswrapper[4754]: I0105 20:29:22.445724 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d71b2d3-db78-4e52-af70-e5108d39502b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2d71b2d3-db78-4e52-af70-e5108d39502b\") " pod="openstack/cinder-scheduler-0" Jan 05 20:29:22 crc kubenswrapper[4754]: I0105 20:29:22.445777 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d71b2d3-db78-4e52-af70-e5108d39502b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2d71b2d3-db78-4e52-af70-e5108d39502b\") " pod="openstack/cinder-scheduler-0" Jan 05 20:29:22 
crc kubenswrapper[4754]: I0105 20:29:22.445844 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d71b2d3-db78-4e52-af70-e5108d39502b-config-data\") pod \"cinder-scheduler-0\" (UID: \"2d71b2d3-db78-4e52-af70-e5108d39502b\") " pod="openstack/cinder-scheduler-0" Jan 05 20:29:22 crc kubenswrapper[4754]: I0105 20:29:22.445883 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2d71b2d3-db78-4e52-af70-e5108d39502b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2d71b2d3-db78-4e52-af70-e5108d39502b\") " pod="openstack/cinder-scheduler-0" Jan 05 20:29:22 crc kubenswrapper[4754]: I0105 20:29:22.445936 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rq5f\" (UniqueName: \"kubernetes.io/projected/2d71b2d3-db78-4e52-af70-e5108d39502b-kube-api-access-9rq5f\") pod \"cinder-scheduler-0\" (UID: \"2d71b2d3-db78-4e52-af70-e5108d39502b\") " pod="openstack/cinder-scheduler-0" Jan 05 20:29:22 crc kubenswrapper[4754]: I0105 20:29:22.446026 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2d71b2d3-db78-4e52-af70-e5108d39502b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2d71b2d3-db78-4e52-af70-e5108d39502b\") " pod="openstack/cinder-scheduler-0" Jan 05 20:29:22 crc kubenswrapper[4754]: I0105 20:29:22.457606 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d71b2d3-db78-4e52-af70-e5108d39502b-scripts\") pod \"cinder-scheduler-0\" (UID: \"2d71b2d3-db78-4e52-af70-e5108d39502b\") " pod="openstack/cinder-scheduler-0" Jan 05 20:29:22 crc kubenswrapper[4754]: I0105 20:29:22.457624 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2d71b2d3-db78-4e52-af70-e5108d39502b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2d71b2d3-db78-4e52-af70-e5108d39502b\") " pod="openstack/cinder-scheduler-0" Jan 05 20:29:22 crc kubenswrapper[4754]: I0105 20:29:22.459094 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d71b2d3-db78-4e52-af70-e5108d39502b-config-data\") pod \"cinder-scheduler-0\" (UID: \"2d71b2d3-db78-4e52-af70-e5108d39502b\") " pod="openstack/cinder-scheduler-0" Jan 05 20:29:22 crc kubenswrapper[4754]: I0105 20:29:22.459816 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d71b2d3-db78-4e52-af70-e5108d39502b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2d71b2d3-db78-4e52-af70-e5108d39502b\") " pod="openstack/cinder-scheduler-0" Jan 05 20:29:22 crc kubenswrapper[4754]: I0105 20:29:22.469918 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rq5f\" (UniqueName: \"kubernetes.io/projected/2d71b2d3-db78-4e52-af70-e5108d39502b-kube-api-access-9rq5f\") pod \"cinder-scheduler-0\" (UID: \"2d71b2d3-db78-4e52-af70-e5108d39502b\") " pod="openstack/cinder-scheduler-0" Jan 05 20:29:22 crc kubenswrapper[4754]: I0105 20:29:22.553587 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 05 20:29:23 crc kubenswrapper[4754]: I0105 20:29:23.146154 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"91975736-d23e-479b-bd43-68b9b1b3e450","Type":"ContainerStarted","Data":"7849f38206245216db29e20c30abf4419fed3d75dcbc99b13860779fb94c0c90"} Jan 05 20:29:23 crc kubenswrapper[4754]: I0105 20:29:23.253727 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 05 20:29:23 crc kubenswrapper[4754]: I0105 20:29:23.615532 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5623850-560b-42ca-9b3e-2e690fb5e7af" path="/var/lib/kubelet/pods/a5623850-560b-42ca-9b3e-2e690fb5e7af/volumes" Jan 05 20:29:24 crc kubenswrapper[4754]: I0105 20:29:24.002984 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 05 20:29:24 crc kubenswrapper[4754]: I0105 20:29:24.227427 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2d71b2d3-db78-4e52-af70-e5108d39502b","Type":"ContainerStarted","Data":"3a89b83ea5be92ca5e9f43c30abe5d876dae5deb4fefeae5d114122a6d756d84"} Jan 05 20:29:24 crc kubenswrapper[4754]: I0105 20:29:24.227840 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2d71b2d3-db78-4e52-af70-e5108d39502b","Type":"ContainerStarted","Data":"8634d2db630192117de5313ccc5dec4a55110fa2c555ed9e28d3c002799591ef"} Jan 05 20:29:25 crc kubenswrapper[4754]: I0105 20:29:25.238595 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2d71b2d3-db78-4e52-af70-e5108d39502b","Type":"ContainerStarted","Data":"4fb6d7c09b957f50b3f3be97e9528110ba4ad50c556b82776b2beab305a3c73d"} Jan 05 20:29:25 crc kubenswrapper[4754]: I0105 20:29:25.270407 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/cinder-scheduler-0" podStartSLOduration=3.270389647 podStartE2EDuration="3.270389647s" podCreationTimestamp="2026-01-05 20:29:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:29:25.259842901 +0000 UTC m=+1451.969026775" watchObservedRunningTime="2026-01-05 20:29:25.270389647 +0000 UTC m=+1451.979573521" Jan 05 20:29:27 crc kubenswrapper[4754]: I0105 20:29:27.553932 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 05 20:29:27 crc kubenswrapper[4754]: I0105 20:29:27.615444 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6ff666bbf9-t252x"] Jan 05 20:29:27 crc kubenswrapper[4754]: I0105 20:29:27.647829 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6ff666bbf9-t252x"] Jan 05 20:29:27 crc kubenswrapper[4754]: I0105 20:29:27.647951 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-6ff666bbf9-t252x" Jan 05 20:29:27 crc kubenswrapper[4754]: I0105 20:29:27.657974 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 05 20:29:27 crc kubenswrapper[4754]: I0105 20:29:27.658310 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 05 20:29:27 crc kubenswrapper[4754]: I0105 20:29:27.658453 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 05 20:29:27 crc kubenswrapper[4754]: I0105 20:29:27.803537 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ce48436-7086-4501-9b9d-952b965fb028-public-tls-certs\") pod \"swift-proxy-6ff666bbf9-t252x\" (UID: \"2ce48436-7086-4501-9b9d-952b965fb028\") " pod="openstack/swift-proxy-6ff666bbf9-t252x" Jan 05 20:29:27 crc kubenswrapper[4754]: I0105 20:29:27.803629 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ce48436-7086-4501-9b9d-952b965fb028-run-httpd\") pod \"swift-proxy-6ff666bbf9-t252x\" (UID: \"2ce48436-7086-4501-9b9d-952b965fb028\") " pod="openstack/swift-proxy-6ff666bbf9-t252x" Jan 05 20:29:27 crc kubenswrapper[4754]: I0105 20:29:27.803663 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ce48436-7086-4501-9b9d-952b965fb028-log-httpd\") pod \"swift-proxy-6ff666bbf9-t252x\" (UID: \"2ce48436-7086-4501-9b9d-952b965fb028\") " pod="openstack/swift-proxy-6ff666bbf9-t252x" Jan 05 20:29:27 crc kubenswrapper[4754]: I0105 20:29:27.803712 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/2ce48436-7086-4501-9b9d-952b965fb028-etc-swift\") pod \"swift-proxy-6ff666bbf9-t252x\" (UID: \"2ce48436-7086-4501-9b9d-952b965fb028\") " pod="openstack/swift-proxy-6ff666bbf9-t252x" Jan 05 20:29:27 crc kubenswrapper[4754]: I0105 20:29:27.803785 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkjjf\" (UniqueName: \"kubernetes.io/projected/2ce48436-7086-4501-9b9d-952b965fb028-kube-api-access-gkjjf\") pod \"swift-proxy-6ff666bbf9-t252x\" (UID: \"2ce48436-7086-4501-9b9d-952b965fb028\") " pod="openstack/swift-proxy-6ff666bbf9-t252x" Jan 05 20:29:27 crc kubenswrapper[4754]: I0105 20:29:27.803821 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ce48436-7086-4501-9b9d-952b965fb028-internal-tls-certs\") pod \"swift-proxy-6ff666bbf9-t252x\" (UID: \"2ce48436-7086-4501-9b9d-952b965fb028\") " pod="openstack/swift-proxy-6ff666bbf9-t252x" Jan 05 20:29:27 crc kubenswrapper[4754]: I0105 20:29:27.803838 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ce48436-7086-4501-9b9d-952b965fb028-config-data\") pod \"swift-proxy-6ff666bbf9-t252x\" (UID: \"2ce48436-7086-4501-9b9d-952b965fb028\") " pod="openstack/swift-proxy-6ff666bbf9-t252x" Jan 05 20:29:27 crc kubenswrapper[4754]: I0105 20:29:27.803873 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ce48436-7086-4501-9b9d-952b965fb028-combined-ca-bundle\") pod \"swift-proxy-6ff666bbf9-t252x\" (UID: \"2ce48436-7086-4501-9b9d-952b965fb028\") " pod="openstack/swift-proxy-6ff666bbf9-t252x" Jan 05 20:29:27 crc kubenswrapper[4754]: I0105 20:29:27.905230 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-swift\" (UniqueName: \"kubernetes.io/projected/2ce48436-7086-4501-9b9d-952b965fb028-etc-swift\") pod \"swift-proxy-6ff666bbf9-t252x\" (UID: \"2ce48436-7086-4501-9b9d-952b965fb028\") " pod="openstack/swift-proxy-6ff666bbf9-t252x" Jan 05 20:29:27 crc kubenswrapper[4754]: I0105 20:29:27.905368 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkjjf\" (UniqueName: \"kubernetes.io/projected/2ce48436-7086-4501-9b9d-952b965fb028-kube-api-access-gkjjf\") pod \"swift-proxy-6ff666bbf9-t252x\" (UID: \"2ce48436-7086-4501-9b9d-952b965fb028\") " pod="openstack/swift-proxy-6ff666bbf9-t252x" Jan 05 20:29:27 crc kubenswrapper[4754]: I0105 20:29:27.905411 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ce48436-7086-4501-9b9d-952b965fb028-internal-tls-certs\") pod \"swift-proxy-6ff666bbf9-t252x\" (UID: \"2ce48436-7086-4501-9b9d-952b965fb028\") " pod="openstack/swift-proxy-6ff666bbf9-t252x" Jan 05 20:29:27 crc kubenswrapper[4754]: I0105 20:29:27.905431 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ce48436-7086-4501-9b9d-952b965fb028-config-data\") pod \"swift-proxy-6ff666bbf9-t252x\" (UID: \"2ce48436-7086-4501-9b9d-952b965fb028\") " pod="openstack/swift-proxy-6ff666bbf9-t252x" Jan 05 20:29:27 crc kubenswrapper[4754]: I0105 20:29:27.905667 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ce48436-7086-4501-9b9d-952b965fb028-combined-ca-bundle\") pod \"swift-proxy-6ff666bbf9-t252x\" (UID: \"2ce48436-7086-4501-9b9d-952b965fb028\") " pod="openstack/swift-proxy-6ff666bbf9-t252x" Jan 05 20:29:27 crc kubenswrapper[4754]: I0105 20:29:27.906355 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2ce48436-7086-4501-9b9d-952b965fb028-public-tls-certs\") pod \"swift-proxy-6ff666bbf9-t252x\" (UID: \"2ce48436-7086-4501-9b9d-952b965fb028\") " pod="openstack/swift-proxy-6ff666bbf9-t252x" Jan 05 20:29:27 crc kubenswrapper[4754]: I0105 20:29:27.906398 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ce48436-7086-4501-9b9d-952b965fb028-run-httpd\") pod \"swift-proxy-6ff666bbf9-t252x\" (UID: \"2ce48436-7086-4501-9b9d-952b965fb028\") " pod="openstack/swift-proxy-6ff666bbf9-t252x" Jan 05 20:29:27 crc kubenswrapper[4754]: I0105 20:29:27.906428 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ce48436-7086-4501-9b9d-952b965fb028-log-httpd\") pod \"swift-proxy-6ff666bbf9-t252x\" (UID: \"2ce48436-7086-4501-9b9d-952b965fb028\") " pod="openstack/swift-proxy-6ff666bbf9-t252x" Jan 05 20:29:27 crc kubenswrapper[4754]: I0105 20:29:27.906926 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ce48436-7086-4501-9b9d-952b965fb028-log-httpd\") pod \"swift-proxy-6ff666bbf9-t252x\" (UID: \"2ce48436-7086-4501-9b9d-952b965fb028\") " pod="openstack/swift-proxy-6ff666bbf9-t252x" Jan 05 20:29:27 crc kubenswrapper[4754]: I0105 20:29:27.906923 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ce48436-7086-4501-9b9d-952b965fb028-run-httpd\") pod \"swift-proxy-6ff666bbf9-t252x\" (UID: \"2ce48436-7086-4501-9b9d-952b965fb028\") " pod="openstack/swift-proxy-6ff666bbf9-t252x" Jan 05 20:29:27 crc kubenswrapper[4754]: I0105 20:29:27.920189 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ce48436-7086-4501-9b9d-952b965fb028-combined-ca-bundle\") pod 
\"swift-proxy-6ff666bbf9-t252x\" (UID: \"2ce48436-7086-4501-9b9d-952b965fb028\") " pod="openstack/swift-proxy-6ff666bbf9-t252x" Jan 05 20:29:27 crc kubenswrapper[4754]: I0105 20:29:27.921238 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ce48436-7086-4501-9b9d-952b965fb028-public-tls-certs\") pod \"swift-proxy-6ff666bbf9-t252x\" (UID: \"2ce48436-7086-4501-9b9d-952b965fb028\") " pod="openstack/swift-proxy-6ff666bbf9-t252x" Jan 05 20:29:27 crc kubenswrapper[4754]: I0105 20:29:27.921410 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2ce48436-7086-4501-9b9d-952b965fb028-etc-swift\") pod \"swift-proxy-6ff666bbf9-t252x\" (UID: \"2ce48436-7086-4501-9b9d-952b965fb028\") " pod="openstack/swift-proxy-6ff666bbf9-t252x" Jan 05 20:29:27 crc kubenswrapper[4754]: I0105 20:29:27.924557 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ce48436-7086-4501-9b9d-952b965fb028-internal-tls-certs\") pod \"swift-proxy-6ff666bbf9-t252x\" (UID: \"2ce48436-7086-4501-9b9d-952b965fb028\") " pod="openstack/swift-proxy-6ff666bbf9-t252x" Jan 05 20:29:27 crc kubenswrapper[4754]: I0105 20:29:27.930271 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkjjf\" (UniqueName: \"kubernetes.io/projected/2ce48436-7086-4501-9b9d-952b965fb028-kube-api-access-gkjjf\") pod \"swift-proxy-6ff666bbf9-t252x\" (UID: \"2ce48436-7086-4501-9b9d-952b965fb028\") " pod="openstack/swift-proxy-6ff666bbf9-t252x" Jan 05 20:29:27 crc kubenswrapper[4754]: I0105 20:29:27.930445 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ce48436-7086-4501-9b9d-952b965fb028-config-data\") pod \"swift-proxy-6ff666bbf9-t252x\" (UID: \"2ce48436-7086-4501-9b9d-952b965fb028\") " 
pod="openstack/swift-proxy-6ff666bbf9-t252x" Jan 05 20:29:27 crc kubenswrapper[4754]: I0105 20:29:27.998926 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6ff666bbf9-t252x" Jan 05 20:29:28 crc kubenswrapper[4754]: W0105 20:29:28.614808 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ce48436_7086_4501_9b9d_952b965fb028.slice/crio-361e599f5098f84b5c98aa5986c54f819a5d327452ffc68ad921952101961454 WatchSource:0}: Error finding container 361e599f5098f84b5c98aa5986c54f819a5d327452ffc68ad921952101961454: Status 404 returned error can't find the container with id 361e599f5098f84b5c98aa5986c54f819a5d327452ffc68ad921952101961454 Jan 05 20:29:28 crc kubenswrapper[4754]: I0105 20:29:28.623969 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6ff666bbf9-t252x"] Jan 05 20:29:28 crc kubenswrapper[4754]: I0105 20:29:28.849428 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-cfdcfcf78-fd4fw"] Jan 05 20:29:28 crc kubenswrapper[4754]: I0105 20:29:28.860642 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-cfdcfcf78-fd4fw" Jan 05 20:29:28 crc kubenswrapper[4754]: I0105 20:29:28.864879 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 05 20:29:28 crc kubenswrapper[4754]: I0105 20:29:28.871630 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-p7xv4"] Jan 05 20:29:28 crc kubenswrapper[4754]: I0105 20:29:28.873399 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d978555f9-p7xv4" Jan 05 20:29:28 crc kubenswrapper[4754]: I0105 20:29:28.876392 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-58g9m" Jan 05 20:29:28 crc kubenswrapper[4754]: I0105 20:29:28.876630 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Jan 05 20:29:28 crc kubenswrapper[4754]: I0105 20:29:28.888633 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-cfdcfcf78-fd4fw"] Jan 05 20:29:28 crc kubenswrapper[4754]: I0105 20:29:28.899081 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-p7xv4"] Jan 05 20:29:28 crc kubenswrapper[4754]: I0105 20:29:28.939745 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42a612ab-0883-4f35-b15e-f937f1f2de36-config\") pod \"dnsmasq-dns-7d978555f9-p7xv4\" (UID: \"42a612ab-0883-4f35-b15e-f937f1f2de36\") " pod="openstack/dnsmasq-dns-7d978555f9-p7xv4" Jan 05 20:29:28 crc kubenswrapper[4754]: I0105 20:29:28.939823 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/081c0364-d64a-4977-b1ac-0fcb2bdc5bab-config-data\") pod \"heat-engine-cfdcfcf78-fd4fw\" (UID: \"081c0364-d64a-4977-b1ac-0fcb2bdc5bab\") " pod="openstack/heat-engine-cfdcfcf78-fd4fw" Jan 05 20:29:28 crc kubenswrapper[4754]: I0105 20:29:28.939898 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42a612ab-0883-4f35-b15e-f937f1f2de36-dns-svc\") pod \"dnsmasq-dns-7d978555f9-p7xv4\" (UID: \"42a612ab-0883-4f35-b15e-f937f1f2de36\") " pod="openstack/dnsmasq-dns-7d978555f9-p7xv4" Jan 05 20:29:28 crc kubenswrapper[4754]: I0105 20:29:28.939924 4754 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/42a612ab-0883-4f35-b15e-f937f1f2de36-dns-swift-storage-0\") pod \"dnsmasq-dns-7d978555f9-p7xv4\" (UID: \"42a612ab-0883-4f35-b15e-f937f1f2de36\") " pod="openstack/dnsmasq-dns-7d978555f9-p7xv4" Jan 05 20:29:28 crc kubenswrapper[4754]: I0105 20:29:28.940009 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/081c0364-d64a-4977-b1ac-0fcb2bdc5bab-config-data-custom\") pod \"heat-engine-cfdcfcf78-fd4fw\" (UID: \"081c0364-d64a-4977-b1ac-0fcb2bdc5bab\") " pod="openstack/heat-engine-cfdcfcf78-fd4fw" Jan 05 20:29:28 crc kubenswrapper[4754]: I0105 20:29:28.940060 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42a612ab-0883-4f35-b15e-f937f1f2de36-ovsdbserver-nb\") pod \"dnsmasq-dns-7d978555f9-p7xv4\" (UID: \"42a612ab-0883-4f35-b15e-f937f1f2de36\") " pod="openstack/dnsmasq-dns-7d978555f9-p7xv4" Jan 05 20:29:28 crc kubenswrapper[4754]: I0105 20:29:28.940113 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knm88\" (UniqueName: \"kubernetes.io/projected/081c0364-d64a-4977-b1ac-0fcb2bdc5bab-kube-api-access-knm88\") pod \"heat-engine-cfdcfcf78-fd4fw\" (UID: \"081c0364-d64a-4977-b1ac-0fcb2bdc5bab\") " pod="openstack/heat-engine-cfdcfcf78-fd4fw" Jan 05 20:29:28 crc kubenswrapper[4754]: I0105 20:29:28.940139 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqqsm\" (UniqueName: \"kubernetes.io/projected/42a612ab-0883-4f35-b15e-f937f1f2de36-kube-api-access-zqqsm\") pod \"dnsmasq-dns-7d978555f9-p7xv4\" (UID: \"42a612ab-0883-4f35-b15e-f937f1f2de36\") " 
pod="openstack/dnsmasq-dns-7d978555f9-p7xv4" Jan 05 20:29:28 crc kubenswrapper[4754]: I0105 20:29:28.940197 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/081c0364-d64a-4977-b1ac-0fcb2bdc5bab-combined-ca-bundle\") pod \"heat-engine-cfdcfcf78-fd4fw\" (UID: \"081c0364-d64a-4977-b1ac-0fcb2bdc5bab\") " pod="openstack/heat-engine-cfdcfcf78-fd4fw" Jan 05 20:29:28 crc kubenswrapper[4754]: I0105 20:29:28.940273 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42a612ab-0883-4f35-b15e-f937f1f2de36-ovsdbserver-sb\") pod \"dnsmasq-dns-7d978555f9-p7xv4\" (UID: \"42a612ab-0883-4f35-b15e-f937f1f2de36\") " pod="openstack/dnsmasq-dns-7d978555f9-p7xv4" Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.011340 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-5bc975666b-pjcfg"] Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.013619 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5bc975666b-pjcfg" Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.017502 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.042974 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42a612ab-0883-4f35-b15e-f937f1f2de36-ovsdbserver-sb\") pod \"dnsmasq-dns-7d978555f9-p7xv4\" (UID: \"42a612ab-0883-4f35-b15e-f937f1f2de36\") " pod="openstack/dnsmasq-dns-7d978555f9-p7xv4" Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.043019 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42a612ab-0883-4f35-b15e-f937f1f2de36-config\") pod \"dnsmasq-dns-7d978555f9-p7xv4\" (UID: \"42a612ab-0883-4f35-b15e-f937f1f2de36\") " pod="openstack/dnsmasq-dns-7d978555f9-p7xv4" Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.043043 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/081c0364-d64a-4977-b1ac-0fcb2bdc5bab-config-data\") pod \"heat-engine-cfdcfcf78-fd4fw\" (UID: \"081c0364-d64a-4977-b1ac-0fcb2bdc5bab\") " pod="openstack/heat-engine-cfdcfcf78-fd4fw" Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.043086 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42a612ab-0883-4f35-b15e-f937f1f2de36-dns-svc\") pod \"dnsmasq-dns-7d978555f9-p7xv4\" (UID: \"42a612ab-0883-4f35-b15e-f937f1f2de36\") " pod="openstack/dnsmasq-dns-7d978555f9-p7xv4" Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.043111 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/42a612ab-0883-4f35-b15e-f937f1f2de36-dns-swift-storage-0\") pod \"dnsmasq-dns-7d978555f9-p7xv4\" (UID: \"42a612ab-0883-4f35-b15e-f937f1f2de36\") " pod="openstack/dnsmasq-dns-7d978555f9-p7xv4" Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.043159 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/081c0364-d64a-4977-b1ac-0fcb2bdc5bab-config-data-custom\") pod \"heat-engine-cfdcfcf78-fd4fw\" (UID: \"081c0364-d64a-4977-b1ac-0fcb2bdc5bab\") " pod="openstack/heat-engine-cfdcfcf78-fd4fw" Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.043185 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42a612ab-0883-4f35-b15e-f937f1f2de36-ovsdbserver-nb\") pod \"dnsmasq-dns-7d978555f9-p7xv4\" (UID: \"42a612ab-0883-4f35-b15e-f937f1f2de36\") " pod="openstack/dnsmasq-dns-7d978555f9-p7xv4" Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.043217 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knm88\" (UniqueName: \"kubernetes.io/projected/081c0364-d64a-4977-b1ac-0fcb2bdc5bab-kube-api-access-knm88\") pod \"heat-engine-cfdcfcf78-fd4fw\" (UID: \"081c0364-d64a-4977-b1ac-0fcb2bdc5bab\") " pod="openstack/heat-engine-cfdcfcf78-fd4fw" Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.043242 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqqsm\" (UniqueName: \"kubernetes.io/projected/42a612ab-0883-4f35-b15e-f937f1f2de36-kube-api-access-zqqsm\") pod \"dnsmasq-dns-7d978555f9-p7xv4\" (UID: \"42a612ab-0883-4f35-b15e-f937f1f2de36\") " pod="openstack/dnsmasq-dns-7d978555f9-p7xv4" Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.043301 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/081c0364-d64a-4977-b1ac-0fcb2bdc5bab-combined-ca-bundle\") pod \"heat-engine-cfdcfcf78-fd4fw\" (UID: \"081c0364-d64a-4977-b1ac-0fcb2bdc5bab\") " pod="openstack/heat-engine-cfdcfcf78-fd4fw" Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.050591 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42a612ab-0883-4f35-b15e-f937f1f2de36-dns-svc\") pod \"dnsmasq-dns-7d978555f9-p7xv4\" (UID: \"42a612ab-0883-4f35-b15e-f937f1f2de36\") " pod="openstack/dnsmasq-dns-7d978555f9-p7xv4" Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.051981 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/42a612ab-0883-4f35-b15e-f937f1f2de36-dns-swift-storage-0\") pod \"dnsmasq-dns-7d978555f9-p7xv4\" (UID: \"42a612ab-0883-4f35-b15e-f937f1f2de36\") " pod="openstack/dnsmasq-dns-7d978555f9-p7xv4" Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.054206 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5bc975666b-pjcfg"] Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.061476 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/081c0364-d64a-4977-b1ac-0fcb2bdc5bab-config-data-custom\") pod \"heat-engine-cfdcfcf78-fd4fw\" (UID: \"081c0364-d64a-4977-b1ac-0fcb2bdc5bab\") " pod="openstack/heat-engine-cfdcfcf78-fd4fw" Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.061530 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42a612ab-0883-4f35-b15e-f937f1f2de36-ovsdbserver-nb\") pod \"dnsmasq-dns-7d978555f9-p7xv4\" (UID: \"42a612ab-0883-4f35-b15e-f937f1f2de36\") " pod="openstack/dnsmasq-dns-7d978555f9-p7xv4" Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.062053 4754 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42a612ab-0883-4f35-b15e-f937f1f2de36-config\") pod \"dnsmasq-dns-7d978555f9-p7xv4\" (UID: \"42a612ab-0883-4f35-b15e-f937f1f2de36\") " pod="openstack/dnsmasq-dns-7d978555f9-p7xv4" Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.062596 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42a612ab-0883-4f35-b15e-f937f1f2de36-ovsdbserver-sb\") pod \"dnsmasq-dns-7d978555f9-p7xv4\" (UID: \"42a612ab-0883-4f35-b15e-f937f1f2de36\") " pod="openstack/dnsmasq-dns-7d978555f9-p7xv4" Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.065861 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/081c0364-d64a-4977-b1ac-0fcb2bdc5bab-combined-ca-bundle\") pod \"heat-engine-cfdcfcf78-fd4fw\" (UID: \"081c0364-d64a-4977-b1ac-0fcb2bdc5bab\") " pod="openstack/heat-engine-cfdcfcf78-fd4fw" Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.071101 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/081c0364-d64a-4977-b1ac-0fcb2bdc5bab-config-data\") pod \"heat-engine-cfdcfcf78-fd4fw\" (UID: \"081c0364-d64a-4977-b1ac-0fcb2bdc5bab\") " pod="openstack/heat-engine-cfdcfcf78-fd4fw" Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.079278 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-5f447fcb5f-cgsxz"] Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.083724 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-5f447fcb5f-cgsxz" Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.086923 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.100078 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5f447fcb5f-cgsxz"] Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.114729 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqqsm\" (UniqueName: \"kubernetes.io/projected/42a612ab-0883-4f35-b15e-f937f1f2de36-kube-api-access-zqqsm\") pod \"dnsmasq-dns-7d978555f9-p7xv4\" (UID: \"42a612ab-0883-4f35-b15e-f937f1f2de36\") " pod="openstack/dnsmasq-dns-7d978555f9-p7xv4" Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.129783 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knm88\" (UniqueName: \"kubernetes.io/projected/081c0364-d64a-4977-b1ac-0fcb2bdc5bab-kube-api-access-knm88\") pod \"heat-engine-cfdcfcf78-fd4fw\" (UID: \"081c0364-d64a-4977-b1ac-0fcb2bdc5bab\") " pod="openstack/heat-engine-cfdcfcf78-fd4fw" Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.151085 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb5f31f4-24b0-4840-8a8e-04ee35cea4ed-config-data\") pod \"heat-api-5bc975666b-pjcfg\" (UID: \"fb5f31f4-24b0-4840-8a8e-04ee35cea4ed\") " pod="openstack/heat-api-5bc975666b-pjcfg" Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.151196 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb5f31f4-24b0-4840-8a8e-04ee35cea4ed-combined-ca-bundle\") pod \"heat-api-5bc975666b-pjcfg\" (UID: \"fb5f31f4-24b0-4840-8a8e-04ee35cea4ed\") " pod="openstack/heat-api-5bc975666b-pjcfg" Jan 05 
20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.151319 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmwsc\" (UniqueName: \"kubernetes.io/projected/fb5f31f4-24b0-4840-8a8e-04ee35cea4ed-kube-api-access-jmwsc\") pod \"heat-api-5bc975666b-pjcfg\" (UID: \"fb5f31f4-24b0-4840-8a8e-04ee35cea4ed\") " pod="openstack/heat-api-5bc975666b-pjcfg" Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.151395 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb5f31f4-24b0-4840-8a8e-04ee35cea4ed-config-data-custom\") pod \"heat-api-5bc975666b-pjcfg\" (UID: \"fb5f31f4-24b0-4840-8a8e-04ee35cea4ed\") " pod="openstack/heat-api-5bc975666b-pjcfg" Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.212529 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-cfdcfcf78-fd4fw" Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.253531 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb5f31f4-24b0-4840-8a8e-04ee35cea4ed-config-data\") pod \"heat-api-5bc975666b-pjcfg\" (UID: \"fb5f31f4-24b0-4840-8a8e-04ee35cea4ed\") " pod="openstack/heat-api-5bc975666b-pjcfg" Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.253616 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgn4h\" (UniqueName: \"kubernetes.io/projected/215c4148-c55a-49d8-8b2e-301cc8912519-kube-api-access-rgn4h\") pod \"heat-cfnapi-5f447fcb5f-cgsxz\" (UID: \"215c4148-c55a-49d8-8b2e-301cc8912519\") " pod="openstack/heat-cfnapi-5f447fcb5f-cgsxz" Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.253661 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fb5f31f4-24b0-4840-8a8e-04ee35cea4ed-combined-ca-bundle\") pod \"heat-api-5bc975666b-pjcfg\" (UID: \"fb5f31f4-24b0-4840-8a8e-04ee35cea4ed\") " pod="openstack/heat-api-5bc975666b-pjcfg" Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.253692 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/215c4148-c55a-49d8-8b2e-301cc8912519-config-data-custom\") pod \"heat-cfnapi-5f447fcb5f-cgsxz\" (UID: \"215c4148-c55a-49d8-8b2e-301cc8912519\") " pod="openstack/heat-cfnapi-5f447fcb5f-cgsxz" Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.253710 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/215c4148-c55a-49d8-8b2e-301cc8912519-config-data\") pod \"heat-cfnapi-5f447fcb5f-cgsxz\" (UID: \"215c4148-c55a-49d8-8b2e-301cc8912519\") " pod="openstack/heat-cfnapi-5f447fcb5f-cgsxz" Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.256310 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmwsc\" (UniqueName: \"kubernetes.io/projected/fb5f31f4-24b0-4840-8a8e-04ee35cea4ed-kube-api-access-jmwsc\") pod \"heat-api-5bc975666b-pjcfg\" (UID: \"fb5f31f4-24b0-4840-8a8e-04ee35cea4ed\") " pod="openstack/heat-api-5bc975666b-pjcfg" Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.256373 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/215c4148-c55a-49d8-8b2e-301cc8912519-combined-ca-bundle\") pod \"heat-cfnapi-5f447fcb5f-cgsxz\" (UID: \"215c4148-c55a-49d8-8b2e-301cc8912519\") " pod="openstack/heat-cfnapi-5f447fcb5f-cgsxz" Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.256540 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/fb5f31f4-24b0-4840-8a8e-04ee35cea4ed-config-data-custom\") pod \"heat-api-5bc975666b-pjcfg\" (UID: \"fb5f31f4-24b0-4840-8a8e-04ee35cea4ed\") " pod="openstack/heat-api-5bc975666b-pjcfg" Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.261654 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb5f31f4-24b0-4840-8a8e-04ee35cea4ed-combined-ca-bundle\") pod \"heat-api-5bc975666b-pjcfg\" (UID: \"fb5f31f4-24b0-4840-8a8e-04ee35cea4ed\") " pod="openstack/heat-api-5bc975666b-pjcfg" Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.262736 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb5f31f4-24b0-4840-8a8e-04ee35cea4ed-config-data\") pod \"heat-api-5bc975666b-pjcfg\" (UID: \"fb5f31f4-24b0-4840-8a8e-04ee35cea4ed\") " pod="openstack/heat-api-5bc975666b-pjcfg" Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.269557 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb5f31f4-24b0-4840-8a8e-04ee35cea4ed-config-data-custom\") pod \"heat-api-5bc975666b-pjcfg\" (UID: \"fb5f31f4-24b0-4840-8a8e-04ee35cea4ed\") " pod="openstack/heat-api-5bc975666b-pjcfg" Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.306513 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmwsc\" (UniqueName: \"kubernetes.io/projected/fb5f31f4-24b0-4840-8a8e-04ee35cea4ed-kube-api-access-jmwsc\") pod \"heat-api-5bc975666b-pjcfg\" (UID: \"fb5f31f4-24b0-4840-8a8e-04ee35cea4ed\") " pod="openstack/heat-api-5bc975666b-pjcfg" Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.324345 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d978555f9-p7xv4" Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.369095 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/215c4148-c55a-49d8-8b2e-301cc8912519-config-data-custom\") pod \"heat-cfnapi-5f447fcb5f-cgsxz\" (UID: \"215c4148-c55a-49d8-8b2e-301cc8912519\") " pod="openstack/heat-cfnapi-5f447fcb5f-cgsxz" Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.374427 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/215c4148-c55a-49d8-8b2e-301cc8912519-config-data\") pod \"heat-cfnapi-5f447fcb5f-cgsxz\" (UID: \"215c4148-c55a-49d8-8b2e-301cc8912519\") " pod="openstack/heat-cfnapi-5f447fcb5f-cgsxz" Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.374549 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/215c4148-c55a-49d8-8b2e-301cc8912519-combined-ca-bundle\") pod \"heat-cfnapi-5f447fcb5f-cgsxz\" (UID: \"215c4148-c55a-49d8-8b2e-301cc8912519\") " pod="openstack/heat-cfnapi-5f447fcb5f-cgsxz" Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.374765 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgn4h\" (UniqueName: \"kubernetes.io/projected/215c4148-c55a-49d8-8b2e-301cc8912519-kube-api-access-rgn4h\") pod \"heat-cfnapi-5f447fcb5f-cgsxz\" (UID: \"215c4148-c55a-49d8-8b2e-301cc8912519\") " pod="openstack/heat-cfnapi-5f447fcb5f-cgsxz" Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.390976 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/215c4148-c55a-49d8-8b2e-301cc8912519-config-data-custom\") pod \"heat-cfnapi-5f447fcb5f-cgsxz\" (UID: \"215c4148-c55a-49d8-8b2e-301cc8912519\") " 
pod="openstack/heat-cfnapi-5f447fcb5f-cgsxz" Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.392050 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/215c4148-c55a-49d8-8b2e-301cc8912519-config-data\") pod \"heat-cfnapi-5f447fcb5f-cgsxz\" (UID: \"215c4148-c55a-49d8-8b2e-301cc8912519\") " pod="openstack/heat-cfnapi-5f447fcb5f-cgsxz" Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.392807 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/215c4148-c55a-49d8-8b2e-301cc8912519-combined-ca-bundle\") pod \"heat-cfnapi-5f447fcb5f-cgsxz\" (UID: \"215c4148-c55a-49d8-8b2e-301cc8912519\") " pod="openstack/heat-cfnapi-5f447fcb5f-cgsxz" Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.404068 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgn4h\" (UniqueName: \"kubernetes.io/projected/215c4148-c55a-49d8-8b2e-301cc8912519-kube-api-access-rgn4h\") pod \"heat-cfnapi-5f447fcb5f-cgsxz\" (UID: \"215c4148-c55a-49d8-8b2e-301cc8912519\") " pod="openstack/heat-cfnapi-5f447fcb5f-cgsxz" Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.418093 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5bc975666b-pjcfg" Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.428471 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6ff666bbf9-t252x" event={"ID":"2ce48436-7086-4501-9b9d-952b965fb028","Type":"ContainerStarted","Data":"361e599f5098f84b5c98aa5986c54f819a5d327452ffc68ad921952101961454"} Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.442733 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-5f447fcb5f-cgsxz" Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.959242 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.959803 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="84bba751-7ae7-4f46-9673-fb30b2ba2496" containerName="ceilometer-central-agent" containerID="cri-o://91af205e7b3afcd2a6ee643bed05e8d5be3784292b57eebfc57b9aec2ecd7270" gracePeriod=30 Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.960729 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="84bba751-7ae7-4f46-9673-fb30b2ba2496" containerName="proxy-httpd" containerID="cri-o://687fafa7cd79de691a485d8eaa980dd12b8f1c241eb506c55ef2fa560f073c42" gracePeriod=30 Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.960785 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="84bba751-7ae7-4f46-9673-fb30b2ba2496" containerName="sg-core" containerID="cri-o://018398e57dd76b95b1a3f87039759cf34ff5365027844eb9826ca3c4a196548a" gracePeriod=30 Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.960820 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="84bba751-7ae7-4f46-9673-fb30b2ba2496" containerName="ceilometer-notification-agent" containerID="cri-o://544f40ea5f9ad7b659e8e81028db30af1084c6548cef6985dd30eba18d50bb78" gracePeriod=30 Jan 05 20:29:29 crc kubenswrapper[4754]: I0105 20:29:29.980123 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-cfdcfcf78-fd4fw"] Jan 05 20:29:30 crc kubenswrapper[4754]: I0105 20:29:30.074476 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="84bba751-7ae7-4f46-9673-fb30b2ba2496" 
containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.206:3000/\": read tcp 10.217.0.2:59092->10.217.0.206:3000: read: connection reset by peer" Jan 05 20:29:30 crc kubenswrapper[4754]: I0105 20:29:30.178125 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-p7xv4"] Jan 05 20:29:30 crc kubenswrapper[4754]: I0105 20:29:30.194781 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5f447fcb5f-cgsxz"] Jan 05 20:29:30 crc kubenswrapper[4754]: I0105 20:29:30.393845 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5bc975666b-pjcfg"] Jan 05 20:29:30 crc kubenswrapper[4754]: I0105 20:29:30.447859 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6ff666bbf9-t252x" event={"ID":"2ce48436-7086-4501-9b9d-952b965fb028","Type":"ContainerStarted","Data":"1548181f0b7d7cc78b8c42e64da8300637c500f412488617761ea341ba962ce7"} Jan 05 20:29:30 crc kubenswrapper[4754]: I0105 20:29:30.447908 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6ff666bbf9-t252x" event={"ID":"2ce48436-7086-4501-9b9d-952b965fb028","Type":"ContainerStarted","Data":"77bfca396ce1a34dd8f4a6fd84678a47cfa52ec3470ecf126634d9703177415b"} Jan 05 20:29:30 crc kubenswrapper[4754]: I0105 20:29:30.448155 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6ff666bbf9-t252x" Jan 05 20:29:30 crc kubenswrapper[4754]: I0105 20:29:30.448238 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6ff666bbf9-t252x" Jan 05 20:29:30 crc kubenswrapper[4754]: I0105 20:29:30.456525 4754 generic.go:334] "Generic (PLEG): container finished" podID="84bba751-7ae7-4f46-9673-fb30b2ba2496" containerID="687fafa7cd79de691a485d8eaa980dd12b8f1c241eb506c55ef2fa560f073c42" exitCode=0 Jan 05 20:29:30 crc kubenswrapper[4754]: I0105 20:29:30.456674 4754 generic.go:334] "Generic 
(PLEG): container finished" podID="84bba751-7ae7-4f46-9673-fb30b2ba2496" containerID="018398e57dd76b95b1a3f87039759cf34ff5365027844eb9826ca3c4a196548a" exitCode=2 Jan 05 20:29:30 crc kubenswrapper[4754]: I0105 20:29:30.456708 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"84bba751-7ae7-4f46-9673-fb30b2ba2496","Type":"ContainerDied","Data":"687fafa7cd79de691a485d8eaa980dd12b8f1c241eb506c55ef2fa560f073c42"} Jan 05 20:29:30 crc kubenswrapper[4754]: I0105 20:29:30.456757 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"84bba751-7ae7-4f46-9673-fb30b2ba2496","Type":"ContainerDied","Data":"018398e57dd76b95b1a3f87039759cf34ff5365027844eb9826ca3c4a196548a"} Jan 05 20:29:30 crc kubenswrapper[4754]: I0105 20:29:30.480279 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6ff666bbf9-t252x" podStartSLOduration=3.480259933 podStartE2EDuration="3.480259933s" podCreationTimestamp="2026-01-05 20:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:29:30.475887728 +0000 UTC m=+1457.185071602" watchObservedRunningTime="2026-01-05 20:29:30.480259933 +0000 UTC m=+1457.189443797" Jan 05 20:29:31 crc kubenswrapper[4754]: I0105 20:29:31.481043 4754 generic.go:334] "Generic (PLEG): container finished" podID="84bba751-7ae7-4f46-9673-fb30b2ba2496" containerID="91af205e7b3afcd2a6ee643bed05e8d5be3784292b57eebfc57b9aec2ecd7270" exitCode=0 Jan 05 20:29:31 crc kubenswrapper[4754]: I0105 20:29:31.481723 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"84bba751-7ae7-4f46-9673-fb30b2ba2496","Type":"ContainerDied","Data":"91af205e7b3afcd2a6ee643bed05e8d5be3784292b57eebfc57b9aec2ecd7270"} Jan 05 20:29:32 crc kubenswrapper[4754]: I0105 20:29:32.500841 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/glance-default-external-api-0"] Jan 05 20:29:32 crc kubenswrapper[4754]: I0105 20:29:32.500996 4754 generic.go:334] "Generic (PLEG): container finished" podID="84bba751-7ae7-4f46-9673-fb30b2ba2496" containerID="544f40ea5f9ad7b659e8e81028db30af1084c6548cef6985dd30eba18d50bb78" exitCode=0 Jan 05 20:29:32 crc kubenswrapper[4754]: I0105 20:29:32.501127 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"84bba751-7ae7-4f46-9673-fb30b2ba2496","Type":"ContainerDied","Data":"544f40ea5f9ad7b659e8e81028db30af1084c6548cef6985dd30eba18d50bb78"} Jan 05 20:29:32 crc kubenswrapper[4754]: I0105 20:29:32.501369 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="9047860d-2da5-476b-9340-38244322fb95" containerName="glance-log" containerID="cri-o://e3c82caeb13d5b2201dbea5bb4bf5d07fbf194feb2ec22a360be7d681501d9b8" gracePeriod=30 Jan 05 20:29:32 crc kubenswrapper[4754]: I0105 20:29:32.501455 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="9047860d-2da5-476b-9340-38244322fb95" containerName="glance-httpd" containerID="cri-o://0edd94ba41a347f0172d53761a6a79a528ca3c33bbf6bef6f31f6c7ac467566c" gracePeriod=30 Jan 05 20:29:32 crc kubenswrapper[4754]: I0105 20:29:32.528929 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/glance-default-external-api-0" podUID="9047860d-2da5-476b-9340-38244322fb95" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.192:9292/healthcheck\": EOF" Jan 05 20:29:32 crc kubenswrapper[4754]: I0105 20:29:32.529315 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="9047860d-2da5-476b-9340-38244322fb95" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.192:9292/healthcheck\": EOF" Jan 05 20:29:32 crc 
kubenswrapper[4754]: I0105 20:29:32.871041 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 05 20:29:33 crc kubenswrapper[4754]: I0105 20:29:33.523368 4754 generic.go:334] "Generic (PLEG): container finished" podID="9047860d-2da5-476b-9340-38244322fb95" containerID="e3c82caeb13d5b2201dbea5bb4bf5d07fbf194feb2ec22a360be7d681501d9b8" exitCode=143 Jan 05 20:29:33 crc kubenswrapper[4754]: I0105 20:29:33.523426 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9047860d-2da5-476b-9340-38244322fb95","Type":"ContainerDied","Data":"e3c82caeb13d5b2201dbea5bb4bf5d07fbf194feb2ec22a360be7d681501d9b8"} Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.086786 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-gfzxd"] Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.088910 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-gfzxd" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.097990 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-gfzxd"] Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.188629 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skdtq\" (UniqueName: \"kubernetes.io/projected/53d3e036-3e71-4b72-ae09-2d84f5b12b6d-kube-api-access-skdtq\") pod \"nova-api-db-create-gfzxd\" (UID: \"53d3e036-3e71-4b72-ae09-2d84f5b12b6d\") " pod="openstack/nova-api-db-create-gfzxd" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.188731 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53d3e036-3e71-4b72-ae09-2d84f5b12b6d-operator-scripts\") pod \"nova-api-db-create-gfzxd\" (UID: \"53d3e036-3e71-4b72-ae09-2d84f5b12b6d\") " 
pod="openstack/nova-api-db-create-gfzxd" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.193122 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-h42n5"] Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.196437 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-h42n5" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.230752 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-9214-account-create-update-qwl6t"] Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.232214 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9214-account-create-update-qwl6t" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.236917 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.260484 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-h42n5"] Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.291924 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53d3e036-3e71-4b72-ae09-2d84f5b12b6d-operator-scripts\") pod \"nova-api-db-create-gfzxd\" (UID: \"53d3e036-3e71-4b72-ae09-2d84f5b12b6d\") " pod="openstack/nova-api-db-create-gfzxd" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.292091 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxxvv\" (UniqueName: \"kubernetes.io/projected/8b1a8ee9-a244-4b25-acf5-4ab0de607c0d-kube-api-access-dxxvv\") pod \"nova-cell0-db-create-h42n5\" (UID: \"8b1a8ee9-a244-4b25-acf5-4ab0de607c0d\") " pod="openstack/nova-cell0-db-create-h42n5" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.292180 4754 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-skdtq\" (UniqueName: \"kubernetes.io/projected/53d3e036-3e71-4b72-ae09-2d84f5b12b6d-kube-api-access-skdtq\") pod \"nova-api-db-create-gfzxd\" (UID: \"53d3e036-3e71-4b72-ae09-2d84f5b12b6d\") " pod="openstack/nova-api-db-create-gfzxd" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.292224 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b1a8ee9-a244-4b25-acf5-4ab0de607c0d-operator-scripts\") pod \"nova-cell0-db-create-h42n5\" (UID: \"8b1a8ee9-a244-4b25-acf5-4ab0de607c0d\") " pod="openstack/nova-cell0-db-create-h42n5" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.293183 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53d3e036-3e71-4b72-ae09-2d84f5b12b6d-operator-scripts\") pod \"nova-api-db-create-gfzxd\" (UID: \"53d3e036-3e71-4b72-ae09-2d84f5b12b6d\") " pod="openstack/nova-api-db-create-gfzxd" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.309269 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-9214-account-create-update-qwl6t"] Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.319198 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skdtq\" (UniqueName: \"kubernetes.io/projected/53d3e036-3e71-4b72-ae09-2d84f5b12b6d-kube-api-access-skdtq\") pod \"nova-api-db-create-gfzxd\" (UID: \"53d3e036-3e71-4b72-ae09-2d84f5b12b6d\") " pod="openstack/nova-api-db-create-gfzxd" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.393925 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxxvv\" (UniqueName: \"kubernetes.io/projected/8b1a8ee9-a244-4b25-acf5-4ab0de607c0d-kube-api-access-dxxvv\") pod \"nova-cell0-db-create-h42n5\" (UID: 
\"8b1a8ee9-a244-4b25-acf5-4ab0de607c0d\") " pod="openstack/nova-cell0-db-create-h42n5" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.394669 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z65k9\" (UniqueName: \"kubernetes.io/projected/8c40bae7-8986-432e-8050-cec73db2bfdd-kube-api-access-z65k9\") pod \"nova-api-9214-account-create-update-qwl6t\" (UID: \"8c40bae7-8986-432e-8050-cec73db2bfdd\") " pod="openstack/nova-api-9214-account-create-update-qwl6t" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.394838 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c40bae7-8986-432e-8050-cec73db2bfdd-operator-scripts\") pod \"nova-api-9214-account-create-update-qwl6t\" (UID: \"8c40bae7-8986-432e-8050-cec73db2bfdd\") " pod="openstack/nova-api-9214-account-create-update-qwl6t" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.395193 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b1a8ee9-a244-4b25-acf5-4ab0de607c0d-operator-scripts\") pod \"nova-cell0-db-create-h42n5\" (UID: \"8b1a8ee9-a244-4b25-acf5-4ab0de607c0d\") " pod="openstack/nova-cell0-db-create-h42n5" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.395812 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b1a8ee9-a244-4b25-acf5-4ab0de607c0d-operator-scripts\") pod \"nova-cell0-db-create-h42n5\" (UID: \"8b1a8ee9-a244-4b25-acf5-4ab0de607c0d\") " pod="openstack/nova-cell0-db-create-h42n5" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.404377 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-kxzd4"] Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.406490 4754 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-kxzd4" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.428522 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxxvv\" (UniqueName: \"kubernetes.io/projected/8b1a8ee9-a244-4b25-acf5-4ab0de607c0d-kube-api-access-dxxvv\") pod \"nova-cell0-db-create-h42n5\" (UID: \"8b1a8ee9-a244-4b25-acf5-4ab0de607c0d\") " pod="openstack/nova-cell0-db-create-h42n5" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.438746 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-fe16-account-create-update-k6rtd"] Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.472562 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-fe16-account-create-update-k6rtd" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.478331 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.489840 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-gfzxd" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.538543 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-h42n5" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.540584 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z65k9\" (UniqueName: \"kubernetes.io/projected/8c40bae7-8986-432e-8050-cec73db2bfdd-kube-api-access-z65k9\") pod \"nova-api-9214-account-create-update-qwl6t\" (UID: \"8c40bae7-8986-432e-8050-cec73db2bfdd\") " pod="openstack/nova-api-9214-account-create-update-qwl6t" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.540648 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpkn8\" (UniqueName: \"kubernetes.io/projected/788477a1-9462-462e-ac96-5c6c5659437f-kube-api-access-wpkn8\") pod \"nova-cell1-db-create-kxzd4\" (UID: \"788477a1-9462-462e-ac96-5c6c5659437f\") " pod="openstack/nova-cell1-db-create-kxzd4" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.540686 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c40bae7-8986-432e-8050-cec73db2bfdd-operator-scripts\") pod \"nova-api-9214-account-create-update-qwl6t\" (UID: \"8c40bae7-8986-432e-8050-cec73db2bfdd\") " pod="openstack/nova-api-9214-account-create-update-qwl6t" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.540854 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/788477a1-9462-462e-ac96-5c6c5659437f-operator-scripts\") pod \"nova-cell1-db-create-kxzd4\" (UID: \"788477a1-9462-462e-ac96-5c6c5659437f\") " pod="openstack/nova-cell1-db-create-kxzd4" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.545730 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c40bae7-8986-432e-8050-cec73db2bfdd-operator-scripts\") pod 
\"nova-api-9214-account-create-update-qwl6t\" (UID: \"8c40bae7-8986-432e-8050-cec73db2bfdd\") " pod="openstack/nova-api-9214-account-create-update-qwl6t" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.572836 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-kxzd4"] Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.604667 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-fe16-account-create-update-k6rtd"] Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.667041 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z65k9\" (UniqueName: \"kubernetes.io/projected/8c40bae7-8986-432e-8050-cec73db2bfdd-kube-api-access-z65k9\") pod \"nova-api-9214-account-create-update-qwl6t\" (UID: \"8c40bae7-8986-432e-8050-cec73db2bfdd\") " pod="openstack/nova-api-9214-account-create-update-qwl6t" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.668449 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpg5c\" (UniqueName: \"kubernetes.io/projected/9b26ee17-b21e-4bb3-8570-1de5435a6ea5-kube-api-access-rpg5c\") pod \"nova-cell0-fe16-account-create-update-k6rtd\" (UID: \"9b26ee17-b21e-4bb3-8570-1de5435a6ea5\") " pod="openstack/nova-cell0-fe16-account-create-update-k6rtd" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.668684 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/788477a1-9462-462e-ac96-5c6c5659437f-operator-scripts\") pod \"nova-cell1-db-create-kxzd4\" (UID: \"788477a1-9462-462e-ac96-5c6c5659437f\") " pod="openstack/nova-cell1-db-create-kxzd4" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.668984 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/9b26ee17-b21e-4bb3-8570-1de5435a6ea5-operator-scripts\") pod \"nova-cell0-fe16-account-create-update-k6rtd\" (UID: \"9b26ee17-b21e-4bb3-8570-1de5435a6ea5\") " pod="openstack/nova-cell0-fe16-account-create-update-k6rtd" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.669136 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpkn8\" (UniqueName: \"kubernetes.io/projected/788477a1-9462-462e-ac96-5c6c5659437f-kube-api-access-wpkn8\") pod \"nova-cell1-db-create-kxzd4\" (UID: \"788477a1-9462-462e-ac96-5c6c5659437f\") " pod="openstack/nova-cell1-db-create-kxzd4" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.669458 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/788477a1-9462-462e-ac96-5c6c5659437f-operator-scripts\") pod \"nova-cell1-db-create-kxzd4\" (UID: \"788477a1-9462-462e-ac96-5c6c5659437f\") " pod="openstack/nova-cell1-db-create-kxzd4" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.751003 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpkn8\" (UniqueName: \"kubernetes.io/projected/788477a1-9462-462e-ac96-5c6c5659437f-kube-api-access-wpkn8\") pod \"nova-cell1-db-create-kxzd4\" (UID: \"788477a1-9462-462e-ac96-5c6c5659437f\") " pod="openstack/nova-cell1-db-create-kxzd4" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.772015 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpg5c\" (UniqueName: \"kubernetes.io/projected/9b26ee17-b21e-4bb3-8570-1de5435a6ea5-kube-api-access-rpg5c\") pod \"nova-cell0-fe16-account-create-update-k6rtd\" (UID: \"9b26ee17-b21e-4bb3-8570-1de5435a6ea5\") " pod="openstack/nova-cell0-fe16-account-create-update-k6rtd" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.772273 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b26ee17-b21e-4bb3-8570-1de5435a6ea5-operator-scripts\") pod \"nova-cell0-fe16-account-create-update-k6rtd\" (UID: \"9b26ee17-b21e-4bb3-8570-1de5435a6ea5\") " pod="openstack/nova-cell0-fe16-account-create-update-k6rtd" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.782105 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b26ee17-b21e-4bb3-8570-1de5435a6ea5-operator-scripts\") pod \"nova-cell0-fe16-account-create-update-k6rtd\" (UID: \"9b26ee17-b21e-4bb3-8570-1de5435a6ea5\") " pod="openstack/nova-cell0-fe16-account-create-update-k6rtd" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.842114 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpg5c\" (UniqueName: \"kubernetes.io/projected/9b26ee17-b21e-4bb3-8570-1de5435a6ea5-kube-api-access-rpg5c\") pod \"nova-cell0-fe16-account-create-update-k6rtd\" (UID: \"9b26ee17-b21e-4bb3-8570-1de5435a6ea5\") " pod="openstack/nova-cell0-fe16-account-create-update-k6rtd" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.842361 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-kxzd4" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.846427 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-78955f8dfd-95rvt"] Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.849046 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-78955f8dfd-95rvt" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.859908 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-5c4fdffcf-f4rzg"] Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.863671 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5c4fdffcf-f4rzg" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.874006 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad58f356-3e52-422b-89ba-0db520cce910-config-data\") pod \"heat-engine-78955f8dfd-95rvt\" (UID: \"ad58f356-3e52-422b-89ba-0db520cce910\") " pod="openstack/heat-engine-78955f8dfd-95rvt" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.874049 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71c4958d-a956-4009-8f00-e4f16cab1b6b-combined-ca-bundle\") pod \"heat-api-5c4fdffcf-f4rzg\" (UID: \"71c4958d-a956-4009-8f00-e4f16cab1b6b\") " pod="openstack/heat-api-5c4fdffcf-f4rzg" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.874072 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad58f356-3e52-422b-89ba-0db520cce910-combined-ca-bundle\") pod \"heat-engine-78955f8dfd-95rvt\" (UID: \"ad58f356-3e52-422b-89ba-0db520cce910\") " pod="openstack/heat-engine-78955f8dfd-95rvt" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.874109 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/71c4958d-a956-4009-8f00-e4f16cab1b6b-config-data-custom\") pod \"heat-api-5c4fdffcf-f4rzg\" (UID: \"71c4958d-a956-4009-8f00-e4f16cab1b6b\") " pod="openstack/heat-api-5c4fdffcf-f4rzg" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.874147 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71c4958d-a956-4009-8f00-e4f16cab1b6b-config-data\") pod \"heat-api-5c4fdffcf-f4rzg\" (UID: 
\"71c4958d-a956-4009-8f00-e4f16cab1b6b\") " pod="openstack/heat-api-5c4fdffcf-f4rzg" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.874181 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ad58f356-3e52-422b-89ba-0db520cce910-config-data-custom\") pod \"heat-engine-78955f8dfd-95rvt\" (UID: \"ad58f356-3e52-422b-89ba-0db520cce910\") " pod="openstack/heat-engine-78955f8dfd-95rvt" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.874207 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsfhz\" (UniqueName: \"kubernetes.io/projected/ad58f356-3e52-422b-89ba-0db520cce910-kube-api-access-vsfhz\") pod \"heat-engine-78955f8dfd-95rvt\" (UID: \"ad58f356-3e52-422b-89ba-0db520cce910\") " pod="openstack/heat-engine-78955f8dfd-95rvt" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.874277 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2xdz\" (UniqueName: \"kubernetes.io/projected/71c4958d-a956-4009-8f00-e4f16cab1b6b-kube-api-access-d2xdz\") pod \"heat-api-5c4fdffcf-f4rzg\" (UID: \"71c4958d-a956-4009-8f00-e4f16cab1b6b\") " pod="openstack/heat-api-5c4fdffcf-f4rzg" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.879324 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9214-account-create-update-qwl6t" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.911810 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-679bf7799d-5zzsg"] Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.913838 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-679bf7799d-5zzsg" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.945118 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-78955f8dfd-95rvt"] Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.965389 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-679bf7799d-5zzsg"] Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.977309 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad58f356-3e52-422b-89ba-0db520cce910-config-data\") pod \"heat-engine-78955f8dfd-95rvt\" (UID: \"ad58f356-3e52-422b-89ba-0db520cce910\") " pod="openstack/heat-engine-78955f8dfd-95rvt" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.977360 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a1b633c-0572-43e3-ab65-9ecf41447261-config-data-custom\") pod \"heat-cfnapi-679bf7799d-5zzsg\" (UID: \"4a1b633c-0572-43e3-ab65-9ecf41447261\") " pod="openstack/heat-cfnapi-679bf7799d-5zzsg" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.977380 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a1b633c-0572-43e3-ab65-9ecf41447261-config-data\") pod \"heat-cfnapi-679bf7799d-5zzsg\" (UID: \"4a1b633c-0572-43e3-ab65-9ecf41447261\") " pod="openstack/heat-cfnapi-679bf7799d-5zzsg" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.977401 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71c4958d-a956-4009-8f00-e4f16cab1b6b-combined-ca-bundle\") pod \"heat-api-5c4fdffcf-f4rzg\" (UID: \"71c4958d-a956-4009-8f00-e4f16cab1b6b\") " pod="openstack/heat-api-5c4fdffcf-f4rzg" Jan 05 20:29:36 crc 
kubenswrapper[4754]: I0105 20:29:36.977417 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad58f356-3e52-422b-89ba-0db520cce910-combined-ca-bundle\") pod \"heat-engine-78955f8dfd-95rvt\" (UID: \"ad58f356-3e52-422b-89ba-0db520cce910\") " pod="openstack/heat-engine-78955f8dfd-95rvt" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.977455 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/71c4958d-a956-4009-8f00-e4f16cab1b6b-config-data-custom\") pod \"heat-api-5c4fdffcf-f4rzg\" (UID: \"71c4958d-a956-4009-8f00-e4f16cab1b6b\") " pod="openstack/heat-api-5c4fdffcf-f4rzg" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.977497 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71c4958d-a956-4009-8f00-e4f16cab1b6b-config-data\") pod \"heat-api-5c4fdffcf-f4rzg\" (UID: \"71c4958d-a956-4009-8f00-e4f16cab1b6b\") " pod="openstack/heat-api-5c4fdffcf-f4rzg" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.977518 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7q48\" (UniqueName: \"kubernetes.io/projected/4a1b633c-0572-43e3-ab65-9ecf41447261-kube-api-access-j7q48\") pod \"heat-cfnapi-679bf7799d-5zzsg\" (UID: \"4a1b633c-0572-43e3-ab65-9ecf41447261\") " pod="openstack/heat-cfnapi-679bf7799d-5zzsg" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.977533 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a1b633c-0572-43e3-ab65-9ecf41447261-combined-ca-bundle\") pod \"heat-cfnapi-679bf7799d-5zzsg\" (UID: \"4a1b633c-0572-43e3-ab65-9ecf41447261\") " pod="openstack/heat-cfnapi-679bf7799d-5zzsg" Jan 05 20:29:36 crc 
kubenswrapper[4754]: I0105 20:29:36.977563 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ad58f356-3e52-422b-89ba-0db520cce910-config-data-custom\") pod \"heat-engine-78955f8dfd-95rvt\" (UID: \"ad58f356-3e52-422b-89ba-0db520cce910\") " pod="openstack/heat-engine-78955f8dfd-95rvt" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.977590 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsfhz\" (UniqueName: \"kubernetes.io/projected/ad58f356-3e52-422b-89ba-0db520cce910-kube-api-access-vsfhz\") pod \"heat-engine-78955f8dfd-95rvt\" (UID: \"ad58f356-3e52-422b-89ba-0db520cce910\") " pod="openstack/heat-engine-78955f8dfd-95rvt" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.977658 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2xdz\" (UniqueName: \"kubernetes.io/projected/71c4958d-a956-4009-8f00-e4f16cab1b6b-kube-api-access-d2xdz\") pod \"heat-api-5c4fdffcf-f4rzg\" (UID: \"71c4958d-a956-4009-8f00-e4f16cab1b6b\") " pod="openstack/heat-api-5c4fdffcf-f4rzg" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.993614 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71c4958d-a956-4009-8f00-e4f16cab1b6b-config-data\") pod \"heat-api-5c4fdffcf-f4rzg\" (UID: \"71c4958d-a956-4009-8f00-e4f16cab1b6b\") " pod="openstack/heat-api-5c4fdffcf-f4rzg" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.993671 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad58f356-3e52-422b-89ba-0db520cce910-config-data\") pod \"heat-engine-78955f8dfd-95rvt\" (UID: \"ad58f356-3e52-422b-89ba-0db520cce910\") " pod="openstack/heat-engine-78955f8dfd-95rvt" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.995932 4754 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad58f356-3e52-422b-89ba-0db520cce910-combined-ca-bundle\") pod \"heat-engine-78955f8dfd-95rvt\" (UID: \"ad58f356-3e52-422b-89ba-0db520cce910\") " pod="openstack/heat-engine-78955f8dfd-95rvt" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.996940 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71c4958d-a956-4009-8f00-e4f16cab1b6b-combined-ca-bundle\") pod \"heat-api-5c4fdffcf-f4rzg\" (UID: \"71c4958d-a956-4009-8f00-e4f16cab1b6b\") " pod="openstack/heat-api-5c4fdffcf-f4rzg" Jan 05 20:29:36 crc kubenswrapper[4754]: I0105 20:29:36.999108 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ad58f356-3e52-422b-89ba-0db520cce910-config-data-custom\") pod \"heat-engine-78955f8dfd-95rvt\" (UID: \"ad58f356-3e52-422b-89ba-0db520cce910\") " pod="openstack/heat-engine-78955f8dfd-95rvt" Jan 05 20:29:37 crc kubenswrapper[4754]: I0105 20:29:37.000250 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/71c4958d-a956-4009-8f00-e4f16cab1b6b-config-data-custom\") pod \"heat-api-5c4fdffcf-f4rzg\" (UID: \"71c4958d-a956-4009-8f00-e4f16cab1b6b\") " pod="openstack/heat-api-5c4fdffcf-f4rzg" Jan 05 20:29:37 crc kubenswrapper[4754]: I0105 20:29:37.004490 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-c9ef-account-create-update-zq77j"] Jan 05 20:29:37 crc kubenswrapper[4754]: I0105 20:29:37.005428 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsfhz\" (UniqueName: \"kubernetes.io/projected/ad58f356-3e52-422b-89ba-0db520cce910-kube-api-access-vsfhz\") pod \"heat-engine-78955f8dfd-95rvt\" (UID: \"ad58f356-3e52-422b-89ba-0db520cce910\") " 
pod="openstack/heat-engine-78955f8dfd-95rvt" Jan 05 20:29:37 crc kubenswrapper[4754]: I0105 20:29:37.006083 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-c9ef-account-create-update-zq77j" Jan 05 20:29:37 crc kubenswrapper[4754]: I0105 20:29:37.007812 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2xdz\" (UniqueName: \"kubernetes.io/projected/71c4958d-a956-4009-8f00-e4f16cab1b6b-kube-api-access-d2xdz\") pod \"heat-api-5c4fdffcf-f4rzg\" (UID: \"71c4958d-a956-4009-8f00-e4f16cab1b6b\") " pod="openstack/heat-api-5c4fdffcf-f4rzg" Jan 05 20:29:37 crc kubenswrapper[4754]: I0105 20:29:37.014484 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 05 20:29:37 crc kubenswrapper[4754]: I0105 20:29:37.029497 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5c4fdffcf-f4rzg"] Jan 05 20:29:37 crc kubenswrapper[4754]: I0105 20:29:37.045776 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-c9ef-account-create-update-zq77j"] Jan 05 20:29:37 crc kubenswrapper[4754]: I0105 20:29:37.079562 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a1b633c-0572-43e3-ab65-9ecf41447261-config-data-custom\") pod \"heat-cfnapi-679bf7799d-5zzsg\" (UID: \"4a1b633c-0572-43e3-ab65-9ecf41447261\") " pod="openstack/heat-cfnapi-679bf7799d-5zzsg" Jan 05 20:29:37 crc kubenswrapper[4754]: I0105 20:29:37.079609 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a1b633c-0572-43e3-ab65-9ecf41447261-config-data\") pod \"heat-cfnapi-679bf7799d-5zzsg\" (UID: \"4a1b633c-0572-43e3-ab65-9ecf41447261\") " pod="openstack/heat-cfnapi-679bf7799d-5zzsg" Jan 05 20:29:37 crc kubenswrapper[4754]: I0105 20:29:37.079685 4754 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a1b633c-0572-43e3-ab65-9ecf41447261-combined-ca-bundle\") pod \"heat-cfnapi-679bf7799d-5zzsg\" (UID: \"4a1b633c-0572-43e3-ab65-9ecf41447261\") " pod="openstack/heat-cfnapi-679bf7799d-5zzsg" Jan 05 20:29:37 crc kubenswrapper[4754]: I0105 20:29:37.079707 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7q48\" (UniqueName: \"kubernetes.io/projected/4a1b633c-0572-43e3-ab65-9ecf41447261-kube-api-access-j7q48\") pod \"heat-cfnapi-679bf7799d-5zzsg\" (UID: \"4a1b633c-0572-43e3-ab65-9ecf41447261\") " pod="openstack/heat-cfnapi-679bf7799d-5zzsg" Jan 05 20:29:37 crc kubenswrapper[4754]: I0105 20:29:37.079751 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2t8l\" (UniqueName: \"kubernetes.io/projected/773cb53b-3bb1-4b11-bb3d-c11c352fcdfb-kube-api-access-h2t8l\") pod \"nova-cell1-c9ef-account-create-update-zq77j\" (UID: \"773cb53b-3bb1-4b11-bb3d-c11c352fcdfb\") " pod="openstack/nova-cell1-c9ef-account-create-update-zq77j" Jan 05 20:29:37 crc kubenswrapper[4754]: I0105 20:29:37.079851 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/773cb53b-3bb1-4b11-bb3d-c11c352fcdfb-operator-scripts\") pod \"nova-cell1-c9ef-account-create-update-zq77j\" (UID: \"773cb53b-3bb1-4b11-bb3d-c11c352fcdfb\") " pod="openstack/nova-cell1-c9ef-account-create-update-zq77j" Jan 05 20:29:37 crc kubenswrapper[4754]: I0105 20:29:37.088327 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a1b633c-0572-43e3-ab65-9ecf41447261-config-data\") pod \"heat-cfnapi-679bf7799d-5zzsg\" (UID: \"4a1b633c-0572-43e3-ab65-9ecf41447261\") " pod="openstack/heat-cfnapi-679bf7799d-5zzsg" Jan 
05 20:29:37 crc kubenswrapper[4754]: I0105 20:29:37.089747 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a1b633c-0572-43e3-ab65-9ecf41447261-config-data-custom\") pod \"heat-cfnapi-679bf7799d-5zzsg\" (UID: \"4a1b633c-0572-43e3-ab65-9ecf41447261\") " pod="openstack/heat-cfnapi-679bf7799d-5zzsg" Jan 05 20:29:37 crc kubenswrapper[4754]: I0105 20:29:37.091264 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a1b633c-0572-43e3-ab65-9ecf41447261-combined-ca-bundle\") pod \"heat-cfnapi-679bf7799d-5zzsg\" (UID: \"4a1b633c-0572-43e3-ab65-9ecf41447261\") " pod="openstack/heat-cfnapi-679bf7799d-5zzsg" Jan 05 20:29:37 crc kubenswrapper[4754]: I0105 20:29:37.105134 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7q48\" (UniqueName: \"kubernetes.io/projected/4a1b633c-0572-43e3-ab65-9ecf41447261-kube-api-access-j7q48\") pod \"heat-cfnapi-679bf7799d-5zzsg\" (UID: \"4a1b633c-0572-43e3-ab65-9ecf41447261\") " pod="openstack/heat-cfnapi-679bf7799d-5zzsg" Jan 05 20:29:37 crc kubenswrapper[4754]: I0105 20:29:37.129710 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-fe16-account-create-update-k6rtd" Jan 05 20:29:37 crc kubenswrapper[4754]: I0105 20:29:37.181127 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2t8l\" (UniqueName: \"kubernetes.io/projected/773cb53b-3bb1-4b11-bb3d-c11c352fcdfb-kube-api-access-h2t8l\") pod \"nova-cell1-c9ef-account-create-update-zq77j\" (UID: \"773cb53b-3bb1-4b11-bb3d-c11c352fcdfb\") " pod="openstack/nova-cell1-c9ef-account-create-update-zq77j" Jan 05 20:29:37 crc kubenswrapper[4754]: I0105 20:29:37.181243 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/773cb53b-3bb1-4b11-bb3d-c11c352fcdfb-operator-scripts\") pod \"nova-cell1-c9ef-account-create-update-zq77j\" (UID: \"773cb53b-3bb1-4b11-bb3d-c11c352fcdfb\") " pod="openstack/nova-cell1-c9ef-account-create-update-zq77j" Jan 05 20:29:37 crc kubenswrapper[4754]: I0105 20:29:37.181983 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/773cb53b-3bb1-4b11-bb3d-c11c352fcdfb-operator-scripts\") pod \"nova-cell1-c9ef-account-create-update-zq77j\" (UID: \"773cb53b-3bb1-4b11-bb3d-c11c352fcdfb\") " pod="openstack/nova-cell1-c9ef-account-create-update-zq77j" Jan 05 20:29:37 crc kubenswrapper[4754]: I0105 20:29:37.185722 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-78955f8dfd-95rvt" Jan 05 20:29:37 crc kubenswrapper[4754]: I0105 20:29:37.198768 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5c4fdffcf-f4rzg" Jan 05 20:29:37 crc kubenswrapper[4754]: I0105 20:29:37.199538 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2t8l\" (UniqueName: \"kubernetes.io/projected/773cb53b-3bb1-4b11-bb3d-c11c352fcdfb-kube-api-access-h2t8l\") pod \"nova-cell1-c9ef-account-create-update-zq77j\" (UID: \"773cb53b-3bb1-4b11-bb3d-c11c352fcdfb\") " pod="openstack/nova-cell1-c9ef-account-create-update-zq77j" Jan 05 20:29:37 crc kubenswrapper[4754]: I0105 20:29:37.227531 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="84bba751-7ae7-4f46-9673-fb30b2ba2496" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.206:3000/\": dial tcp 10.217.0.206:3000: connect: connection refused" Jan 05 20:29:37 crc kubenswrapper[4754]: I0105 20:29:37.244682 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-679bf7799d-5zzsg" Jan 05 20:29:37 crc kubenswrapper[4754]: I0105 20:29:37.332175 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-c9ef-account-create-update-zq77j" Jan 05 20:29:38 crc kubenswrapper[4754]: I0105 20:29:38.005421 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6ff666bbf9-t252x" Jan 05 20:29:38 crc kubenswrapper[4754]: I0105 20:29:38.009364 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6ff666bbf9-t252x" Jan 05 20:29:38 crc kubenswrapper[4754]: I0105 20:29:38.934902 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5bc975666b-pjcfg"] Jan 05 20:29:38 crc kubenswrapper[4754]: I0105 20:29:38.948712 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6c8f698c95-f22ns"] Jan 05 20:29:38 crc kubenswrapper[4754]: I0105 20:29:38.950401 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6c8f698c95-f22ns" Jan 05 20:29:38 crc kubenswrapper[4754]: I0105 20:29:38.955954 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Jan 05 20:29:38 crc kubenswrapper[4754]: I0105 20:29:38.957267 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Jan 05 20:29:38 crc kubenswrapper[4754]: I0105 20:29:38.979410 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6c8f698c95-f22ns"] Jan 05 20:29:39 crc kubenswrapper[4754]: I0105 20:29:39.027610 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-5f447fcb5f-cgsxz"] Jan 05 20:29:39 crc kubenswrapper[4754]: I0105 20:29:39.047630 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-675955f8c4-92lhz"] Jan 05 20:29:39 crc kubenswrapper[4754]: I0105 20:29:39.049553 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-675955f8c4-92lhz" Jan 05 20:29:39 crc kubenswrapper[4754]: I0105 20:29:39.055333 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Jan 05 20:29:39 crc kubenswrapper[4754]: I0105 20:29:39.055610 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Jan 05 20:29:39 crc kubenswrapper[4754]: I0105 20:29:39.068642 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-675955f8c4-92lhz"] Jan 05 20:29:39 crc kubenswrapper[4754]: I0105 20:29:39.135438 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b-combined-ca-bundle\") pod \"heat-cfnapi-675955f8c4-92lhz\" (UID: \"f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b\") " pod="openstack/heat-cfnapi-675955f8c4-92lhz" Jan 05 20:29:39 crc kubenswrapper[4754]: I0105 20:29:39.135486 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b-public-tls-certs\") pod \"heat-cfnapi-675955f8c4-92lhz\" (UID: \"f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b\") " pod="openstack/heat-cfnapi-675955f8c4-92lhz" Jan 05 20:29:39 crc kubenswrapper[4754]: I0105 20:29:39.135507 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b-config-data-custom\") pod \"heat-cfnapi-675955f8c4-92lhz\" (UID: \"f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b\") " pod="openstack/heat-cfnapi-675955f8c4-92lhz" Jan 05 20:29:39 crc kubenswrapper[4754]: I0105 20:29:39.135523 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-thst4\" (UniqueName: \"kubernetes.io/projected/2febfb57-3d29-4415-bcae-1295fbf07b70-kube-api-access-thst4\") pod \"heat-api-6c8f698c95-f22ns\" (UID: \"2febfb57-3d29-4415-bcae-1295fbf07b70\") " pod="openstack/heat-api-6c8f698c95-f22ns" Jan 05 20:29:39 crc kubenswrapper[4754]: I0105 20:29:39.135548 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2febfb57-3d29-4415-bcae-1295fbf07b70-public-tls-certs\") pod \"heat-api-6c8f698c95-f22ns\" (UID: \"2febfb57-3d29-4415-bcae-1295fbf07b70\") " pod="openstack/heat-api-6c8f698c95-f22ns" Jan 05 20:29:39 crc kubenswrapper[4754]: I0105 20:29:39.135565 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2febfb57-3d29-4415-bcae-1295fbf07b70-combined-ca-bundle\") pod \"heat-api-6c8f698c95-f22ns\" (UID: \"2febfb57-3d29-4415-bcae-1295fbf07b70\") " pod="openstack/heat-api-6c8f698c95-f22ns" Jan 05 20:29:39 crc kubenswrapper[4754]: I0105 20:29:39.135585 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2febfb57-3d29-4415-bcae-1295fbf07b70-internal-tls-certs\") pod \"heat-api-6c8f698c95-f22ns\" (UID: \"2febfb57-3d29-4415-bcae-1295fbf07b70\") " pod="openstack/heat-api-6c8f698c95-f22ns" Jan 05 20:29:39 crc kubenswrapper[4754]: I0105 20:29:39.135614 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sdkw\" (UniqueName: \"kubernetes.io/projected/f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b-kube-api-access-8sdkw\") pod \"heat-cfnapi-675955f8c4-92lhz\" (UID: \"f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b\") " pod="openstack/heat-cfnapi-675955f8c4-92lhz" Jan 05 20:29:39 crc kubenswrapper[4754]: I0105 20:29:39.135634 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2febfb57-3d29-4415-bcae-1295fbf07b70-config-data\") pod \"heat-api-6c8f698c95-f22ns\" (UID: \"2febfb57-3d29-4415-bcae-1295fbf07b70\") " pod="openstack/heat-api-6c8f698c95-f22ns" Jan 05 20:29:39 crc kubenswrapper[4754]: I0105 20:29:39.135653 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2febfb57-3d29-4415-bcae-1295fbf07b70-config-data-custom\") pod \"heat-api-6c8f698c95-f22ns\" (UID: \"2febfb57-3d29-4415-bcae-1295fbf07b70\") " pod="openstack/heat-api-6c8f698c95-f22ns" Jan 05 20:29:39 crc kubenswrapper[4754]: I0105 20:29:39.135673 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b-config-data\") pod \"heat-cfnapi-675955f8c4-92lhz\" (UID: \"f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b\") " pod="openstack/heat-cfnapi-675955f8c4-92lhz" Jan 05 20:29:39 crc kubenswrapper[4754]: I0105 20:29:39.135805 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b-internal-tls-certs\") pod \"heat-cfnapi-675955f8c4-92lhz\" (UID: \"f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b\") " pod="openstack/heat-cfnapi-675955f8c4-92lhz" Jan 05 20:29:39 crc kubenswrapper[4754]: I0105 20:29:39.238301 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b-internal-tls-certs\") pod \"heat-cfnapi-675955f8c4-92lhz\" (UID: \"f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b\") " pod="openstack/heat-cfnapi-675955f8c4-92lhz" Jan 05 20:29:39 crc kubenswrapper[4754]: I0105 20:29:39.238370 4754 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b-combined-ca-bundle\") pod \"heat-cfnapi-675955f8c4-92lhz\" (UID: \"f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b\") " pod="openstack/heat-cfnapi-675955f8c4-92lhz" Jan 05 20:29:39 crc kubenswrapper[4754]: I0105 20:29:39.238401 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b-public-tls-certs\") pod \"heat-cfnapi-675955f8c4-92lhz\" (UID: \"f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b\") " pod="openstack/heat-cfnapi-675955f8c4-92lhz" Jan 05 20:29:39 crc kubenswrapper[4754]: I0105 20:29:39.238420 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thst4\" (UniqueName: \"kubernetes.io/projected/2febfb57-3d29-4415-bcae-1295fbf07b70-kube-api-access-thst4\") pod \"heat-api-6c8f698c95-f22ns\" (UID: \"2febfb57-3d29-4415-bcae-1295fbf07b70\") " pod="openstack/heat-api-6c8f698c95-f22ns" Jan 05 20:29:39 crc kubenswrapper[4754]: I0105 20:29:39.238437 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b-config-data-custom\") pod \"heat-cfnapi-675955f8c4-92lhz\" (UID: \"f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b\") " pod="openstack/heat-cfnapi-675955f8c4-92lhz" Jan 05 20:29:39 crc kubenswrapper[4754]: I0105 20:29:39.238461 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2febfb57-3d29-4415-bcae-1295fbf07b70-public-tls-certs\") pod \"heat-api-6c8f698c95-f22ns\" (UID: \"2febfb57-3d29-4415-bcae-1295fbf07b70\") " pod="openstack/heat-api-6c8f698c95-f22ns" Jan 05 20:29:39 crc kubenswrapper[4754]: I0105 20:29:39.238480 4754 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2febfb57-3d29-4415-bcae-1295fbf07b70-combined-ca-bundle\") pod \"heat-api-6c8f698c95-f22ns\" (UID: \"2febfb57-3d29-4415-bcae-1295fbf07b70\") " pod="openstack/heat-api-6c8f698c95-f22ns" Jan 05 20:29:39 crc kubenswrapper[4754]: I0105 20:29:39.238502 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2febfb57-3d29-4415-bcae-1295fbf07b70-internal-tls-certs\") pod \"heat-api-6c8f698c95-f22ns\" (UID: \"2febfb57-3d29-4415-bcae-1295fbf07b70\") " pod="openstack/heat-api-6c8f698c95-f22ns" Jan 05 20:29:39 crc kubenswrapper[4754]: I0105 20:29:39.238534 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sdkw\" (UniqueName: \"kubernetes.io/projected/f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b-kube-api-access-8sdkw\") pod \"heat-cfnapi-675955f8c4-92lhz\" (UID: \"f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b\") " pod="openstack/heat-cfnapi-675955f8c4-92lhz" Jan 05 20:29:39 crc kubenswrapper[4754]: I0105 20:29:39.238553 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2febfb57-3d29-4415-bcae-1295fbf07b70-config-data\") pod \"heat-api-6c8f698c95-f22ns\" (UID: \"2febfb57-3d29-4415-bcae-1295fbf07b70\") " pod="openstack/heat-api-6c8f698c95-f22ns" Jan 05 20:29:39 crc kubenswrapper[4754]: I0105 20:29:39.238574 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2febfb57-3d29-4415-bcae-1295fbf07b70-config-data-custom\") pod \"heat-api-6c8f698c95-f22ns\" (UID: \"2febfb57-3d29-4415-bcae-1295fbf07b70\") " pod="openstack/heat-api-6c8f698c95-f22ns" Jan 05 20:29:39 crc kubenswrapper[4754]: I0105 20:29:39.238595 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b-config-data\") pod \"heat-cfnapi-675955f8c4-92lhz\" (UID: \"f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b\") " pod="openstack/heat-cfnapi-675955f8c4-92lhz" Jan 05 20:29:39 crc kubenswrapper[4754]: I0105 20:29:39.245846 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2febfb57-3d29-4415-bcae-1295fbf07b70-public-tls-certs\") pod \"heat-api-6c8f698c95-f22ns\" (UID: \"2febfb57-3d29-4415-bcae-1295fbf07b70\") " pod="openstack/heat-api-6c8f698c95-f22ns" Jan 05 20:29:39 crc kubenswrapper[4754]: I0105 20:29:39.246664 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2febfb57-3d29-4415-bcae-1295fbf07b70-config-data\") pod \"heat-api-6c8f698c95-f22ns\" (UID: \"2febfb57-3d29-4415-bcae-1295fbf07b70\") " pod="openstack/heat-api-6c8f698c95-f22ns" Jan 05 20:29:39 crc kubenswrapper[4754]: I0105 20:29:39.249209 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b-config-data\") pod \"heat-cfnapi-675955f8c4-92lhz\" (UID: \"f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b\") " pod="openstack/heat-cfnapi-675955f8c4-92lhz" Jan 05 20:29:39 crc kubenswrapper[4754]: I0105 20:29:39.251208 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2febfb57-3d29-4415-bcae-1295fbf07b70-combined-ca-bundle\") pod \"heat-api-6c8f698c95-f22ns\" (UID: \"2febfb57-3d29-4415-bcae-1295fbf07b70\") " pod="openstack/heat-api-6c8f698c95-f22ns" Jan 05 20:29:39 crc kubenswrapper[4754]: I0105 20:29:39.251252 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b-internal-tls-certs\") pod 
\"heat-cfnapi-675955f8c4-92lhz\" (UID: \"f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b\") " pod="openstack/heat-cfnapi-675955f8c4-92lhz" Jan 05 20:29:39 crc kubenswrapper[4754]: I0105 20:29:39.252236 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b-combined-ca-bundle\") pod \"heat-cfnapi-675955f8c4-92lhz\" (UID: \"f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b\") " pod="openstack/heat-cfnapi-675955f8c4-92lhz" Jan 05 20:29:39 crc kubenswrapper[4754]: I0105 20:29:39.253326 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b-public-tls-certs\") pod \"heat-cfnapi-675955f8c4-92lhz\" (UID: \"f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b\") " pod="openstack/heat-cfnapi-675955f8c4-92lhz" Jan 05 20:29:39 crc kubenswrapper[4754]: I0105 20:29:39.251222 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2febfb57-3d29-4415-bcae-1295fbf07b70-config-data-custom\") pod \"heat-api-6c8f698c95-f22ns\" (UID: \"2febfb57-3d29-4415-bcae-1295fbf07b70\") " pod="openstack/heat-api-6c8f698c95-f22ns" Jan 05 20:29:39 crc kubenswrapper[4754]: I0105 20:29:39.254809 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2febfb57-3d29-4415-bcae-1295fbf07b70-internal-tls-certs\") pod \"heat-api-6c8f698c95-f22ns\" (UID: \"2febfb57-3d29-4415-bcae-1295fbf07b70\") " pod="openstack/heat-api-6c8f698c95-f22ns" Jan 05 20:29:39 crc kubenswrapper[4754]: I0105 20:29:39.256138 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b-config-data-custom\") pod \"heat-cfnapi-675955f8c4-92lhz\" (UID: \"f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b\") " 
pod="openstack/heat-cfnapi-675955f8c4-92lhz" Jan 05 20:29:39 crc kubenswrapper[4754]: I0105 20:29:39.261858 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thst4\" (UniqueName: \"kubernetes.io/projected/2febfb57-3d29-4415-bcae-1295fbf07b70-kube-api-access-thst4\") pod \"heat-api-6c8f698c95-f22ns\" (UID: \"2febfb57-3d29-4415-bcae-1295fbf07b70\") " pod="openstack/heat-api-6c8f698c95-f22ns" Jan 05 20:29:39 crc kubenswrapper[4754]: I0105 20:29:39.261980 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sdkw\" (UniqueName: \"kubernetes.io/projected/f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b-kube-api-access-8sdkw\") pod \"heat-cfnapi-675955f8c4-92lhz\" (UID: \"f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b\") " pod="openstack/heat-cfnapi-675955f8c4-92lhz" Jan 05 20:29:39 crc kubenswrapper[4754]: I0105 20:29:39.277829 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6c8f698c95-f22ns" Jan 05 20:29:39 crc kubenswrapper[4754]: I0105 20:29:39.370545 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-675955f8c4-92lhz" Jan 05 20:29:39 crc kubenswrapper[4754]: E0105 20:29:39.458713 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" Jan 05 20:29:39 crc kubenswrapper[4754]: E0105 20:29:39.458911 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5d5h649h696hbhdfh655h654hd7h7h684h5bfh67fh5d6h5b7h88h5c6h55ch65chchf6h6h558h5fdh55h9dh5bch59dh554h66bh59bh94h547q,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_CA_CERT,Value:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:
nil,},VolumeMount{Name:kube-api-access-c72n2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(91975736-d23e-479b-bd43-68b9b1b3e450): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 05 20:29:39 crc kubenswrapper[4754]: E0105 20:29:39.460090 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="91975736-d23e-479b-bd43-68b9b1b3e450" Jan 05 20:29:39 crc kubenswrapper[4754]: W0105 20:29:39.460260 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod081c0364_d64a_4977_b1ac_0fcb2bdc5bab.slice/crio-dd8a8adb04752bb2482089147cc5800771d4fbe0623c86e34f5eeeb30bd8bd9d WatchSource:0}: Error finding container dd8a8adb04752bb2482089147cc5800771d4fbe0623c86e34f5eeeb30bd8bd9d: Status 404 returned error can't find the container with id dd8a8adb04752bb2482089147cc5800771d4fbe0623c86e34f5eeeb30bd8bd9d Jan 05 20:29:39 crc kubenswrapper[4754]: W0105 20:29:39.479549 4754 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod215c4148_c55a_49d8_8b2e_301cc8912519.slice/crio-c67f2ff1363abe39b084e4fde8c5c27f6e9fef1c643b5cbb32ce6f91f9ae0db1 WatchSource:0}: Error finding container c67f2ff1363abe39b084e4fde8c5c27f6e9fef1c643b5cbb32ce6f91f9ae0db1: Status 404 returned error can't find the container with id c67f2ff1363abe39b084e4fde8c5c27f6e9fef1c643b5cbb32ce6f91f9ae0db1 Jan 05 20:29:39 crc kubenswrapper[4754]: W0105 20:29:39.486091 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb5f31f4_24b0_4840_8a8e_04ee35cea4ed.slice/crio-f9d15fcbf84fce28b73d175956f7c608be765810a31368b606d32506b732375d WatchSource:0}: Error finding container f9d15fcbf84fce28b73d175956f7c608be765810a31368b606d32506b732375d: Status 404 returned error can't find the container with id f9d15fcbf84fce28b73d175956f7c608be765810a31368b606d32506b732375d Jan 05 20:29:39 crc kubenswrapper[4754]: W0105 20:29:39.487931 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42a612ab_0883_4f35_b15e_f937f1f2de36.slice/crio-f46c8a1f4792bb0265b536f2c73b206415f3523ceb643459fe16be4ab1c91809 WatchSource:0}: Error finding container f46c8a1f4792bb0265b536f2c73b206415f3523ceb643459fe16be4ab1c91809: Status 404 returned error can't find the container with id f46c8a1f4792bb0265b536f2c73b206415f3523ceb643459fe16be4ab1c91809 Jan 05 20:29:39 crc kubenswrapper[4754]: I0105 20:29:39.682526 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-cfdcfcf78-fd4fw" event={"ID":"081c0364-d64a-4977-b1ac-0fcb2bdc5bab","Type":"ContainerStarted","Data":"dd8a8adb04752bb2482089147cc5800771d4fbe0623c86e34f5eeeb30bd8bd9d"} Jan 05 20:29:39 crc kubenswrapper[4754]: I0105 20:29:39.715514 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5f447fcb5f-cgsxz" 
event={"ID":"215c4148-c55a-49d8-8b2e-301cc8912519","Type":"ContainerStarted","Data":"c67f2ff1363abe39b084e4fde8c5c27f6e9fef1c643b5cbb32ce6f91f9ae0db1"} Jan 05 20:29:39 crc kubenswrapper[4754]: I0105 20:29:39.742459 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5bc975666b-pjcfg" event={"ID":"fb5f31f4-24b0-4840-8a8e-04ee35cea4ed","Type":"ContainerStarted","Data":"f9d15fcbf84fce28b73d175956f7c608be765810a31368b606d32506b732375d"} Jan 05 20:29:39 crc kubenswrapper[4754]: I0105 20:29:39.800560 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-p7xv4" event={"ID":"42a612ab-0883-4f35-b15e-f937f1f2de36","Type":"ContainerStarted","Data":"f46c8a1f4792bb0265b536f2c73b206415f3523ceb643459fe16be4ab1c91809"} Jan 05 20:29:39 crc kubenswrapper[4754]: I0105 20:29:39.844529 4754 generic.go:334] "Generic (PLEG): container finished" podID="9047860d-2da5-476b-9340-38244322fb95" containerID="0edd94ba41a347f0172d53761a6a79a528ca3c33bbf6bef6f31f6c7ac467566c" exitCode=0 Jan 05 20:29:39 crc kubenswrapper[4754]: I0105 20:29:39.844633 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9047860d-2da5-476b-9340-38244322fb95","Type":"ContainerDied","Data":"0edd94ba41a347f0172d53761a6a79a528ca3c33bbf6bef6f31f6c7ac467566c"} Jan 05 20:29:39 crc kubenswrapper[4754]: E0105 20:29:39.860129 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" podUID="91975736-d23e-479b-bd43-68b9b1b3e450" Jan 05 20:29:40 crc kubenswrapper[4754]: I0105 20:29:40.866549 4754 generic.go:334] "Generic (PLEG): container finished" podID="42a612ab-0883-4f35-b15e-f937f1f2de36" containerID="c9dcaff57c3d42a7e9cb3935e1ef9004f8ab48613135105a010dc2c97e7610f4" exitCode=0 Jan 
05 20:29:40 crc kubenswrapper[4754]: I0105 20:29:40.867903 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-p7xv4" event={"ID":"42a612ab-0883-4f35-b15e-f937f1f2de36","Type":"ContainerDied","Data":"c9dcaff57c3d42a7e9cb3935e1ef9004f8ab48613135105a010dc2c97e7610f4"} Jan 05 20:29:40 crc kubenswrapper[4754]: I0105 20:29:40.901950 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-cfdcfcf78-fd4fw" event={"ID":"081c0364-d64a-4977-b1ac-0fcb2bdc5bab","Type":"ContainerStarted","Data":"9b9b025e8816c5ebd6a8dda065b81f0a731c993e87d3be9cc981be60bcd99c88"} Jan 05 20:29:40 crc kubenswrapper[4754]: I0105 20:29:40.903643 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-cfdcfcf78-fd4fw" Jan 05 20:29:40 crc kubenswrapper[4754]: I0105 20:29:40.927743 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"84bba751-7ae7-4f46-9673-fb30b2ba2496","Type":"ContainerDied","Data":"2fd48ee4c8e9e08034c2cd4af5ae03ccd126f43eb84a6a26481256bd9d58781a"} Jan 05 20:29:40 crc kubenswrapper[4754]: I0105 20:29:40.927799 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fd48ee4c8e9e08034c2cd4af5ae03ccd126f43eb84a6a26481256bd9d58781a" Jan 05 20:29:40 crc kubenswrapper[4754]: I0105 20:29:40.932448 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-cfdcfcf78-fd4fw" podStartSLOduration=12.932415991 podStartE2EDuration="12.932415991s" podCreationTimestamp="2026-01-05 20:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:29:40.924711439 +0000 UTC m=+1467.633895313" watchObservedRunningTime="2026-01-05 20:29:40.932415991 +0000 UTC m=+1467.641599855" Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.024024 4754 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/ceilometer-0" Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.085425 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.163993 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9047860d-2da5-476b-9340-38244322fb95-logs\") pod \"9047860d-2da5-476b-9340-38244322fb95\" (UID: \"9047860d-2da5-476b-9340-38244322fb95\") " Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.164482 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e52a243-20bd-4eec-aa78-a75239804279\") pod \"9047860d-2da5-476b-9340-38244322fb95\" (UID: \"9047860d-2da5-476b-9340-38244322fb95\") " Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.164544 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/84bba751-7ae7-4f46-9673-fb30b2ba2496-log-httpd\") pod \"84bba751-7ae7-4f46-9673-fb30b2ba2496\" (UID: \"84bba751-7ae7-4f46-9673-fb30b2ba2496\") " Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.164564 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/84bba751-7ae7-4f46-9673-fb30b2ba2496-sg-core-conf-yaml\") pod \"84bba751-7ae7-4f46-9673-fb30b2ba2496\" (UID: \"84bba751-7ae7-4f46-9673-fb30b2ba2496\") " Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.164607 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84bba751-7ae7-4f46-9673-fb30b2ba2496-config-data\") pod \"84bba751-7ae7-4f46-9673-fb30b2ba2496\" (UID: \"84bba751-7ae7-4f46-9673-fb30b2ba2496\") " Jan 
05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.164636 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9047860d-2da5-476b-9340-38244322fb95-httpd-run\") pod \"9047860d-2da5-476b-9340-38244322fb95\" (UID: \"9047860d-2da5-476b-9340-38244322fb95\") " Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.164669 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwjwm\" (UniqueName: \"kubernetes.io/projected/84bba751-7ae7-4f46-9673-fb30b2ba2496-kube-api-access-qwjwm\") pod \"84bba751-7ae7-4f46-9673-fb30b2ba2496\" (UID: \"84bba751-7ae7-4f46-9673-fb30b2ba2496\") " Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.164733 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84bba751-7ae7-4f46-9673-fb30b2ba2496-combined-ca-bundle\") pod \"84bba751-7ae7-4f46-9673-fb30b2ba2496\" (UID: \"84bba751-7ae7-4f46-9673-fb30b2ba2496\") " Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.164801 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9047860d-2da5-476b-9340-38244322fb95-config-data\") pod \"9047860d-2da5-476b-9340-38244322fb95\" (UID: \"9047860d-2da5-476b-9340-38244322fb95\") " Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.164835 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krbkb\" (UniqueName: \"kubernetes.io/projected/9047860d-2da5-476b-9340-38244322fb95-kube-api-access-krbkb\") pod \"9047860d-2da5-476b-9340-38244322fb95\" (UID: \"9047860d-2da5-476b-9340-38244322fb95\") " Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.164849 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/84bba751-7ae7-4f46-9673-fb30b2ba2496-scripts\") pod \"84bba751-7ae7-4f46-9673-fb30b2ba2496\" (UID: \"84bba751-7ae7-4f46-9673-fb30b2ba2496\") " Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.164874 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/84bba751-7ae7-4f46-9673-fb30b2ba2496-run-httpd\") pod \"84bba751-7ae7-4f46-9673-fb30b2ba2496\" (UID: \"84bba751-7ae7-4f46-9673-fb30b2ba2496\") " Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.164942 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9047860d-2da5-476b-9340-38244322fb95-scripts\") pod \"9047860d-2da5-476b-9340-38244322fb95\" (UID: \"9047860d-2da5-476b-9340-38244322fb95\") " Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.164976 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9047860d-2da5-476b-9340-38244322fb95-public-tls-certs\") pod \"9047860d-2da5-476b-9340-38244322fb95\" (UID: \"9047860d-2da5-476b-9340-38244322fb95\") " Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.165004 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9047860d-2da5-476b-9340-38244322fb95-combined-ca-bundle\") pod \"9047860d-2da5-476b-9340-38244322fb95\" (UID: \"9047860d-2da5-476b-9340-38244322fb95\") " Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.165172 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84bba751-7ae7-4f46-9673-fb30b2ba2496-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "84bba751-7ae7-4f46-9673-fb30b2ba2496" (UID: "84bba751-7ae7-4f46-9673-fb30b2ba2496"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.177604 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84bba751-7ae7-4f46-9673-fb30b2ba2496-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "84bba751-7ae7-4f46-9673-fb30b2ba2496" (UID: "84bba751-7ae7-4f46-9673-fb30b2ba2496"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.177852 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9047860d-2da5-476b-9340-38244322fb95-logs" (OuterVolumeSpecName: "logs") pod "9047860d-2da5-476b-9340-38244322fb95" (UID: "9047860d-2da5-476b-9340-38244322fb95"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.178310 4754 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/84bba751-7ae7-4f46-9673-fb30b2ba2496-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.180801 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84bba751-7ae7-4f46-9673-fb30b2ba2496-kube-api-access-qwjwm" (OuterVolumeSpecName: "kube-api-access-qwjwm") pod "84bba751-7ae7-4f46-9673-fb30b2ba2496" (UID: "84bba751-7ae7-4f46-9673-fb30b2ba2496"). InnerVolumeSpecName "kube-api-access-qwjwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.182660 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9047860d-2da5-476b-9340-38244322fb95-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "9047860d-2da5-476b-9340-38244322fb95" (UID: "9047860d-2da5-476b-9340-38244322fb95"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.184442 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84bba751-7ae7-4f46-9673-fb30b2ba2496-scripts" (OuterVolumeSpecName: "scripts") pod "84bba751-7ae7-4f46-9673-fb30b2ba2496" (UID: "84bba751-7ae7-4f46-9673-fb30b2ba2496"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.201882 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9047860d-2da5-476b-9340-38244322fb95-kube-api-access-krbkb" (OuterVolumeSpecName: "kube-api-access-krbkb") pod "9047860d-2da5-476b-9340-38244322fb95" (UID: "9047860d-2da5-476b-9340-38244322fb95"). InnerVolumeSpecName "kube-api-access-krbkb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.212743 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9047860d-2da5-476b-9340-38244322fb95-scripts" (OuterVolumeSpecName: "scripts") pod "9047860d-2da5-476b-9340-38244322fb95" (UID: "9047860d-2da5-476b-9340-38244322fb95"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.262928 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e52a243-20bd-4eec-aa78-a75239804279" (OuterVolumeSpecName: "glance") pod "9047860d-2da5-476b-9340-38244322fb95" (UID: "9047860d-2da5-476b-9340-38244322fb95"). InnerVolumeSpecName "pvc-5e52a243-20bd-4eec-aa78-a75239804279". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.281518 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84bba751-7ae7-4f46-9673-fb30b2ba2496-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "84bba751-7ae7-4f46-9673-fb30b2ba2496" (UID: "84bba751-7ae7-4f46-9673-fb30b2ba2496"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.298816 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/84bba751-7ae7-4f46-9673-fb30b2ba2496-sg-core-conf-yaml\") pod \"84bba751-7ae7-4f46-9673-fb30b2ba2496\" (UID: \"84bba751-7ae7-4f46-9673-fb30b2ba2496\") " Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.300376 4754 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9047860d-2da5-476b-9340-38244322fb95-logs\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.300425 4754 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-5e52a243-20bd-4eec-aa78-a75239804279\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e52a243-20bd-4eec-aa78-a75239804279\") on node \"crc\" " Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.300443 4754 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9047860d-2da5-476b-9340-38244322fb95-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.300459 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwjwm\" (UniqueName: \"kubernetes.io/projected/84bba751-7ae7-4f46-9673-fb30b2ba2496-kube-api-access-qwjwm\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:41 crc kubenswrapper[4754]: 
I0105 20:29:41.300473 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krbkb\" (UniqueName: \"kubernetes.io/projected/9047860d-2da5-476b-9340-38244322fb95-kube-api-access-krbkb\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.300484 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84bba751-7ae7-4f46-9673-fb30b2ba2496-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.300497 4754 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/84bba751-7ae7-4f46-9673-fb30b2ba2496-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.300509 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9047860d-2da5-476b-9340-38244322fb95-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:41 crc kubenswrapper[4754]: W0105 20:29:41.301164 4754 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/84bba751-7ae7-4f46-9673-fb30b2ba2496/volumes/kubernetes.io~secret/sg-core-conf-yaml Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.301199 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84bba751-7ae7-4f46-9673-fb30b2ba2496-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "84bba751-7ae7-4f46-9673-fb30b2ba2496" (UID: "84bba751-7ae7-4f46-9673-fb30b2ba2496"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.340338 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9047860d-2da5-476b-9340-38244322fb95-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9047860d-2da5-476b-9340-38244322fb95" (UID: "9047860d-2da5-476b-9340-38244322fb95"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.350810 4754 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.350952 4754 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-5e52a243-20bd-4eec-aa78-a75239804279" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e52a243-20bd-4eec-aa78-a75239804279") on node "crc" Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.351037 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9047860d-2da5-476b-9340-38244322fb95-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9047860d-2da5-476b-9340-38244322fb95" (UID: "9047860d-2da5-476b-9340-38244322fb95"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.408004 4754 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9047860d-2da5-476b-9340-38244322fb95-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.408035 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9047860d-2da5-476b-9340-38244322fb95-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.408046 4754 reconciler_common.go:293] "Volume detached for volume \"pvc-5e52a243-20bd-4eec-aa78-a75239804279\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e52a243-20bd-4eec-aa78-a75239804279\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.408060 4754 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/84bba751-7ae7-4f46-9673-fb30b2ba2496-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.489810 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84bba751-7ae7-4f46-9673-fb30b2ba2496-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "84bba751-7ae7-4f46-9673-fb30b2ba2496" (UID: "84bba751-7ae7-4f46-9673-fb30b2ba2496"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.495846 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9047860d-2da5-476b-9340-38244322fb95-config-data" (OuterVolumeSpecName: "config-data") pod "9047860d-2da5-476b-9340-38244322fb95" (UID: "9047860d-2da5-476b-9340-38244322fb95"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.510893 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84bba751-7ae7-4f46-9673-fb30b2ba2496-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.510926 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9047860d-2da5-476b-9340-38244322fb95-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.538210 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84bba751-7ae7-4f46-9673-fb30b2ba2496-config-data" (OuterVolumeSpecName: "config-data") pod "84bba751-7ae7-4f46-9673-fb30b2ba2496" (UID: "84bba751-7ae7-4f46-9673-fb30b2ba2496"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.613132 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84bba751-7ae7-4f46-9673-fb30b2ba2496-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.616520 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-78955f8dfd-95rvt"] Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.624086 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-c9ef-account-create-update-zq77j"] Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.634833 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-gfzxd"] Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.669526 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5c4fdffcf-f4rzg"] Jan 05 20:29:41 crc kubenswrapper[4754]: W0105 20:29:41.676141 
4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod773cb53b_3bb1_4b11_bb3d_c11c352fcdfb.slice/crio-463297d035ec82856cd8cba7fd291008b3b79fa039d0d2428af2960b01e1d9b5 WatchSource:0}: Error finding container 463297d035ec82856cd8cba7fd291008b3b79fa039d0d2428af2960b01e1d9b5: Status 404 returned error can't find the container with id 463297d035ec82856cd8cba7fd291008b3b79fa039d0d2428af2960b01e1d9b5 Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.957403 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9047860d-2da5-476b-9340-38244322fb95","Type":"ContainerDied","Data":"a4b1d80274944e9b04a59521277af91bca0fe79633d45231e3524c9ca5f87cff"} Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.957810 4754 scope.go:117] "RemoveContainer" containerID="0edd94ba41a347f0172d53761a6a79a528ca3c33bbf6bef6f31f6c7ac467566c" Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.957667 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.964355 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-gfzxd" event={"ID":"53d3e036-3e71-4b72-ae09-2d84f5b12b6d","Type":"ContainerStarted","Data":"431c87c790bd70542340896b9c26b133f84ef57157a34419e9b5209c3574c89f"} Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.966958 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5c4fdffcf-f4rzg" event={"ID":"71c4958d-a956-4009-8f00-e4f16cab1b6b","Type":"ContainerStarted","Data":"6c65b7e203f23ae639339f8b26f2ab30f95ace012804727ef306b0e341f44904"} Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.970624 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c9ef-account-create-update-zq77j" event={"ID":"773cb53b-3bb1-4b11-bb3d-c11c352fcdfb","Type":"ContainerStarted","Data":"463297d035ec82856cd8cba7fd291008b3b79fa039d0d2428af2960b01e1d9b5"} Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.990252 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-p7xv4" event={"ID":"42a612ab-0883-4f35-b15e-f937f1f2de36","Type":"ContainerStarted","Data":"d3d901905e12d0c408a25d66c004157974c39268e85b0f27ab79f0a6f2fa9446"} Jan 05 20:29:41 crc kubenswrapper[4754]: I0105 20:29:41.990346 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d978555f9-p7xv4" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.019745 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.020029 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-78955f8dfd-95rvt" event={"ID":"ad58f356-3e52-422b-89ba-0db520cce910","Type":"ContainerStarted","Data":"36ac7aa52ad0383e4573f0ab9a81a6392ba5001700a411faa71dd7f5b085a2fe"} Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.021359 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.044063 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.053262 4754 scope.go:117] "RemoveContainer" containerID="e3c82caeb13d5b2201dbea5bb4bf5d07fbf194feb2ec22a360be7d681501d9b8" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.059843 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 20:29:42 crc kubenswrapper[4754]: E0105 20:29:42.060449 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9047860d-2da5-476b-9340-38244322fb95" containerName="glance-log" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.060466 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="9047860d-2da5-476b-9340-38244322fb95" containerName="glance-log" Jan 05 20:29:42 crc kubenswrapper[4754]: E0105 20:29:42.060478 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84bba751-7ae7-4f46-9673-fb30b2ba2496" containerName="proxy-httpd" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.060485 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="84bba751-7ae7-4f46-9673-fb30b2ba2496" containerName="proxy-httpd" Jan 05 20:29:42 crc kubenswrapper[4754]: E0105 20:29:42.060503 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84bba751-7ae7-4f46-9673-fb30b2ba2496" 
containerName="sg-core" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.060508 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="84bba751-7ae7-4f46-9673-fb30b2ba2496" containerName="sg-core" Jan 05 20:29:42 crc kubenswrapper[4754]: E0105 20:29:42.060532 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84bba751-7ae7-4f46-9673-fb30b2ba2496" containerName="ceilometer-notification-agent" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.060538 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="84bba751-7ae7-4f46-9673-fb30b2ba2496" containerName="ceilometer-notification-agent" Jan 05 20:29:42 crc kubenswrapper[4754]: E0105 20:29:42.060572 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9047860d-2da5-476b-9340-38244322fb95" containerName="glance-httpd" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.060578 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="9047860d-2da5-476b-9340-38244322fb95" containerName="glance-httpd" Jan 05 20:29:42 crc kubenswrapper[4754]: E0105 20:29:42.060592 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84bba751-7ae7-4f46-9673-fb30b2ba2496" containerName="ceilometer-central-agent" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.060598 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="84bba751-7ae7-4f46-9673-fb30b2ba2496" containerName="ceilometer-central-agent" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.060853 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="84bba751-7ae7-4f46-9673-fb30b2ba2496" containerName="ceilometer-notification-agent" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.060881 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="84bba751-7ae7-4f46-9673-fb30b2ba2496" containerName="ceilometer-central-agent" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.060892 4754 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="9047860d-2da5-476b-9340-38244322fb95" containerName="glance-log" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.060903 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="84bba751-7ae7-4f46-9673-fb30b2ba2496" containerName="sg-core" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.060916 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="84bba751-7ae7-4f46-9673-fb30b2ba2496" containerName="proxy-httpd" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.060924 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="9047860d-2da5-476b-9340-38244322fb95" containerName="glance-httpd" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.067619 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.073430 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.075902 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d978555f9-p7xv4" podStartSLOduration=14.073097668 podStartE2EDuration="14.073097668s" podCreationTimestamp="2026-01-05 20:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:29:42.031056456 +0000 UTC m=+1468.740240330" watchObservedRunningTime="2026-01-05 20:29:42.073097668 +0000 UTC m=+1468.782281542" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.078592 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.078973 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 
20:29:42.119065 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.129860 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff2c492b-de99-47c6-8cad-42a1427039f1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ff2c492b-de99-47c6-8cad-42a1427039f1\") " pod="openstack/glance-default-external-api-0" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.130002 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cgg2\" (UniqueName: \"kubernetes.io/projected/ff2c492b-de99-47c6-8cad-42a1427039f1-kube-api-access-7cgg2\") pod \"glance-default-external-api-0\" (UID: \"ff2c492b-de99-47c6-8cad-42a1427039f1\") " pod="openstack/glance-default-external-api-0" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.130112 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff2c492b-de99-47c6-8cad-42a1427039f1-logs\") pod \"glance-default-external-api-0\" (UID: \"ff2c492b-de99-47c6-8cad-42a1427039f1\") " pod="openstack/glance-default-external-api-0" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.130381 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff2c492b-de99-47c6-8cad-42a1427039f1-scripts\") pod \"glance-default-external-api-0\" (UID: \"ff2c492b-de99-47c6-8cad-42a1427039f1\") " pod="openstack/glance-default-external-api-0" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.130485 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff2c492b-de99-47c6-8cad-42a1427039f1-config-data\") pod 
\"glance-default-external-api-0\" (UID: \"ff2c492b-de99-47c6-8cad-42a1427039f1\") " pod="openstack/glance-default-external-api-0" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.130510 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff2c492b-de99-47c6-8cad-42a1427039f1-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ff2c492b-de99-47c6-8cad-42a1427039f1\") " pod="openstack/glance-default-external-api-0" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.130618 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5e52a243-20bd-4eec-aa78-a75239804279\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e52a243-20bd-4eec-aa78-a75239804279\") pod \"glance-default-external-api-0\" (UID: \"ff2c492b-de99-47c6-8cad-42a1427039f1\") " pod="openstack/glance-default-external-api-0" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.130848 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ff2c492b-de99-47c6-8cad-42a1427039f1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ff2c492b-de99-47c6-8cad-42a1427039f1\") " pod="openstack/glance-default-external-api-0" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.140749 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.156223 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.160534 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.167376 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.167561 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.211576 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.235749 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21c34859-60d5-47ab-93ea-b4837ddc28f1-log-httpd\") pod \"ceilometer-0\" (UID: \"21c34859-60d5-47ab-93ea-b4837ddc28f1\") " pod="openstack/ceilometer-0" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.235810 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ff2c492b-de99-47c6-8cad-42a1427039f1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ff2c492b-de99-47c6-8cad-42a1427039f1\") " pod="openstack/glance-default-external-api-0" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.235838 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8gtr\" (UniqueName: \"kubernetes.io/projected/21c34859-60d5-47ab-93ea-b4837ddc28f1-kube-api-access-s8gtr\") pod \"ceilometer-0\" (UID: \"21c34859-60d5-47ab-93ea-b4837ddc28f1\") " pod="openstack/ceilometer-0" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.235862 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21c34859-60d5-47ab-93ea-b4837ddc28f1-run-httpd\") pod \"ceilometer-0\" (UID: 
\"21c34859-60d5-47ab-93ea-b4837ddc28f1\") " pod="openstack/ceilometer-0" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.235879 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21c34859-60d5-47ab-93ea-b4837ddc28f1-scripts\") pod \"ceilometer-0\" (UID: \"21c34859-60d5-47ab-93ea-b4837ddc28f1\") " pod="openstack/ceilometer-0" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.235920 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff2c492b-de99-47c6-8cad-42a1427039f1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ff2c492b-de99-47c6-8cad-42a1427039f1\") " pod="openstack/glance-default-external-api-0" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.235946 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/21c34859-60d5-47ab-93ea-b4837ddc28f1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"21c34859-60d5-47ab-93ea-b4837ddc28f1\") " pod="openstack/ceilometer-0" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.235982 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cgg2\" (UniqueName: \"kubernetes.io/projected/ff2c492b-de99-47c6-8cad-42a1427039f1-kube-api-access-7cgg2\") pod \"glance-default-external-api-0\" (UID: \"ff2c492b-de99-47c6-8cad-42a1427039f1\") " pod="openstack/glance-default-external-api-0" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.236068 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff2c492b-de99-47c6-8cad-42a1427039f1-logs\") pod \"glance-default-external-api-0\" (UID: \"ff2c492b-de99-47c6-8cad-42a1427039f1\") " pod="openstack/glance-default-external-api-0" Jan 05 20:29:42 
crc kubenswrapper[4754]: I0105 20:29:42.236242 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ff2c492b-de99-47c6-8cad-42a1427039f1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ff2c492b-de99-47c6-8cad-42a1427039f1\") " pod="openstack/glance-default-external-api-0" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.236524 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21c34859-60d5-47ab-93ea-b4837ddc28f1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"21c34859-60d5-47ab-93ea-b4837ddc28f1\") " pod="openstack/ceilometer-0" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.236685 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff2c492b-de99-47c6-8cad-42a1427039f1-scripts\") pod \"glance-default-external-api-0\" (UID: \"ff2c492b-de99-47c6-8cad-42a1427039f1\") " pod="openstack/glance-default-external-api-0" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.236712 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff2c492b-de99-47c6-8cad-42a1427039f1-logs\") pod \"glance-default-external-api-0\" (UID: \"ff2c492b-de99-47c6-8cad-42a1427039f1\") " pod="openstack/glance-default-external-api-0" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.236786 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff2c492b-de99-47c6-8cad-42a1427039f1-config-data\") pod \"glance-default-external-api-0\" (UID: \"ff2c492b-de99-47c6-8cad-42a1427039f1\") " pod="openstack/glance-default-external-api-0" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.236815 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff2c492b-de99-47c6-8cad-42a1427039f1-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ff2c492b-de99-47c6-8cad-42a1427039f1\") " pod="openstack/glance-default-external-api-0" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.236893 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21c34859-60d5-47ab-93ea-b4837ddc28f1-config-data\") pod \"ceilometer-0\" (UID: \"21c34859-60d5-47ab-93ea-b4837ddc28f1\") " pod="openstack/ceilometer-0" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.236927 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5e52a243-20bd-4eec-aa78-a75239804279\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e52a243-20bd-4eec-aa78-a75239804279\") pod \"glance-default-external-api-0\" (UID: \"ff2c492b-de99-47c6-8cad-42a1427039f1\") " pod="openstack/glance-default-external-api-0" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.244434 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff2c492b-de99-47c6-8cad-42a1427039f1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ff2c492b-de99-47c6-8cad-42a1427039f1\") " pod="openstack/glance-default-external-api-0" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.246224 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff2c492b-de99-47c6-8cad-42a1427039f1-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ff2c492b-de99-47c6-8cad-42a1427039f1\") " pod="openstack/glance-default-external-api-0" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.254983 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ff2c492b-de99-47c6-8cad-42a1427039f1-scripts\") pod \"glance-default-external-api-0\" (UID: \"ff2c492b-de99-47c6-8cad-42a1427039f1\") " pod="openstack/glance-default-external-api-0" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.255760 4754 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.255789 4754 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5e52a243-20bd-4eec-aa78-a75239804279\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e52a243-20bd-4eec-aa78-a75239804279\") pod \"glance-default-external-api-0\" (UID: \"ff2c492b-de99-47c6-8cad-42a1427039f1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/64c68df80c09ca2a8e0af8b9c221d9793bb148adba68887bcbeec5db986c7de6/globalmount\"" pod="openstack/glance-default-external-api-0" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.257802 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cgg2\" (UniqueName: \"kubernetes.io/projected/ff2c492b-de99-47c6-8cad-42a1427039f1-kube-api-access-7cgg2\") pod \"glance-default-external-api-0\" (UID: \"ff2c492b-de99-47c6-8cad-42a1427039f1\") " pod="openstack/glance-default-external-api-0" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.264532 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff2c492b-de99-47c6-8cad-42a1427039f1-config-data\") pod \"glance-default-external-api-0\" (UID: \"ff2c492b-de99-47c6-8cad-42a1427039f1\") " pod="openstack/glance-default-external-api-0" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.338881 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/21c34859-60d5-47ab-93ea-b4837ddc28f1-config-data\") pod \"ceilometer-0\" (UID: \"21c34859-60d5-47ab-93ea-b4837ddc28f1\") " pod="openstack/ceilometer-0" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.339210 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21c34859-60d5-47ab-93ea-b4837ddc28f1-log-httpd\") pod \"ceilometer-0\" (UID: \"21c34859-60d5-47ab-93ea-b4837ddc28f1\") " pod="openstack/ceilometer-0" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.339249 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8gtr\" (UniqueName: \"kubernetes.io/projected/21c34859-60d5-47ab-93ea-b4837ddc28f1-kube-api-access-s8gtr\") pod \"ceilometer-0\" (UID: \"21c34859-60d5-47ab-93ea-b4837ddc28f1\") " pod="openstack/ceilometer-0" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.339270 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21c34859-60d5-47ab-93ea-b4837ddc28f1-run-httpd\") pod \"ceilometer-0\" (UID: \"21c34859-60d5-47ab-93ea-b4837ddc28f1\") " pod="openstack/ceilometer-0" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.339285 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21c34859-60d5-47ab-93ea-b4837ddc28f1-scripts\") pod \"ceilometer-0\" (UID: \"21c34859-60d5-47ab-93ea-b4837ddc28f1\") " pod="openstack/ceilometer-0" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.339344 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/21c34859-60d5-47ab-93ea-b4837ddc28f1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"21c34859-60d5-47ab-93ea-b4837ddc28f1\") " pod="openstack/ceilometer-0" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 
20:29:42.339396 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21c34859-60d5-47ab-93ea-b4837ddc28f1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"21c34859-60d5-47ab-93ea-b4837ddc28f1\") " pod="openstack/ceilometer-0" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.343576 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21c34859-60d5-47ab-93ea-b4837ddc28f1-config-data\") pod \"ceilometer-0\" (UID: \"21c34859-60d5-47ab-93ea-b4837ddc28f1\") " pod="openstack/ceilometer-0" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.343705 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21c34859-60d5-47ab-93ea-b4837ddc28f1-run-httpd\") pod \"ceilometer-0\" (UID: \"21c34859-60d5-47ab-93ea-b4837ddc28f1\") " pod="openstack/ceilometer-0" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.343902 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21c34859-60d5-47ab-93ea-b4837ddc28f1-log-httpd\") pod \"ceilometer-0\" (UID: \"21c34859-60d5-47ab-93ea-b4837ddc28f1\") " pod="openstack/ceilometer-0" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.357226 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21c34859-60d5-47ab-93ea-b4837ddc28f1-scripts\") pod \"ceilometer-0\" (UID: \"21c34859-60d5-47ab-93ea-b4837ddc28f1\") " pod="openstack/ceilometer-0" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.379874 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/21c34859-60d5-47ab-93ea-b4837ddc28f1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"21c34859-60d5-47ab-93ea-b4837ddc28f1\") " 
pod="openstack/ceilometer-0" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.380531 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21c34859-60d5-47ab-93ea-b4837ddc28f1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"21c34859-60d5-47ab-93ea-b4837ddc28f1\") " pod="openstack/ceilometer-0" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.401216 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8gtr\" (UniqueName: \"kubernetes.io/projected/21c34859-60d5-47ab-93ea-b4837ddc28f1-kube-api-access-s8gtr\") pod \"ceilometer-0\" (UID: \"21c34859-60d5-47ab-93ea-b4837ddc28f1\") " pod="openstack/ceilometer-0" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.443069 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5e52a243-20bd-4eec-aa78-a75239804279\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e52a243-20bd-4eec-aa78-a75239804279\") pod \"glance-default-external-api-0\" (UID: \"ff2c492b-de99-47c6-8cad-42a1427039f1\") " pod="openstack/glance-default-external-api-0" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.492452 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.504662 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-fe16-account-create-update-k6rtd"] Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.554073 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-675955f8c4-92lhz"] Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.576242 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-kxzd4"] Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.587540 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6c8f698c95-f22ns"] Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.601405 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-679bf7799d-5zzsg"] Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.615411 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-h42n5"] Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.628218 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-9214-account-create-update-qwl6t"] Jan 05 20:29:42 crc kubenswrapper[4754]: I0105 20:29:42.721723 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 05 20:29:43 crc kubenswrapper[4754]: I0105 20:29:43.032760 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c9ef-account-create-update-zq77j" event={"ID":"773cb53b-3bb1-4b11-bb3d-c11c352fcdfb","Type":"ContainerStarted","Data":"6bd610def9df2801ebca785fff696a92be9ede2f222011869f053647e34a32e8"} Jan 05 20:29:43 crc kubenswrapper[4754]: I0105 20:29:43.037127 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-78955f8dfd-95rvt" event={"ID":"ad58f356-3e52-422b-89ba-0db520cce910","Type":"ContainerStarted","Data":"21001b28c026f444607e4f59e0e7c08a834424ca2203745ef337c1cf4bf48d56"} Jan 05 20:29:43 crc kubenswrapper[4754]: I0105 20:29:43.037243 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-78955f8dfd-95rvt" Jan 05 20:29:43 crc kubenswrapper[4754]: I0105 20:29:43.049594 4754 generic.go:334] "Generic (PLEG): container finished" podID="53d3e036-3e71-4b72-ae09-2d84f5b12b6d" containerID="37d7cf033c363ff59fbfcfe09bf44d181103a14cee337cb684e0d9dfd0fd7253" exitCode=0 Jan 05 20:29:43 crc kubenswrapper[4754]: I0105 20:29:43.049663 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-gfzxd" event={"ID":"53d3e036-3e71-4b72-ae09-2d84f5b12b6d","Type":"ContainerDied","Data":"37d7cf033c363ff59fbfcfe09bf44d181103a14cee337cb684e0d9dfd0fd7253"} Jan 05 20:29:43 crc kubenswrapper[4754]: I0105 20:29:43.054036 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-c9ef-account-create-update-zq77j" podStartSLOduration=7.054021229 podStartE2EDuration="7.054021229s" podCreationTimestamp="2026-01-05 20:29:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:29:43.052416997 +0000 UTC m=+1469.761600871" watchObservedRunningTime="2026-01-05 
20:29:43.054021229 +0000 UTC m=+1469.763205103" Jan 05 20:29:43 crc kubenswrapper[4754]: I0105 20:29:43.089164 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-78955f8dfd-95rvt" podStartSLOduration=7.089144979 podStartE2EDuration="7.089144979s" podCreationTimestamp="2026-01-05 20:29:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:29:43.083557842 +0000 UTC m=+1469.792741726" watchObservedRunningTime="2026-01-05 20:29:43.089144979 +0000 UTC m=+1469.798328853" Jan 05 20:29:43 crc kubenswrapper[4754]: I0105 20:29:43.359209 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 20:29:43 crc kubenswrapper[4754]: I0105 20:29:43.359720 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d877e0f6-019c-4b97-8a36-9a210b9e4233" containerName="glance-log" containerID="cri-o://146d9ffc3b8e9ece5f9e2721869ea9a8b5e8fa909e02c39efdf21d47085a2c30" gracePeriod=30 Jan 05 20:29:43 crc kubenswrapper[4754]: I0105 20:29:43.360149 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d877e0f6-019c-4b97-8a36-9a210b9e4233" containerName="glance-httpd" containerID="cri-o://38fc911195a2fddb8faae4d68aa93e6c536d2633ca852d8558b90ab47cda31ac" gracePeriod=30 Jan 05 20:29:43 crc kubenswrapper[4754]: I0105 20:29:43.604485 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84bba751-7ae7-4f46-9673-fb30b2ba2496" path="/var/lib/kubelet/pods/84bba751-7ae7-4f46-9673-fb30b2ba2496/volumes" Jan 05 20:29:43 crc kubenswrapper[4754]: I0105 20:29:43.605283 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9047860d-2da5-476b-9340-38244322fb95" 
path="/var/lib/kubelet/pods/9047860d-2da5-476b-9340-38244322fb95/volumes" Jan 05 20:29:44 crc kubenswrapper[4754]: I0105 20:29:44.084943 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-675955f8c4-92lhz" event={"ID":"f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b","Type":"ContainerStarted","Data":"cd566cba60b758090e7e96a99e2a28334bfd205da7a0c0a740a8606e0d36f58b"} Jan 05 20:29:44 crc kubenswrapper[4754]: I0105 20:29:44.099616 4754 generic.go:334] "Generic (PLEG): container finished" podID="d877e0f6-019c-4b97-8a36-9a210b9e4233" containerID="146d9ffc3b8e9ece5f9e2721869ea9a8b5e8fa909e02c39efdf21d47085a2c30" exitCode=143 Jan 05 20:29:44 crc kubenswrapper[4754]: I0105 20:29:44.099688 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d877e0f6-019c-4b97-8a36-9a210b9e4233","Type":"ContainerDied","Data":"146d9ffc3b8e9ece5f9e2721869ea9a8b5e8fa909e02c39efdf21d47085a2c30"} Jan 05 20:29:44 crc kubenswrapper[4754]: I0105 20:29:44.101500 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-679bf7799d-5zzsg" event={"ID":"4a1b633c-0572-43e3-ab65-9ecf41447261","Type":"ContainerStarted","Data":"5d7561adefc09ad924f390a10ddf0c7e70cbd032cd8378b0257d651c9e687bc2"} Jan 05 20:29:44 crc kubenswrapper[4754]: I0105 20:29:44.108755 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-fe16-account-create-update-k6rtd" event={"ID":"9b26ee17-b21e-4bb3-8570-1de5435a6ea5","Type":"ContainerStarted","Data":"980b6d3c9aa70ca5f0d9dba5bac4c83b4432f391fb61104c63e528a463c1add5"} Jan 05 20:29:44 crc kubenswrapper[4754]: I0105 20:29:44.116945 4754 generic.go:334] "Generic (PLEG): container finished" podID="773cb53b-3bb1-4b11-bb3d-c11c352fcdfb" containerID="6bd610def9df2801ebca785fff696a92be9ede2f222011869f053647e34a32e8" exitCode=0 Jan 05 20:29:44 crc kubenswrapper[4754]: I0105 20:29:44.117493 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-c9ef-account-create-update-zq77j" event={"ID":"773cb53b-3bb1-4b11-bb3d-c11c352fcdfb","Type":"ContainerDied","Data":"6bd610def9df2801ebca785fff696a92be9ede2f222011869f053647e34a32e8"} Jan 05 20:29:44 crc kubenswrapper[4754]: I0105 20:29:44.814077 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:29:44 crc kubenswrapper[4754]: I0105 20:29:44.898030 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-gfzxd" Jan 05 20:29:45 crc kubenswrapper[4754]: I0105 20:29:45.041525 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skdtq\" (UniqueName: \"kubernetes.io/projected/53d3e036-3e71-4b72-ae09-2d84f5b12b6d-kube-api-access-skdtq\") pod \"53d3e036-3e71-4b72-ae09-2d84f5b12b6d\" (UID: \"53d3e036-3e71-4b72-ae09-2d84f5b12b6d\") " Jan 05 20:29:45 crc kubenswrapper[4754]: I0105 20:29:45.041886 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53d3e036-3e71-4b72-ae09-2d84f5b12b6d-operator-scripts\") pod \"53d3e036-3e71-4b72-ae09-2d84f5b12b6d\" (UID: \"53d3e036-3e71-4b72-ae09-2d84f5b12b6d\") " Jan 05 20:29:45 crc kubenswrapper[4754]: I0105 20:29:45.043209 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53d3e036-3e71-4b72-ae09-2d84f5b12b6d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "53d3e036-3e71-4b72-ae09-2d84f5b12b6d" (UID: "53d3e036-3e71-4b72-ae09-2d84f5b12b6d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:29:45 crc kubenswrapper[4754]: I0105 20:29:45.071057 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53d3e036-3e71-4b72-ae09-2d84f5b12b6d-kube-api-access-skdtq" (OuterVolumeSpecName: "kube-api-access-skdtq") pod "53d3e036-3e71-4b72-ae09-2d84f5b12b6d" (UID: "53d3e036-3e71-4b72-ae09-2d84f5b12b6d"). InnerVolumeSpecName "kube-api-access-skdtq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:29:45 crc kubenswrapper[4754]: I0105 20:29:45.144998 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skdtq\" (UniqueName: \"kubernetes.io/projected/53d3e036-3e71-4b72-ae09-2d84f5b12b6d-kube-api-access-skdtq\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:45 crc kubenswrapper[4754]: I0105 20:29:45.145048 4754 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53d3e036-3e71-4b72-ae09-2d84f5b12b6d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:45 crc kubenswrapper[4754]: I0105 20:29:45.157191 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-gfzxd" event={"ID":"53d3e036-3e71-4b72-ae09-2d84f5b12b6d","Type":"ContainerDied","Data":"431c87c790bd70542340896b9c26b133f84ef57157a34419e9b5209c3574c89f"} Jan 05 20:29:45 crc kubenswrapper[4754]: I0105 20:29:45.157240 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="431c87c790bd70542340896b9c26b133f84ef57157a34419e9b5209c3574c89f" Jan 05 20:29:45 crc kubenswrapper[4754]: I0105 20:29:45.157321 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-gfzxd" Jan 05 20:29:45 crc kubenswrapper[4754]: I0105 20:29:45.165416 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-kxzd4" event={"ID":"788477a1-9462-462e-ac96-5c6c5659437f","Type":"ContainerStarted","Data":"6b5af14fad83119e9c0018773181393821542facc28b738a4cf947389ecbb537"} Jan 05 20:29:45 crc kubenswrapper[4754]: I0105 20:29:45.170193 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6c8f698c95-f22ns" event={"ID":"2febfb57-3d29-4415-bcae-1295fbf07b70","Type":"ContainerStarted","Data":"46db3f5816ddd8d47f16c656a6296c5a30e9683f46f8e14f690e56148d7eed95"} Jan 05 20:29:45 crc kubenswrapper[4754]: I0105 20:29:45.171944 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-h42n5" event={"ID":"8b1a8ee9-a244-4b25-acf5-4ab0de607c0d","Type":"ContainerStarted","Data":"237b9316d8ded96c332e82ca2bfb4b4d9700ef9afa4d244d067646421744f163"} Jan 05 20:29:45 crc kubenswrapper[4754]: I0105 20:29:45.174019 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9214-account-create-update-qwl6t" event={"ID":"8c40bae7-8986-432e-8050-cec73db2bfdd","Type":"ContainerStarted","Data":"ac4f460302920a2a9bfe93eb289d06afbaa85d0917354e6c18a50ecf68b06725"} Jan 05 20:29:45 crc kubenswrapper[4754]: I0105 20:29:45.705394 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 20:29:45 crc kubenswrapper[4754]: I0105 20:29:45.827654 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:29:46 crc kubenswrapper[4754]: I0105 20:29:46.327517 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-fe16-account-create-update-k6rtd" event={"ID":"9b26ee17-b21e-4bb3-8570-1de5435a6ea5","Type":"ContainerStarted","Data":"fcfebfe7efc7db30ceebb0593d1a194b213ec8ffefb2bb05999f6d5c5a9d5f60"} Jan 05 20:29:46 crc 
kubenswrapper[4754]: I0105 20:29:46.339743 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21c34859-60d5-47ab-93ea-b4837ddc28f1","Type":"ContainerStarted","Data":"7f98a8df6d6dfe5241a822339b883bba91ebe8883d0b0a2102f24bb662753351"} Jan 05 20:29:46 crc kubenswrapper[4754]: I0105 20:29:46.347285 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-fe16-account-create-update-k6rtd" podStartSLOduration=10.347261174 podStartE2EDuration="10.347261174s" podCreationTimestamp="2026-01-05 20:29:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:29:46.345910209 +0000 UTC m=+1473.055094083" watchObservedRunningTime="2026-01-05 20:29:46.347261174 +0000 UTC m=+1473.056445048" Jan 05 20:29:46 crc kubenswrapper[4754]: I0105 20:29:46.362135 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c9ef-account-create-update-zq77j" event={"ID":"773cb53b-3bb1-4b11-bb3d-c11c352fcdfb","Type":"ContainerDied","Data":"463297d035ec82856cd8cba7fd291008b3b79fa039d0d2428af2960b01e1d9b5"} Jan 05 20:29:46 crc kubenswrapper[4754]: I0105 20:29:46.362180 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="463297d035ec82856cd8cba7fd291008b3b79fa039d0d2428af2960b01e1d9b5" Jan 05 20:29:46 crc kubenswrapper[4754]: I0105 20:29:46.363475 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ff2c492b-de99-47c6-8cad-42a1427039f1","Type":"ContainerStarted","Data":"9ac53b5e387451c8f7890932adca3975deee343dc9de35b1f8770b39ba02df17"} Jan 05 20:29:46 crc kubenswrapper[4754]: I0105 20:29:46.592580 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-c9ef-account-create-update-zq77j" Jan 05 20:29:46 crc kubenswrapper[4754]: I0105 20:29:46.697040 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2t8l\" (UniqueName: \"kubernetes.io/projected/773cb53b-3bb1-4b11-bb3d-c11c352fcdfb-kube-api-access-h2t8l\") pod \"773cb53b-3bb1-4b11-bb3d-c11c352fcdfb\" (UID: \"773cb53b-3bb1-4b11-bb3d-c11c352fcdfb\") " Jan 05 20:29:46 crc kubenswrapper[4754]: I0105 20:29:46.697357 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/773cb53b-3bb1-4b11-bb3d-c11c352fcdfb-operator-scripts\") pod \"773cb53b-3bb1-4b11-bb3d-c11c352fcdfb\" (UID: \"773cb53b-3bb1-4b11-bb3d-c11c352fcdfb\") " Jan 05 20:29:46 crc kubenswrapper[4754]: I0105 20:29:46.711186 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/773cb53b-3bb1-4b11-bb3d-c11c352fcdfb-kube-api-access-h2t8l" (OuterVolumeSpecName: "kube-api-access-h2t8l") pod "773cb53b-3bb1-4b11-bb3d-c11c352fcdfb" (UID: "773cb53b-3bb1-4b11-bb3d-c11c352fcdfb"). InnerVolumeSpecName "kube-api-access-h2t8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:29:46 crc kubenswrapper[4754]: I0105 20:29:46.700513 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/773cb53b-3bb1-4b11-bb3d-c11c352fcdfb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "773cb53b-3bb1-4b11-bb3d-c11c352fcdfb" (UID: "773cb53b-3bb1-4b11-bb3d-c11c352fcdfb"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:29:46 crc kubenswrapper[4754]: I0105 20:29:46.802441 4754 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/773cb53b-3bb1-4b11-bb3d-c11c352fcdfb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:46 crc kubenswrapper[4754]: I0105 20:29:46.802476 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2t8l\" (UniqueName: \"kubernetes.io/projected/773cb53b-3bb1-4b11-bb3d-c11c352fcdfb-kube-api-access-h2t8l\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:47 crc kubenswrapper[4754]: I0105 20:29:47.385986 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-h42n5" event={"ID":"8b1a8ee9-a244-4b25-acf5-4ab0de607c0d","Type":"ContainerStarted","Data":"2f0c7cf0b8e9866f348e874feb37c9cd18d26a7e639f39ac3132ab6d47030e08"} Jan 05 20:29:47 crc kubenswrapper[4754]: I0105 20:29:47.392232 4754 generic.go:334] "Generic (PLEG): container finished" podID="9b26ee17-b21e-4bb3-8570-1de5435a6ea5" containerID="fcfebfe7efc7db30ceebb0593d1a194b213ec8ffefb2bb05999f6d5c5a9d5f60" exitCode=0 Jan 05 20:29:47 crc kubenswrapper[4754]: I0105 20:29:47.392336 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-fe16-account-create-update-k6rtd" event={"ID":"9b26ee17-b21e-4bb3-8570-1de5435a6ea5","Type":"ContainerDied","Data":"fcfebfe7efc7db30ceebb0593d1a194b213ec8ffefb2bb05999f6d5c5a9d5f60"} Jan 05 20:29:47 crc kubenswrapper[4754]: I0105 20:29:47.400644 4754 generic.go:334] "Generic (PLEG): container finished" podID="d877e0f6-019c-4b97-8a36-9a210b9e4233" containerID="38fc911195a2fddb8faae4d68aa93e6c536d2633ca852d8558b90ab47cda31ac" exitCode=0 Jan 05 20:29:47 crc kubenswrapper[4754]: I0105 20:29:47.400748 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"d877e0f6-019c-4b97-8a36-9a210b9e4233","Type":"ContainerDied","Data":"38fc911195a2fddb8faae4d68aa93e6c536d2633ca852d8558b90ab47cda31ac"} Jan 05 20:29:47 crc kubenswrapper[4754]: I0105 20:29:47.414518 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ff2c492b-de99-47c6-8cad-42a1427039f1","Type":"ContainerStarted","Data":"66680c388d7bb2991c35062a1831492429933911e55cc4616b5b3547230a307e"} Jan 05 20:29:47 crc kubenswrapper[4754]: I0105 20:29:47.427077 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9214-account-create-update-qwl6t" event={"ID":"8c40bae7-8986-432e-8050-cec73db2bfdd","Type":"ContainerStarted","Data":"7fddd300ab96ea5856577fb2f60617fa4f83f23e53ce97f9b86e8e18703eb154"} Jan 05 20:29:47 crc kubenswrapper[4754]: I0105 20:29:47.429715 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-h42n5" podStartSLOduration=11.429697795 podStartE2EDuration="11.429697795s" podCreationTimestamp="2026-01-05 20:29:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:29:47.422990179 +0000 UTC m=+1474.132174053" watchObservedRunningTime="2026-01-05 20:29:47.429697795 +0000 UTC m=+1474.138881669" Jan 05 20:29:47 crc kubenswrapper[4754]: I0105 20:29:47.434396 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5bc975666b-pjcfg" event={"ID":"fb5f31f4-24b0-4840-8a8e-04ee35cea4ed","Type":"ContainerStarted","Data":"8f4595b6ed0a514c8083fb15d9319fcafd0cd0d4a6d5861933bec617da875f2c"} Jan 05 20:29:47 crc kubenswrapper[4754]: I0105 20:29:47.434549 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-5bc975666b-pjcfg" podUID="fb5f31f4-24b0-4840-8a8e-04ee35cea4ed" containerName="heat-api" 
containerID="cri-o://8f4595b6ed0a514c8083fb15d9319fcafd0cd0d4a6d5861933bec617da875f2c" gracePeriod=60 Jan 05 20:29:47 crc kubenswrapper[4754]: I0105 20:29:47.434645 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5bc975666b-pjcfg" Jan 05 20:29:47 crc kubenswrapper[4754]: I0105 20:29:47.453246 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-679bf7799d-5zzsg" event={"ID":"4a1b633c-0572-43e3-ab65-9ecf41447261","Type":"ContainerStarted","Data":"e3613340ba344b24abe633054c3a99b123dea95ba3e13ab40e57eb2c003da79f"} Jan 05 20:29:47 crc kubenswrapper[4754]: I0105 20:29:47.455339 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-679bf7799d-5zzsg" Jan 05 20:29:47 crc kubenswrapper[4754]: I0105 20:29:47.482533 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5c4fdffcf-f4rzg" event={"ID":"71c4958d-a956-4009-8f00-e4f16cab1b6b","Type":"ContainerStarted","Data":"48a63afe10e5a8359b2e8ba96a47e9a9f14391604f480250185359a67c52fe68"} Jan 05 20:29:47 crc kubenswrapper[4754]: I0105 20:29:47.483400 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5c4fdffcf-f4rzg" Jan 05 20:29:47 crc kubenswrapper[4754]: I0105 20:29:47.493106 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5f447fcb5f-cgsxz" event={"ID":"215c4148-c55a-49d8-8b2e-301cc8912519","Type":"ContainerStarted","Data":"9ecba8b356c6693c643f4aac62eeb230dfeb15533ad555e713799e958b042f53"} Jan 05 20:29:47 crc kubenswrapper[4754]: I0105 20:29:47.493376 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-5f447fcb5f-cgsxz" podUID="215c4148-c55a-49d8-8b2e-301cc8912519" containerName="heat-cfnapi" containerID="cri-o://9ecba8b356c6693c643f4aac62eeb230dfeb15533ad555e713799e958b042f53" gracePeriod=60 Jan 05 20:29:47 crc kubenswrapper[4754]: I0105 20:29:47.493500 4754 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-5f447fcb5f-cgsxz" Jan 05 20:29:47 crc kubenswrapper[4754]: I0105 20:29:47.504918 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-675955f8c4-92lhz" event={"ID":"f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b","Type":"ContainerStarted","Data":"c921cb29d6ebb86cc0838144f569b5e2360c742f4671c3f11829371dd785c682"} Jan 05 20:29:47 crc kubenswrapper[4754]: I0105 20:29:47.506034 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-675955f8c4-92lhz" Jan 05 20:29:47 crc kubenswrapper[4754]: I0105 20:29:47.517383 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-kxzd4" event={"ID":"788477a1-9462-462e-ac96-5c6c5659437f","Type":"ContainerStarted","Data":"bc6c354af6970b2e09f9c0388dad3fd50f65999c5cdd9e4ef42ede3e87b8fdb2"} Jan 05 20:29:47 crc kubenswrapper[4754]: I0105 20:29:47.520737 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-5bc975666b-pjcfg" podStartSLOduration=12.828017996 podStartE2EDuration="19.520720239s" podCreationTimestamp="2026-01-05 20:29:28 +0000 UTC" firstStartedPulling="2026-01-05 20:29:39.489798716 +0000 UTC m=+1466.198982600" lastFinishedPulling="2026-01-05 20:29:46.182500959 +0000 UTC m=+1472.891684843" observedRunningTime="2026-01-05 20:29:47.47419004 +0000 UTC m=+1474.183373914" watchObservedRunningTime="2026-01-05 20:29:47.520720239 +0000 UTC m=+1474.229904113" Jan 05 20:29:47 crc kubenswrapper[4754]: I0105 20:29:47.528542 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-c9ef-account-create-update-zq77j" Jan 05 20:29:47 crc kubenswrapper[4754]: I0105 20:29:47.530153 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6c8f698c95-f22ns" event={"ID":"2febfb57-3d29-4415-bcae-1295fbf07b70","Type":"ContainerStarted","Data":"e32141db2e6f2b58809030fc2a6c6df284d83e88fe4957205ddaa2f2989569f6"} Jan 05 20:29:47 crc kubenswrapper[4754]: I0105 20:29:47.530869 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6c8f698c95-f22ns" Jan 05 20:29:47 crc kubenswrapper[4754]: I0105 20:29:47.628024 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-9214-account-create-update-qwl6t" podStartSLOduration=11.627978038 podStartE2EDuration="11.627978038s" podCreationTimestamp="2026-01-05 20:29:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:29:47.500765776 +0000 UTC m=+1474.209949650" watchObservedRunningTime="2026-01-05 20:29:47.627978038 +0000 UTC m=+1474.337161912" Jan 05 20:29:47 crc kubenswrapper[4754]: I0105 20:29:47.682617 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-5f447fcb5f-cgsxz" podStartSLOduration=14.220705812 podStartE2EDuration="19.682568118s" podCreationTimestamp="2026-01-05 20:29:28 +0000 UTC" firstStartedPulling="2026-01-05 20:29:39.482592347 +0000 UTC m=+1466.191776221" lastFinishedPulling="2026-01-05 20:29:44.944454653 +0000 UTC m=+1471.653638527" observedRunningTime="2026-01-05 20:29:47.527991689 +0000 UTC m=+1474.237175563" watchObservedRunningTime="2026-01-05 20:29:47.682568118 +0000 UTC m=+1474.391751992" Jan 05 20:29:47 crc kubenswrapper[4754]: I0105 20:29:47.714550 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-679bf7799d-5zzsg" podStartSLOduration=9.562039728 
podStartE2EDuration="11.714514785s" podCreationTimestamp="2026-01-05 20:29:36 +0000 UTC" firstStartedPulling="2026-01-05 20:29:44.03835088 +0000 UTC m=+1470.747534754" lastFinishedPulling="2026-01-05 20:29:46.190825937 +0000 UTC m=+1472.900009811" observedRunningTime="2026-01-05 20:29:47.564932947 +0000 UTC m=+1474.274116821" watchObservedRunningTime="2026-01-05 20:29:47.714514785 +0000 UTC m=+1474.423698659" Jan 05 20:29:47 crc kubenswrapper[4754]: I0105 20:29:47.746009 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-5c4fdffcf-f4rzg" podStartSLOduration=7.705297117 podStartE2EDuration="11.745992429s" podCreationTimestamp="2026-01-05 20:29:36 +0000 UTC" firstStartedPulling="2026-01-05 20:29:41.700649172 +0000 UTC m=+1468.409833046" lastFinishedPulling="2026-01-05 20:29:45.741344484 +0000 UTC m=+1472.450528358" observedRunningTime="2026-01-05 20:29:47.594133752 +0000 UTC m=+1474.303317626" watchObservedRunningTime="2026-01-05 20:29:47.745992429 +0000 UTC m=+1474.455176303" Jan 05 20:29:47 crc kubenswrapper[4754]: I0105 20:29:47.765552 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-kxzd4" podStartSLOduration=11.765534341 podStartE2EDuration="11.765534341s" podCreationTimestamp="2026-01-05 20:29:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:29:47.62691065 +0000 UTC m=+1474.336094514" watchObservedRunningTime="2026-01-05 20:29:47.765534341 +0000 UTC m=+1474.474718215" Jan 05 20:29:47 crc kubenswrapper[4754]: I0105 20:29:47.791828 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-675955f8c4-92lhz" podStartSLOduration=6.633737416 podStartE2EDuration="8.791804269s" podCreationTimestamp="2026-01-05 20:29:39 +0000 UTC" firstStartedPulling="2026-01-05 20:29:44.038382131 +0000 UTC m=+1470.747566005" 
lastFinishedPulling="2026-01-05 20:29:46.196448984 +0000 UTC m=+1472.905632858" observedRunningTime="2026-01-05 20:29:47.659667958 +0000 UTC m=+1474.368851832" watchObservedRunningTime="2026-01-05 20:29:47.791804269 +0000 UTC m=+1474.500988143" Jan 05 20:29:47 crc kubenswrapper[4754]: I0105 20:29:47.823859 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6c8f698c95-f22ns" podStartSLOduration=7.759940682 podStartE2EDuration="9.823822878s" podCreationTimestamp="2026-01-05 20:29:38 +0000 UTC" firstStartedPulling="2026-01-05 20:29:44.120658076 +0000 UTC m=+1470.829841950" lastFinishedPulling="2026-01-05 20:29:46.184540272 +0000 UTC m=+1472.893724146" observedRunningTime="2026-01-05 20:29:47.680816852 +0000 UTC m=+1474.390000746" watchObservedRunningTime="2026-01-05 20:29:47.823822878 +0000 UTC m=+1474.533006752" Jan 05 20:29:47 crc kubenswrapper[4754]: E0105 20:29:47.994732 4754 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b1a8ee9_a244_4b25_acf5_4ab0de607c0d.slice/crio-2f0c7cf0b8e9866f348e874feb37c9cd18d26a7e639f39ac3132ab6d47030e08.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b26ee17_b21e_4bb3_8570_1de5435a6ea5.slice/crio-conmon-fcfebfe7efc7db30ceebb0593d1a194b213ec8ffefb2bb05999f6d5c5a9d5f60.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b1a8ee9_a244_4b25_acf5_4ab0de607c0d.slice/crio-conmon-2f0c7cf0b8e9866f348e874feb37c9cd18d26a7e639f39ac3132ab6d47030e08.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c40bae7_8986_432e_8050_cec73db2bfdd.slice/crio-7fddd300ab96ea5856577fb2f60617fa4f83f23e53ce97f9b86e8e18703eb154.scope\": RecentStats: unable to find 
data in memory cache]" Jan 05 20:29:48 crc kubenswrapper[4754]: I0105 20:29:48.077189 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 05 20:29:48 crc kubenswrapper[4754]: I0105 20:29:48.109695 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 20:29:48 crc kubenswrapper[4754]: I0105 20:29:48.109749 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 20:29:48 crc kubenswrapper[4754]: I0105 20:29:48.251453 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d877e0f6-019c-4b97-8a36-9a210b9e4233-scripts\") pod \"d877e0f6-019c-4b97-8a36-9a210b9e4233\" (UID: \"d877e0f6-019c-4b97-8a36-9a210b9e4233\") " Jan 05 20:29:48 crc kubenswrapper[4754]: I0105 20:29:48.251647 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d877e0f6-019c-4b97-8a36-9a210b9e4233-internal-tls-certs\") pod \"d877e0f6-019c-4b97-8a36-9a210b9e4233\" (UID: \"d877e0f6-019c-4b97-8a36-9a210b9e4233\") " Jan 05 20:29:48 crc kubenswrapper[4754]: I0105 20:29:48.251884 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d877e0f6-019c-4b97-8a36-9a210b9e4233-httpd-run\") pod \"d877e0f6-019c-4b97-8a36-9a210b9e4233\" (UID: \"d877e0f6-019c-4b97-8a36-9a210b9e4233\") 
" Jan 05 20:29:48 crc kubenswrapper[4754]: I0105 20:29:48.259751 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d877e0f6-019c-4b97-8a36-9a210b9e4233-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d877e0f6-019c-4b97-8a36-9a210b9e4233" (UID: "d877e0f6-019c-4b97-8a36-9a210b9e4233"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:29:48 crc kubenswrapper[4754]: I0105 20:29:48.292187 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d877e0f6-019c-4b97-8a36-9a210b9e4233-scripts" (OuterVolumeSpecName: "scripts") pod "d877e0f6-019c-4b97-8a36-9a210b9e4233" (UID: "d877e0f6-019c-4b97-8a36-9a210b9e4233"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:29:48 crc kubenswrapper[4754]: I0105 20:29:48.304859 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-084c4888-b72a-436d-98f1-8bfa7cd3c357\") pod \"d877e0f6-019c-4b97-8a36-9a210b9e4233\" (UID: \"d877e0f6-019c-4b97-8a36-9a210b9e4233\") " Jan 05 20:29:48 crc kubenswrapper[4754]: I0105 20:29:48.305135 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d877e0f6-019c-4b97-8a36-9a210b9e4233-config-data\") pod \"d877e0f6-019c-4b97-8a36-9a210b9e4233\" (UID: \"d877e0f6-019c-4b97-8a36-9a210b9e4233\") " Jan 05 20:29:48 crc kubenswrapper[4754]: I0105 20:29:48.306221 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d877e0f6-019c-4b97-8a36-9a210b9e4233-combined-ca-bundle\") pod \"d877e0f6-019c-4b97-8a36-9a210b9e4233\" (UID: \"d877e0f6-019c-4b97-8a36-9a210b9e4233\") " Jan 05 20:29:48 crc kubenswrapper[4754]: I0105 20:29:48.306301 4754 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d877e0f6-019c-4b97-8a36-9a210b9e4233-logs\") pod \"d877e0f6-019c-4b97-8a36-9a210b9e4233\" (UID: \"d877e0f6-019c-4b97-8a36-9a210b9e4233\") " Jan 05 20:29:48 crc kubenswrapper[4754]: I0105 20:29:48.306351 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvcn5\" (UniqueName: \"kubernetes.io/projected/d877e0f6-019c-4b97-8a36-9a210b9e4233-kube-api-access-rvcn5\") pod \"d877e0f6-019c-4b97-8a36-9a210b9e4233\" (UID: \"d877e0f6-019c-4b97-8a36-9a210b9e4233\") " Jan 05 20:29:48 crc kubenswrapper[4754]: I0105 20:29:48.308723 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d877e0f6-019c-4b97-8a36-9a210b9e4233-logs" (OuterVolumeSpecName: "logs") pod "d877e0f6-019c-4b97-8a36-9a210b9e4233" (UID: "d877e0f6-019c-4b97-8a36-9a210b9e4233"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:29:48 crc kubenswrapper[4754]: I0105 20:29:48.310717 4754 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d877e0f6-019c-4b97-8a36-9a210b9e4233-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:48 crc kubenswrapper[4754]: I0105 20:29:48.310782 4754 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d877e0f6-019c-4b97-8a36-9a210b9e4233-logs\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:48 crc kubenswrapper[4754]: I0105 20:29:48.310799 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d877e0f6-019c-4b97-8a36-9a210b9e4233-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:48 crc kubenswrapper[4754]: I0105 20:29:48.331937 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/d877e0f6-019c-4b97-8a36-9a210b9e4233-kube-api-access-rvcn5" (OuterVolumeSpecName: "kube-api-access-rvcn5") pod "d877e0f6-019c-4b97-8a36-9a210b9e4233" (UID: "d877e0f6-019c-4b97-8a36-9a210b9e4233"). InnerVolumeSpecName "kube-api-access-rvcn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:29:48 crc kubenswrapper[4754]: I0105 20:29:48.357417 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d877e0f6-019c-4b97-8a36-9a210b9e4233-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d877e0f6-019c-4b97-8a36-9a210b9e4233" (UID: "d877e0f6-019c-4b97-8a36-9a210b9e4233"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:29:48 crc kubenswrapper[4754]: I0105 20:29:48.412427 4754 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d877e0f6-019c-4b97-8a36-9a210b9e4233-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:48 crc kubenswrapper[4754]: I0105 20:29:48.412470 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvcn5\" (UniqueName: \"kubernetes.io/projected/d877e0f6-019c-4b97-8a36-9a210b9e4233-kube-api-access-rvcn5\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:48 crc kubenswrapper[4754]: I0105 20:29:48.434279 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-084c4888-b72a-436d-98f1-8bfa7cd3c357" (OuterVolumeSpecName: "glance") pod "d877e0f6-019c-4b97-8a36-9a210b9e4233" (UID: "d877e0f6-019c-4b97-8a36-9a210b9e4233"). InnerVolumeSpecName "pvc-084c4888-b72a-436d-98f1-8bfa7cd3c357". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 05 20:29:48 crc kubenswrapper[4754]: I0105 20:29:48.444439 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d877e0f6-019c-4b97-8a36-9a210b9e4233-config-data" (OuterVolumeSpecName: "config-data") pod "d877e0f6-019c-4b97-8a36-9a210b9e4233" (UID: "d877e0f6-019c-4b97-8a36-9a210b9e4233"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:29:48 crc kubenswrapper[4754]: I0105 20:29:48.467882 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d877e0f6-019c-4b97-8a36-9a210b9e4233-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d877e0f6-019c-4b97-8a36-9a210b9e4233" (UID: "d877e0f6-019c-4b97-8a36-9a210b9e4233"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:29:48 crc kubenswrapper[4754]: I0105 20:29:48.514608 4754 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-084c4888-b72a-436d-98f1-8bfa7cd3c357\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-084c4888-b72a-436d-98f1-8bfa7cd3c357\") on node \"crc\" " Jan 05 20:29:48 crc kubenswrapper[4754]: I0105 20:29:48.514644 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d877e0f6-019c-4b97-8a36-9a210b9e4233-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:48 crc kubenswrapper[4754]: I0105 20:29:48.514656 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d877e0f6-019c-4b97-8a36-9a210b9e4233-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:48 crc kubenswrapper[4754]: I0105 20:29:48.550505 4754 generic.go:334] "Generic (PLEG): container finished" podID="4a1b633c-0572-43e3-ab65-9ecf41447261" 
containerID="e3613340ba344b24abe633054c3a99b123dea95ba3e13ab40e57eb2c003da79f" exitCode=1 Jan 05 20:29:48 crc kubenswrapper[4754]: I0105 20:29:48.550575 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-679bf7799d-5zzsg" event={"ID":"4a1b633c-0572-43e3-ab65-9ecf41447261","Type":"ContainerDied","Data":"e3613340ba344b24abe633054c3a99b123dea95ba3e13ab40e57eb2c003da79f"} Jan 05 20:29:48 crc kubenswrapper[4754]: I0105 20:29:48.551413 4754 scope.go:117] "RemoveContainer" containerID="e3613340ba344b24abe633054c3a99b123dea95ba3e13ab40e57eb2c003da79f" Jan 05 20:29:48 crc kubenswrapper[4754]: I0105 20:29:48.560137 4754 generic.go:334] "Generic (PLEG): container finished" podID="788477a1-9462-462e-ac96-5c6c5659437f" containerID="bc6c354af6970b2e09f9c0388dad3fd50f65999c5cdd9e4ef42ede3e87b8fdb2" exitCode=0 Jan 05 20:29:48 crc kubenswrapper[4754]: I0105 20:29:48.560190 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-kxzd4" event={"ID":"788477a1-9462-462e-ac96-5c6c5659437f","Type":"ContainerDied","Data":"bc6c354af6970b2e09f9c0388dad3fd50f65999c5cdd9e4ef42ede3e87b8fdb2"} Jan 05 20:29:48 crc kubenswrapper[4754]: I0105 20:29:48.581502 4754 generic.go:334] "Generic (PLEG): container finished" podID="8b1a8ee9-a244-4b25-acf5-4ab0de607c0d" containerID="2f0c7cf0b8e9866f348e874feb37c9cd18d26a7e639f39ac3132ab6d47030e08" exitCode=0 Jan 05 20:29:48 crc kubenswrapper[4754]: I0105 20:29:48.581575 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-h42n5" event={"ID":"8b1a8ee9-a244-4b25-acf5-4ab0de607c0d","Type":"ContainerDied","Data":"2f0c7cf0b8e9866f348e874feb37c9cd18d26a7e639f39ac3132ab6d47030e08"} Jan 05 20:29:48 crc kubenswrapper[4754]: I0105 20:29:48.638535 4754 generic.go:334] "Generic (PLEG): container finished" podID="71c4958d-a956-4009-8f00-e4f16cab1b6b" containerID="48a63afe10e5a8359b2e8ba96a47e9a9f14391604f480250185359a67c52fe68" exitCode=1 Jan 05 20:29:48 crc 
kubenswrapper[4754]: I0105 20:29:48.638575 4754 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 05 20:29:48 crc kubenswrapper[4754]: I0105 20:29:48.638634 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5c4fdffcf-f4rzg" event={"ID":"71c4958d-a956-4009-8f00-e4f16cab1b6b","Type":"ContainerDied","Data":"48a63afe10e5a8359b2e8ba96a47e9a9f14391604f480250185359a67c52fe68"} Jan 05 20:29:48 crc kubenswrapper[4754]: I0105 20:29:48.638722 4754 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-084c4888-b72a-436d-98f1-8bfa7cd3c357" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-084c4888-b72a-436d-98f1-8bfa7cd3c357") on node "crc" Jan 05 20:29:48 crc kubenswrapper[4754]: I0105 20:29:48.639496 4754 scope.go:117] "RemoveContainer" containerID="48a63afe10e5a8359b2e8ba96a47e9a9f14391604f480250185359a67c52fe68" Jan 05 20:29:48 crc kubenswrapper[4754]: I0105 20:29:48.651623 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21c34859-60d5-47ab-93ea-b4837ddc28f1","Type":"ContainerStarted","Data":"c7dc8fc0c1a6c03f32fbf561094de694eced70fd5e553e8dbe66e460b8f362e4"} Jan 05 20:29:48 crc kubenswrapper[4754]: E0105 20:29:48.659899 4754 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff2c492b_de99_47c6_8cad_42a1427039f1.slice/crio-f7082f0bee395878f2a92eadd2af3ee5881214eeb7a8dda8a60a5ae0dd3028c4.scope\": RecentStats: unable to find data in memory cache]" Jan 05 20:29:48 crc kubenswrapper[4754]: I0105 20:29:48.730272 4754 reconciler_common.go:293] "Volume detached for volume \"pvc-084c4888-b72a-436d-98f1-8bfa7cd3c357\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-084c4888-b72a-436d-98f1-8bfa7cd3c357\") on node \"crc\" DevicePath \"\"" Jan 05 
20:29:48 crc kubenswrapper[4754]: I0105 20:29:48.764825 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d877e0f6-019c-4b97-8a36-9a210b9e4233","Type":"ContainerDied","Data":"c5d5c7325432a6f169d2bf791b50ab76eb1dc49ed7101ecf03014624e6b68dbe"} Jan 05 20:29:48 crc kubenswrapper[4754]: I0105 20:29:48.764909 4754 scope.go:117] "RemoveContainer" containerID="38fc911195a2fddb8faae4d68aa93e6c536d2633ca852d8558b90ab47cda31ac" Jan 05 20:29:48 crc kubenswrapper[4754]: I0105 20:29:48.765123 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 05 20:29:48 crc kubenswrapper[4754]: I0105 20:29:48.775517 4754 generic.go:334] "Generic (PLEG): container finished" podID="8c40bae7-8986-432e-8050-cec73db2bfdd" containerID="7fddd300ab96ea5856577fb2f60617fa4f83f23e53ce97f9b86e8e18703eb154" exitCode=0 Jan 05 20:29:48 crc kubenswrapper[4754]: I0105 20:29:48.775658 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9214-account-create-update-qwl6t" event={"ID":"8c40bae7-8986-432e-8050-cec73db2bfdd","Type":"ContainerDied","Data":"7fddd300ab96ea5856577fb2f60617fa4f83f23e53ce97f9b86e8e18703eb154"} Jan 05 20:29:48 crc kubenswrapper[4754]: I0105 20:29:48.902360 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 20:29:48 crc kubenswrapper[4754]: I0105 20:29:48.927166 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 20:29:48 crc kubenswrapper[4754]: I0105 20:29:48.982958 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 20:29:48 crc kubenswrapper[4754]: E0105 20:29:48.983876 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53d3e036-3e71-4b72-ae09-2d84f5b12b6d" containerName="mariadb-database-create" Jan 05 20:29:48 crc 
kubenswrapper[4754]: I0105 20:29:48.983894 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="53d3e036-3e71-4b72-ae09-2d84f5b12b6d" containerName="mariadb-database-create" Jan 05 20:29:48 crc kubenswrapper[4754]: E0105 20:29:48.983936 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d877e0f6-019c-4b97-8a36-9a210b9e4233" containerName="glance-log" Jan 05 20:29:48 crc kubenswrapper[4754]: I0105 20:29:48.983943 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="d877e0f6-019c-4b97-8a36-9a210b9e4233" containerName="glance-log" Jan 05 20:29:48 crc kubenswrapper[4754]: E0105 20:29:48.983952 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="773cb53b-3bb1-4b11-bb3d-c11c352fcdfb" containerName="mariadb-account-create-update" Jan 05 20:29:48 crc kubenswrapper[4754]: I0105 20:29:48.983959 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="773cb53b-3bb1-4b11-bb3d-c11c352fcdfb" containerName="mariadb-account-create-update" Jan 05 20:29:48 crc kubenswrapper[4754]: E0105 20:29:48.983982 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d877e0f6-019c-4b97-8a36-9a210b9e4233" containerName="glance-httpd" Jan 05 20:29:48 crc kubenswrapper[4754]: I0105 20:29:48.983988 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="d877e0f6-019c-4b97-8a36-9a210b9e4233" containerName="glance-httpd" Jan 05 20:29:48 crc kubenswrapper[4754]: I0105 20:29:48.984276 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="773cb53b-3bb1-4b11-bb3d-c11c352fcdfb" containerName="mariadb-account-create-update" Jan 05 20:29:48 crc kubenswrapper[4754]: I0105 20:29:48.984307 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="53d3e036-3e71-4b72-ae09-2d84f5b12b6d" containerName="mariadb-database-create" Jan 05 20:29:48 crc kubenswrapper[4754]: I0105 20:29:48.984329 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="d877e0f6-019c-4b97-8a36-9a210b9e4233" 
containerName="glance-log" Jan 05 20:29:48 crc kubenswrapper[4754]: I0105 20:29:48.984345 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="d877e0f6-019c-4b97-8a36-9a210b9e4233" containerName="glance-httpd" Jan 05 20:29:49 crc kubenswrapper[4754]: I0105 20:29:49.011525 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 20:29:49 crc kubenswrapper[4754]: I0105 20:29:49.012036 4754 scope.go:117] "RemoveContainer" containerID="146d9ffc3b8e9ece5f9e2721869ea9a8b5e8fa909e02c39efdf21d47085a2c30" Jan 05 20:29:49 crc kubenswrapper[4754]: I0105 20:29:49.022157 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 05 20:29:49 crc kubenswrapper[4754]: I0105 20:29:49.028030 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 05 20:29:49 crc kubenswrapper[4754]: I0105 20:29:49.028894 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 05 20:29:49 crc kubenswrapper[4754]: I0105 20:29:49.087445 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck7kl\" (UniqueName: \"kubernetes.io/projected/651e1d87-a791-4aab-92b8-68aae7da2a91-kube-api-access-ck7kl\") pod \"glance-default-internal-api-0\" (UID: \"651e1d87-a791-4aab-92b8-68aae7da2a91\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:29:49 crc kubenswrapper[4754]: I0105 20:29:49.087965 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/651e1d87-a791-4aab-92b8-68aae7da2a91-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"651e1d87-a791-4aab-92b8-68aae7da2a91\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:29:49 crc kubenswrapper[4754]: I0105 
20:29:49.088022 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/651e1d87-a791-4aab-92b8-68aae7da2a91-config-data\") pod \"glance-default-internal-api-0\" (UID: \"651e1d87-a791-4aab-92b8-68aae7da2a91\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:29:49 crc kubenswrapper[4754]: I0105 20:29:49.088103 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/651e1d87-a791-4aab-92b8-68aae7da2a91-logs\") pod \"glance-default-internal-api-0\" (UID: \"651e1d87-a791-4aab-92b8-68aae7da2a91\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:29:49 crc kubenswrapper[4754]: I0105 20:29:49.088199 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/651e1d87-a791-4aab-92b8-68aae7da2a91-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"651e1d87-a791-4aab-92b8-68aae7da2a91\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:29:49 crc kubenswrapper[4754]: I0105 20:29:49.088313 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-084c4888-b72a-436d-98f1-8bfa7cd3c357\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-084c4888-b72a-436d-98f1-8bfa7cd3c357\") pod \"glance-default-internal-api-0\" (UID: \"651e1d87-a791-4aab-92b8-68aae7da2a91\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:29:49 crc kubenswrapper[4754]: I0105 20:29:49.088349 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/651e1d87-a791-4aab-92b8-68aae7da2a91-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"651e1d87-a791-4aab-92b8-68aae7da2a91\") " 
pod="openstack/glance-default-internal-api-0" Jan 05 20:29:49 crc kubenswrapper[4754]: I0105 20:29:49.088405 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/651e1d87-a791-4aab-92b8-68aae7da2a91-scripts\") pod \"glance-default-internal-api-0\" (UID: \"651e1d87-a791-4aab-92b8-68aae7da2a91\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:29:49 crc kubenswrapper[4754]: I0105 20:29:49.193940 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-084c4888-b72a-436d-98f1-8bfa7cd3c357\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-084c4888-b72a-436d-98f1-8bfa7cd3c357\") pod \"glance-default-internal-api-0\" (UID: \"651e1d87-a791-4aab-92b8-68aae7da2a91\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:29:49 crc kubenswrapper[4754]: I0105 20:29:49.193996 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/651e1d87-a791-4aab-92b8-68aae7da2a91-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"651e1d87-a791-4aab-92b8-68aae7da2a91\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:29:49 crc kubenswrapper[4754]: I0105 20:29:49.194036 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/651e1d87-a791-4aab-92b8-68aae7da2a91-scripts\") pod \"glance-default-internal-api-0\" (UID: \"651e1d87-a791-4aab-92b8-68aae7da2a91\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:29:49 crc kubenswrapper[4754]: I0105 20:29:49.194097 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ck7kl\" (UniqueName: \"kubernetes.io/projected/651e1d87-a791-4aab-92b8-68aae7da2a91-kube-api-access-ck7kl\") pod \"glance-default-internal-api-0\" (UID: \"651e1d87-a791-4aab-92b8-68aae7da2a91\") 
" pod="openstack/glance-default-internal-api-0" Jan 05 20:29:49 crc kubenswrapper[4754]: I0105 20:29:49.194131 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/651e1d87-a791-4aab-92b8-68aae7da2a91-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"651e1d87-a791-4aab-92b8-68aae7da2a91\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:29:49 crc kubenswrapper[4754]: I0105 20:29:49.194161 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/651e1d87-a791-4aab-92b8-68aae7da2a91-config-data\") pod \"glance-default-internal-api-0\" (UID: \"651e1d87-a791-4aab-92b8-68aae7da2a91\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:29:49 crc kubenswrapper[4754]: I0105 20:29:49.194203 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/651e1d87-a791-4aab-92b8-68aae7da2a91-logs\") pod \"glance-default-internal-api-0\" (UID: \"651e1d87-a791-4aab-92b8-68aae7da2a91\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:29:49 crc kubenswrapper[4754]: I0105 20:29:49.194244 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/651e1d87-a791-4aab-92b8-68aae7da2a91-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"651e1d87-a791-4aab-92b8-68aae7da2a91\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:29:49 crc kubenswrapper[4754]: I0105 20:29:49.194921 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/651e1d87-a791-4aab-92b8-68aae7da2a91-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"651e1d87-a791-4aab-92b8-68aae7da2a91\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:29:49 crc kubenswrapper[4754]: I0105 
20:29:49.204146 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/651e1d87-a791-4aab-92b8-68aae7da2a91-logs\") pod \"glance-default-internal-api-0\" (UID: \"651e1d87-a791-4aab-92b8-68aae7da2a91\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:29:49 crc kubenswrapper[4754]: I0105 20:29:49.207007 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/651e1d87-a791-4aab-92b8-68aae7da2a91-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"651e1d87-a791-4aab-92b8-68aae7da2a91\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:29:49 crc kubenswrapper[4754]: I0105 20:29:49.221645 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/651e1d87-a791-4aab-92b8-68aae7da2a91-scripts\") pod \"glance-default-internal-api-0\" (UID: \"651e1d87-a791-4aab-92b8-68aae7da2a91\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:29:49 crc kubenswrapper[4754]: I0105 20:29:49.229885 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/651e1d87-a791-4aab-92b8-68aae7da2a91-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"651e1d87-a791-4aab-92b8-68aae7da2a91\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:29:49 crc kubenswrapper[4754]: I0105 20:29:49.233827 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/651e1d87-a791-4aab-92b8-68aae7da2a91-config-data\") pod \"glance-default-internal-api-0\" (UID: \"651e1d87-a791-4aab-92b8-68aae7da2a91\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:29:49 crc kubenswrapper[4754]: I0105 20:29:49.258779 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ck7kl\" 
(UniqueName: \"kubernetes.io/projected/651e1d87-a791-4aab-92b8-68aae7da2a91-kube-api-access-ck7kl\") pod \"glance-default-internal-api-0\" (UID: \"651e1d87-a791-4aab-92b8-68aae7da2a91\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:29:49 crc kubenswrapper[4754]: I0105 20:29:49.348066 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d978555f9-p7xv4" Jan 05 20:29:49 crc kubenswrapper[4754]: I0105 20:29:49.381028 4754 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 05 20:29:49 crc kubenswrapper[4754]: I0105 20:29:49.381066 4754 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-084c4888-b72a-436d-98f1-8bfa7cd3c357\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-084c4888-b72a-436d-98f1-8bfa7cd3c357\") pod \"glance-default-internal-api-0\" (UID: \"651e1d87-a791-4aab-92b8-68aae7da2a91\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0601c73c2da0209a9fcf070a1dc03f94eea8e72ac7d4d13dd6d1bb747292aec9/globalmount\"" pod="openstack/glance-default-internal-api-0" Jan 05 20:29:49 crc kubenswrapper[4754]: I0105 20:29:49.516462 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-wwjkm"] Jan 05 20:29:49 crc kubenswrapper[4754]: I0105 20:29:49.517025 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bb4fc677f-wwjkm" podUID="7b0677b2-c5d5-4a04-9f26-52aa89506809" containerName="dnsmasq-dns" containerID="cri-o://6387ce543464fc9a38093c2bfa92d61efb626b513cd874b141d1b49161e82863" gracePeriod=10 Jan 05 20:29:49 crc kubenswrapper[4754]: I0105 20:29:49.598663 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-084c4888-b72a-436d-98f1-8bfa7cd3c357\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-084c4888-b72a-436d-98f1-8bfa7cd3c357\") pod \"glance-default-internal-api-0\" (UID: \"651e1d87-a791-4aab-92b8-68aae7da2a91\") " pod="openstack/glance-default-internal-api-0" Jan 05 20:29:49 crc kubenswrapper[4754]: I0105 20:29:49.643155 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 05 20:29:49 crc kubenswrapper[4754]: I0105 20:29:49.670039 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d877e0f6-019c-4b97-8a36-9a210b9e4233" path="/var/lib/kubelet/pods/d877e0f6-019c-4b97-8a36-9a210b9e4233/volumes" Jan 05 20:29:49 crc kubenswrapper[4754]: I0105 20:29:49.691412 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-fe16-account-create-update-k6rtd" Jan 05 20:29:49 crc kubenswrapper[4754]: I0105 20:29:49.698373 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6bb4fc677f-wwjkm" podUID="7b0677b2-c5d5-4a04-9f26-52aa89506809" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.204:5353: connect: connection refused" Jan 05 20:29:49 crc kubenswrapper[4754]: I0105 20:29:49.762704 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b26ee17-b21e-4bb3-8570-1de5435a6ea5-operator-scripts\") pod \"9b26ee17-b21e-4bb3-8570-1de5435a6ea5\" (UID: \"9b26ee17-b21e-4bb3-8570-1de5435a6ea5\") " Jan 05 20:29:49 crc kubenswrapper[4754]: I0105 20:29:49.762812 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpg5c\" (UniqueName: \"kubernetes.io/projected/9b26ee17-b21e-4bb3-8570-1de5435a6ea5-kube-api-access-rpg5c\") pod \"9b26ee17-b21e-4bb3-8570-1de5435a6ea5\" (UID: \"9b26ee17-b21e-4bb3-8570-1de5435a6ea5\") " Jan 05 20:29:49 crc kubenswrapper[4754]: I0105 20:29:49.769490 4754 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b26ee17-b21e-4bb3-8570-1de5435a6ea5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9b26ee17-b21e-4bb3-8570-1de5435a6ea5" (UID: "9b26ee17-b21e-4bb3-8570-1de5435a6ea5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:29:49 crc kubenswrapper[4754]: I0105 20:29:49.774518 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b26ee17-b21e-4bb3-8570-1de5435a6ea5-kube-api-access-rpg5c" (OuterVolumeSpecName: "kube-api-access-rpg5c") pod "9b26ee17-b21e-4bb3-8570-1de5435a6ea5" (UID: "9b26ee17-b21e-4bb3-8570-1de5435a6ea5"). InnerVolumeSpecName "kube-api-access-rpg5c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:29:49 crc kubenswrapper[4754]: I0105 20:29:49.807821 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-fe16-account-create-update-k6rtd" Jan 05 20:29:49 crc kubenswrapper[4754]: I0105 20:29:49.808197 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-fe16-account-create-update-k6rtd" event={"ID":"9b26ee17-b21e-4bb3-8570-1de5435a6ea5","Type":"ContainerDied","Data":"980b6d3c9aa70ca5f0d9dba5bac4c83b4432f391fb61104c63e528a463c1add5"} Jan 05 20:29:49 crc kubenswrapper[4754]: I0105 20:29:49.808467 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="980b6d3c9aa70ca5f0d9dba5bac4c83b4432f391fb61104c63e528a463c1add5" Jan 05 20:29:49 crc kubenswrapper[4754]: I0105 20:29:49.867505 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpg5c\" (UniqueName: \"kubernetes.io/projected/9b26ee17-b21e-4bb3-8570-1de5435a6ea5-kube-api-access-rpg5c\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:49 crc kubenswrapper[4754]: I0105 20:29:49.867882 4754 reconciler_common.go:293] "Volume detached for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b26ee17-b21e-4bb3-8570-1de5435a6ea5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:50 crc kubenswrapper[4754]: I0105 20:29:50.864655 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21c34859-60d5-47ab-93ea-b4837ddc28f1","Type":"ContainerStarted","Data":"6f31aeb72ff4d2dd058bf591a57836d09d50cc88d87087bfcc95ab1414ab401d"} Jan 05 20:29:50 crc kubenswrapper[4754]: I0105 20:29:50.875428 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ff2c492b-de99-47c6-8cad-42a1427039f1","Type":"ContainerStarted","Data":"f7082f0bee395878f2a92eadd2af3ee5881214eeb7a8dda8a60a5ae0dd3028c4"} Jan 05 20:29:50 crc kubenswrapper[4754]: I0105 20:29:50.892897 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-679bf7799d-5zzsg" event={"ID":"4a1b633c-0572-43e3-ab65-9ecf41447261","Type":"ContainerStarted","Data":"c68602efeff905335e6a1e3ddd581dd939a34b71748c44244f1047aa1b43b3fa"} Jan 05 20:29:50 crc kubenswrapper[4754]: I0105 20:29:50.894932 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-679bf7799d-5zzsg" Jan 05 20:29:50 crc kubenswrapper[4754]: I0105 20:29:50.905675 4754 generic.go:334] "Generic (PLEG): container finished" podID="7b0677b2-c5d5-4a04-9f26-52aa89506809" containerID="6387ce543464fc9a38093c2bfa92d61efb626b513cd874b141d1b49161e82863" exitCode=0 Jan 05 20:29:50 crc kubenswrapper[4754]: I0105 20:29:50.905758 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-wwjkm" event={"ID":"7b0677b2-c5d5-4a04-9f26-52aa89506809","Type":"ContainerDied","Data":"6387ce543464fc9a38093c2bfa92d61efb626b513cd874b141d1b49161e82863"} Jan 05 20:29:50 crc kubenswrapper[4754]: I0105 20:29:50.937521 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/glance-default-external-api-0" podStartSLOduration=9.937484079 podStartE2EDuration="9.937484079s" podCreationTimestamp="2026-01-05 20:29:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:29:50.904945937 +0000 UTC m=+1477.614129811" watchObservedRunningTime="2026-01-05 20:29:50.937484079 +0000 UTC m=+1477.646667953" Jan 05 20:29:50 crc kubenswrapper[4754]: I0105 20:29:50.951427 4754 generic.go:334] "Generic (PLEG): container finished" podID="71c4958d-a956-4009-8f00-e4f16cab1b6b" containerID="97e19229705f1c1725d7f1d55362ef865f8ec7d1a57daa00a95bc2d4988f61a7" exitCode=1 Jan 05 20:29:50 crc kubenswrapper[4754]: I0105 20:29:50.951470 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5c4fdffcf-f4rzg" event={"ID":"71c4958d-a956-4009-8f00-e4f16cab1b6b","Type":"ContainerDied","Data":"97e19229705f1c1725d7f1d55362ef865f8ec7d1a57daa00a95bc2d4988f61a7"} Jan 05 20:29:50 crc kubenswrapper[4754]: I0105 20:29:50.951517 4754 scope.go:117] "RemoveContainer" containerID="48a63afe10e5a8359b2e8ba96a47e9a9f14391604f480250185359a67c52fe68" Jan 05 20:29:50 crc kubenswrapper[4754]: I0105 20:29:50.952475 4754 scope.go:117] "RemoveContainer" containerID="97e19229705f1c1725d7f1d55362ef865f8ec7d1a57daa00a95bc2d4988f61a7" Jan 05 20:29:50 crc kubenswrapper[4754]: E0105 20:29:50.952746 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-5c4fdffcf-f4rzg_openstack(71c4958d-a956-4009-8f00-e4f16cab1b6b)\"" pod="openstack/heat-api-5c4fdffcf-f4rzg" podUID="71c4958d-a956-4009-8f00-e4f16cab1b6b" Jan 05 20:29:51 crc kubenswrapper[4754]: I0105 20:29:51.333883 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-9214-account-create-update-qwl6t" Jan 05 20:29:51 crc kubenswrapper[4754]: I0105 20:29:51.386398 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-h42n5" Jan 05 20:29:51 crc kubenswrapper[4754]: I0105 20:29:51.418850 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-kxzd4" Jan 05 20:29:51 crc kubenswrapper[4754]: I0105 20:29:51.476950 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 20:29:51 crc kubenswrapper[4754]: I0105 20:29:51.489330 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c40bae7-8986-432e-8050-cec73db2bfdd-operator-scripts\") pod \"8c40bae7-8986-432e-8050-cec73db2bfdd\" (UID: \"8c40bae7-8986-432e-8050-cec73db2bfdd\") " Jan 05 20:29:51 crc kubenswrapper[4754]: I0105 20:29:51.489470 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z65k9\" (UniqueName: \"kubernetes.io/projected/8c40bae7-8986-432e-8050-cec73db2bfdd-kube-api-access-z65k9\") pod \"8c40bae7-8986-432e-8050-cec73db2bfdd\" (UID: \"8c40bae7-8986-432e-8050-cec73db2bfdd\") " Jan 05 20:29:51 crc kubenswrapper[4754]: I0105 20:29:51.489562 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b1a8ee9-a244-4b25-acf5-4ab0de607c0d-operator-scripts\") pod \"8b1a8ee9-a244-4b25-acf5-4ab0de607c0d\" (UID: \"8b1a8ee9-a244-4b25-acf5-4ab0de607c0d\") " Jan 05 20:29:51 crc kubenswrapper[4754]: I0105 20:29:51.489598 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxxvv\" (UniqueName: \"kubernetes.io/projected/8b1a8ee9-a244-4b25-acf5-4ab0de607c0d-kube-api-access-dxxvv\") pod 
\"8b1a8ee9-a244-4b25-acf5-4ab0de607c0d\" (UID: \"8b1a8ee9-a244-4b25-acf5-4ab0de607c0d\") " Jan 05 20:29:51 crc kubenswrapper[4754]: I0105 20:29:51.490857 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b1a8ee9-a244-4b25-acf5-4ab0de607c0d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8b1a8ee9-a244-4b25-acf5-4ab0de607c0d" (UID: "8b1a8ee9-a244-4b25-acf5-4ab0de607c0d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:29:51 crc kubenswrapper[4754]: I0105 20:29:51.491217 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c40bae7-8986-432e-8050-cec73db2bfdd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8c40bae7-8986-432e-8050-cec73db2bfdd" (UID: "8c40bae7-8986-432e-8050-cec73db2bfdd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:29:51 crc kubenswrapper[4754]: I0105 20:29:51.543887 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c40bae7-8986-432e-8050-cec73db2bfdd-kube-api-access-z65k9" (OuterVolumeSpecName: "kube-api-access-z65k9") pod "8c40bae7-8986-432e-8050-cec73db2bfdd" (UID: "8c40bae7-8986-432e-8050-cec73db2bfdd"). InnerVolumeSpecName "kube-api-access-z65k9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:29:51 crc kubenswrapper[4754]: I0105 20:29:51.543955 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b1a8ee9-a244-4b25-acf5-4ab0de607c0d-kube-api-access-dxxvv" (OuterVolumeSpecName: "kube-api-access-dxxvv") pod "8b1a8ee9-a244-4b25-acf5-4ab0de607c0d" (UID: "8b1a8ee9-a244-4b25-acf5-4ab0de607c0d"). InnerVolumeSpecName "kube-api-access-dxxvv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:29:51 crc kubenswrapper[4754]: I0105 20:29:51.594442 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpkn8\" (UniqueName: \"kubernetes.io/projected/788477a1-9462-462e-ac96-5c6c5659437f-kube-api-access-wpkn8\") pod \"788477a1-9462-462e-ac96-5c6c5659437f\" (UID: \"788477a1-9462-462e-ac96-5c6c5659437f\") " Jan 05 20:29:51 crc kubenswrapper[4754]: I0105 20:29:51.594943 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/788477a1-9462-462e-ac96-5c6c5659437f-operator-scripts\") pod \"788477a1-9462-462e-ac96-5c6c5659437f\" (UID: \"788477a1-9462-462e-ac96-5c6c5659437f\") " Jan 05 20:29:51 crc kubenswrapper[4754]: I0105 20:29:51.595644 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/788477a1-9462-462e-ac96-5c6c5659437f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "788477a1-9462-462e-ac96-5c6c5659437f" (UID: "788477a1-9462-462e-ac96-5c6c5659437f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:29:51 crc kubenswrapper[4754]: I0105 20:29:51.596113 4754 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/788477a1-9462-462e-ac96-5c6c5659437f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:51 crc kubenswrapper[4754]: I0105 20:29:51.596205 4754 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c40bae7-8986-432e-8050-cec73db2bfdd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:51 crc kubenswrapper[4754]: I0105 20:29:51.596222 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z65k9\" (UniqueName: \"kubernetes.io/projected/8c40bae7-8986-432e-8050-cec73db2bfdd-kube-api-access-z65k9\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:51 crc kubenswrapper[4754]: I0105 20:29:51.596237 4754 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b1a8ee9-a244-4b25-acf5-4ab0de607c0d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:51 crc kubenswrapper[4754]: I0105 20:29:51.596246 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxxvv\" (UniqueName: \"kubernetes.io/projected/8b1a8ee9-a244-4b25-acf5-4ab0de607c0d-kube-api-access-dxxvv\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:51 crc kubenswrapper[4754]: I0105 20:29:51.598617 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/788477a1-9462-462e-ac96-5c6c5659437f-kube-api-access-wpkn8" (OuterVolumeSpecName: "kube-api-access-wpkn8") pod "788477a1-9462-462e-ac96-5c6c5659437f" (UID: "788477a1-9462-462e-ac96-5c6c5659437f"). InnerVolumeSpecName "kube-api-access-wpkn8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:29:51 crc kubenswrapper[4754]: I0105 20:29:51.701033 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpkn8\" (UniqueName: \"kubernetes.io/projected/788477a1-9462-462e-ac96-5c6c5659437f-kube-api-access-wpkn8\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:51 crc kubenswrapper[4754]: I0105 20:29:51.763192 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-wwjkm" Jan 05 20:29:51 crc kubenswrapper[4754]: I0105 20:29:51.905598 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7b0677b2-c5d5-4a04-9f26-52aa89506809-dns-swift-storage-0\") pod \"7b0677b2-c5d5-4a04-9f26-52aa89506809\" (UID: \"7b0677b2-c5d5-4a04-9f26-52aa89506809\") " Jan 05 20:29:51 crc kubenswrapper[4754]: I0105 20:29:51.905677 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7l72x\" (UniqueName: \"kubernetes.io/projected/7b0677b2-c5d5-4a04-9f26-52aa89506809-kube-api-access-7l72x\") pod \"7b0677b2-c5d5-4a04-9f26-52aa89506809\" (UID: \"7b0677b2-c5d5-4a04-9f26-52aa89506809\") " Jan 05 20:29:51 crc kubenswrapper[4754]: I0105 20:29:51.905738 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b0677b2-c5d5-4a04-9f26-52aa89506809-config\") pod \"7b0677b2-c5d5-4a04-9f26-52aa89506809\" (UID: \"7b0677b2-c5d5-4a04-9f26-52aa89506809\") " Jan 05 20:29:51 crc kubenswrapper[4754]: I0105 20:29:51.905931 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7b0677b2-c5d5-4a04-9f26-52aa89506809-ovsdbserver-sb\") pod \"7b0677b2-c5d5-4a04-9f26-52aa89506809\" (UID: \"7b0677b2-c5d5-4a04-9f26-52aa89506809\") " Jan 05 20:29:51 crc kubenswrapper[4754]: I0105 
20:29:51.906084 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7b0677b2-c5d5-4a04-9f26-52aa89506809-dns-svc\") pod \"7b0677b2-c5d5-4a04-9f26-52aa89506809\" (UID: \"7b0677b2-c5d5-4a04-9f26-52aa89506809\") " Jan 05 20:29:51 crc kubenswrapper[4754]: I0105 20:29:51.906166 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7b0677b2-c5d5-4a04-9f26-52aa89506809-ovsdbserver-nb\") pod \"7b0677b2-c5d5-4a04-9f26-52aa89506809\" (UID: \"7b0677b2-c5d5-4a04-9f26-52aa89506809\") " Jan 05 20:29:51 crc kubenswrapper[4754]: I0105 20:29:51.912401 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b0677b2-c5d5-4a04-9f26-52aa89506809-kube-api-access-7l72x" (OuterVolumeSpecName: "kube-api-access-7l72x") pod "7b0677b2-c5d5-4a04-9f26-52aa89506809" (UID: "7b0677b2-c5d5-4a04-9f26-52aa89506809"). InnerVolumeSpecName "kube-api-access-7l72x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:29:51 crc kubenswrapper[4754]: I0105 20:29:51.976924 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-kxzd4" event={"ID":"788477a1-9462-462e-ac96-5c6c5659437f","Type":"ContainerDied","Data":"6b5af14fad83119e9c0018773181393821542facc28b738a4cf947389ecbb537"} Jan 05 20:29:51 crc kubenswrapper[4754]: I0105 20:29:51.976964 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b5af14fad83119e9c0018773181393821542facc28b738a4cf947389ecbb537" Jan 05 20:29:51 crc kubenswrapper[4754]: I0105 20:29:51.977026 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-kxzd4" Jan 05 20:29:51 crc kubenswrapper[4754]: I0105 20:29:51.981052 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"651e1d87-a791-4aab-92b8-68aae7da2a91","Type":"ContainerStarted","Data":"dccc91442dcbf345aa7b67eaa4f76be1c67029f5b9ca7007017905287cff4134"} Jan 05 20:29:52 crc kubenswrapper[4754]: I0105 20:29:52.001506 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-h42n5" event={"ID":"8b1a8ee9-a244-4b25-acf5-4ab0de607c0d","Type":"ContainerDied","Data":"237b9316d8ded96c332e82ca2bfb4b4d9700ef9afa4d244d067646421744f163"} Jan 05 20:29:52 crc kubenswrapper[4754]: I0105 20:29:52.002030 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="237b9316d8ded96c332e82ca2bfb4b4d9700ef9afa4d244d067646421744f163" Jan 05 20:29:52 crc kubenswrapper[4754]: I0105 20:29:52.002151 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-h42n5" Jan 05 20:29:52 crc kubenswrapper[4754]: I0105 20:29:52.007065 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b0677b2-c5d5-4a04-9f26-52aa89506809-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7b0677b2-c5d5-4a04-9f26-52aa89506809" (UID: "7b0677b2-c5d5-4a04-9f26-52aa89506809"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:29:52 crc kubenswrapper[4754]: I0105 20:29:52.008097 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-wwjkm" Jan 05 20:29:52 crc kubenswrapper[4754]: I0105 20:29:52.008668 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-wwjkm" event={"ID":"7b0677b2-c5d5-4a04-9f26-52aa89506809","Type":"ContainerDied","Data":"dda609464664dcc76b2df134bf99d6e3da8d564f47731fd5f2e43da89a3f1981"} Jan 05 20:29:52 crc kubenswrapper[4754]: I0105 20:29:52.008707 4754 scope.go:117] "RemoveContainer" containerID="6387ce543464fc9a38093c2bfa92d61efb626b513cd874b141d1b49161e82863" Jan 05 20:29:52 crc kubenswrapper[4754]: I0105 20:29:52.009183 4754 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7b0677b2-c5d5-4a04-9f26-52aa89506809-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:52 crc kubenswrapper[4754]: I0105 20:29:52.009208 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7l72x\" (UniqueName: \"kubernetes.io/projected/7b0677b2-c5d5-4a04-9f26-52aa89506809-kube-api-access-7l72x\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:52 crc kubenswrapper[4754]: I0105 20:29:52.012413 4754 scope.go:117] "RemoveContainer" containerID="97e19229705f1c1725d7f1d55362ef865f8ec7d1a57daa00a95bc2d4988f61a7" Jan 05 20:29:52 crc kubenswrapper[4754]: E0105 20:29:52.013240 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-5c4fdffcf-f4rzg_openstack(71c4958d-a956-4009-8f00-e4f16cab1b6b)\"" pod="openstack/heat-api-5c4fdffcf-f4rzg" podUID="71c4958d-a956-4009-8f00-e4f16cab1b6b" Jan 05 20:29:52 crc kubenswrapper[4754]: I0105 20:29:52.014953 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21c34859-60d5-47ab-93ea-b4837ddc28f1","Type":"ContainerStarted","Data":"6f7ce2859d7a45642fb669f936fc6625d5065865d723cdb5a81a3e9983abc458"} Jan 05 
20:29:52 crc kubenswrapper[4754]: I0105 20:29:52.017576 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b0677b2-c5d5-4a04-9f26-52aa89506809-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7b0677b2-c5d5-4a04-9f26-52aa89506809" (UID: "7b0677b2-c5d5-4a04-9f26-52aa89506809"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:29:52 crc kubenswrapper[4754]: I0105 20:29:52.030958 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b0677b2-c5d5-4a04-9f26-52aa89506809-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7b0677b2-c5d5-4a04-9f26-52aa89506809" (UID: "7b0677b2-c5d5-4a04-9f26-52aa89506809"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:29:52 crc kubenswrapper[4754]: I0105 20:29:52.031027 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b0677b2-c5d5-4a04-9f26-52aa89506809-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7b0677b2-c5d5-4a04-9f26-52aa89506809" (UID: "7b0677b2-c5d5-4a04-9f26-52aa89506809"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:29:52 crc kubenswrapper[4754]: I0105 20:29:52.031329 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9214-account-create-update-qwl6t" event={"ID":"8c40bae7-8986-432e-8050-cec73db2bfdd","Type":"ContainerDied","Data":"ac4f460302920a2a9bfe93eb289d06afbaa85d0917354e6c18a50ecf68b06725"} Jan 05 20:29:52 crc kubenswrapper[4754]: I0105 20:29:52.031368 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac4f460302920a2a9bfe93eb289d06afbaa85d0917354e6c18a50ecf68b06725" Jan 05 20:29:52 crc kubenswrapper[4754]: I0105 20:29:52.031507 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-9214-account-create-update-qwl6t" Jan 05 20:29:52 crc kubenswrapper[4754]: I0105 20:29:52.041591 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b0677b2-c5d5-4a04-9f26-52aa89506809-config" (OuterVolumeSpecName: "config") pod "7b0677b2-c5d5-4a04-9f26-52aa89506809" (UID: "7b0677b2-c5d5-4a04-9f26-52aa89506809"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:29:52 crc kubenswrapper[4754]: I0105 20:29:52.059681 4754 generic.go:334] "Generic (PLEG): container finished" podID="4a1b633c-0572-43e3-ab65-9ecf41447261" containerID="c68602efeff905335e6a1e3ddd581dd939a34b71748c44244f1047aa1b43b3fa" exitCode=1 Jan 05 20:29:52 crc kubenswrapper[4754]: I0105 20:29:52.061206 4754 scope.go:117] "RemoveContainer" containerID="c68602efeff905335e6a1e3ddd581dd939a34b71748c44244f1047aa1b43b3fa" Jan 05 20:29:52 crc kubenswrapper[4754]: E0105 20:29:52.061465 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-679bf7799d-5zzsg_openstack(4a1b633c-0572-43e3-ab65-9ecf41447261)\"" pod="openstack/heat-cfnapi-679bf7799d-5zzsg" podUID="4a1b633c-0572-43e3-ab65-9ecf41447261" Jan 05 20:29:52 crc kubenswrapper[4754]: I0105 20:29:52.061685 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-679bf7799d-5zzsg" event={"ID":"4a1b633c-0572-43e3-ab65-9ecf41447261","Type":"ContainerDied","Data":"c68602efeff905335e6a1e3ddd581dd939a34b71748c44244f1047aa1b43b3fa"} Jan 05 20:29:52 crc kubenswrapper[4754]: I0105 20:29:52.075371 4754 scope.go:117] "RemoveContainer" containerID="a9feb030b3639beeaa4dc42d0f8da3f0baf135bd2be117acdb1421fe661b2bf2" Jan 05 20:29:52 crc kubenswrapper[4754]: I0105 20:29:52.112249 4754 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/7b0677b2-c5d5-4a04-9f26-52aa89506809-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:52 crc kubenswrapper[4754]: I0105 20:29:52.112301 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b0677b2-c5d5-4a04-9f26-52aa89506809-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:52 crc kubenswrapper[4754]: I0105 20:29:52.112316 4754 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7b0677b2-c5d5-4a04-9f26-52aa89506809-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:52 crc kubenswrapper[4754]: I0105 20:29:52.112326 4754 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7b0677b2-c5d5-4a04-9f26-52aa89506809-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 05 20:29:52 crc kubenswrapper[4754]: I0105 20:29:52.114606 4754 scope.go:117] "RemoveContainer" containerID="e3613340ba344b24abe633054c3a99b123dea95ba3e13ab40e57eb2c003da79f" Jan 05 20:29:52 crc kubenswrapper[4754]: I0105 20:29:52.199802 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-5c4fdffcf-f4rzg" Jan 05 20:29:52 crc kubenswrapper[4754]: I0105 20:29:52.199850 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5c4fdffcf-f4rzg" Jan 05 20:29:52 crc kubenswrapper[4754]: I0105 20:29:52.245189 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-679bf7799d-5zzsg" Jan 05 20:29:52 crc kubenswrapper[4754]: I0105 20:29:52.387372 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-wwjkm"] Jan 05 20:29:52 crc kubenswrapper[4754]: I0105 20:29:52.394961 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-wwjkm"] Jan 05 20:29:52 crc 
kubenswrapper[4754]: I0105 20:29:52.723727 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 05 20:29:52 crc kubenswrapper[4754]: I0105 20:29:52.723789 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 05 20:29:52 crc kubenswrapper[4754]: I0105 20:29:52.783626 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 05 20:29:52 crc kubenswrapper[4754]: I0105 20:29:52.810087 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 05 20:29:53 crc kubenswrapper[4754]: I0105 20:29:53.072176 4754 scope.go:117] "RemoveContainer" containerID="c68602efeff905335e6a1e3ddd581dd939a34b71748c44244f1047aa1b43b3fa" Jan 05 20:29:53 crc kubenswrapper[4754]: E0105 20:29:53.072831 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-679bf7799d-5zzsg_openstack(4a1b633c-0572-43e3-ab65-9ecf41447261)\"" pod="openstack/heat-cfnapi-679bf7799d-5zzsg" podUID="4a1b633c-0572-43e3-ab65-9ecf41447261" Jan 05 20:29:53 crc kubenswrapper[4754]: I0105 20:29:53.073598 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"651e1d87-a791-4aab-92b8-68aae7da2a91","Type":"ContainerStarted","Data":"52d2c7a2a88ae2fda53e867ca33aa83507d377c7488d21cd259e8ce68ef79003"} Jan 05 20:29:53 crc kubenswrapper[4754]: I0105 20:29:53.073629 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"651e1d87-a791-4aab-92b8-68aae7da2a91","Type":"ContainerStarted","Data":"ad1cac56cc9fd3f6a3be7b666a9af3006f0c473b7bf06e122d90621cfe01b769"} Jan 05 20:29:53 crc kubenswrapper[4754]: I0105 
20:29:53.077900 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21c34859-60d5-47ab-93ea-b4837ddc28f1","Type":"ContainerStarted","Data":"839690ef5ca602be9b0825e6f5d25bc1b5bb3dcf102063cdfde215f70f5e449f"} Jan 05 20:29:53 crc kubenswrapper[4754]: I0105 20:29:53.078216 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="21c34859-60d5-47ab-93ea-b4837ddc28f1" containerName="ceilometer-central-agent" containerID="cri-o://c7dc8fc0c1a6c03f32fbf561094de694eced70fd5e553e8dbe66e460b8f362e4" gracePeriod=30 Jan 05 20:29:53 crc kubenswrapper[4754]: I0105 20:29:53.078310 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="21c34859-60d5-47ab-93ea-b4837ddc28f1" containerName="ceilometer-notification-agent" containerID="cri-o://6f31aeb72ff4d2dd058bf591a57836d09d50cc88d87087bfcc95ab1414ab401d" gracePeriod=30 Jan 05 20:29:53 crc kubenswrapper[4754]: I0105 20:29:53.078306 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="21c34859-60d5-47ab-93ea-b4837ddc28f1" containerName="sg-core" containerID="cri-o://6f7ce2859d7a45642fb669f936fc6625d5065865d723cdb5a81a3e9983abc458" gracePeriod=30 Jan 05 20:29:53 crc kubenswrapper[4754]: I0105 20:29:53.078358 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="21c34859-60d5-47ab-93ea-b4837ddc28f1" containerName="proxy-httpd" containerID="cri-o://839690ef5ca602be9b0825e6f5d25bc1b5bb3dcf102063cdfde215f70f5e449f" gracePeriod=30 Jan 05 20:29:53 crc kubenswrapper[4754]: I0105 20:29:53.078249 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 05 20:29:53 crc kubenswrapper[4754]: I0105 20:29:53.078394 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/glance-default-external-api-0" Jan 05 20:29:53 crc kubenswrapper[4754]: I0105 20:29:53.078565 4754 scope.go:117] "RemoveContainer" containerID="97e19229705f1c1725d7f1d55362ef865f8ec7d1a57daa00a95bc2d4988f61a7" Jan 05 20:29:53 crc kubenswrapper[4754]: E0105 20:29:53.078841 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-5c4fdffcf-f4rzg_openstack(71c4958d-a956-4009-8f00-e4f16cab1b6b)\"" pod="openstack/heat-api-5c4fdffcf-f4rzg" podUID="71c4958d-a956-4009-8f00-e4f16cab1b6b" Jan 05 20:29:53 crc kubenswrapper[4754]: I0105 20:29:53.133723 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.133702042 podStartE2EDuration="5.133702042s" podCreationTimestamp="2026-01-05 20:29:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:29:53.127309864 +0000 UTC m=+1479.836493738" watchObservedRunningTime="2026-01-05 20:29:53.133702042 +0000 UTC m=+1479.842885916" Jan 05 20:29:53 crc kubenswrapper[4754]: I0105 20:29:53.157080 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.566570367 podStartE2EDuration="11.157061633s" podCreationTimestamp="2026-01-05 20:29:42 +0000 UTC" firstStartedPulling="2026-01-05 20:29:46.193668231 +0000 UTC m=+1472.902852105" lastFinishedPulling="2026-01-05 20:29:52.784159507 +0000 UTC m=+1479.493343371" observedRunningTime="2026-01-05 20:29:53.148781737 +0000 UTC m=+1479.857965611" watchObservedRunningTime="2026-01-05 20:29:53.157061633 +0000 UTC m=+1479.866245517" Jan 05 20:29:53 crc kubenswrapper[4754]: I0105 20:29:53.603444 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b0677b2-c5d5-4a04-9f26-52aa89506809" 
path="/var/lib/kubelet/pods/7b0677b2-c5d5-4a04-9f26-52aa89506809/volumes" Jan 05 20:29:54 crc kubenswrapper[4754]: I0105 20:29:54.092932 4754 generic.go:334] "Generic (PLEG): container finished" podID="21c34859-60d5-47ab-93ea-b4837ddc28f1" containerID="839690ef5ca602be9b0825e6f5d25bc1b5bb3dcf102063cdfde215f70f5e449f" exitCode=0 Jan 05 20:29:54 crc kubenswrapper[4754]: I0105 20:29:54.092962 4754 generic.go:334] "Generic (PLEG): container finished" podID="21c34859-60d5-47ab-93ea-b4837ddc28f1" containerID="6f7ce2859d7a45642fb669f936fc6625d5065865d723cdb5a81a3e9983abc458" exitCode=2 Jan 05 20:29:54 crc kubenswrapper[4754]: I0105 20:29:54.092969 4754 generic.go:334] "Generic (PLEG): container finished" podID="21c34859-60d5-47ab-93ea-b4837ddc28f1" containerID="6f31aeb72ff4d2dd058bf591a57836d09d50cc88d87087bfcc95ab1414ab401d" exitCode=0 Jan 05 20:29:54 crc kubenswrapper[4754]: I0105 20:29:54.093038 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21c34859-60d5-47ab-93ea-b4837ddc28f1","Type":"ContainerDied","Data":"839690ef5ca602be9b0825e6f5d25bc1b5bb3dcf102063cdfde215f70f5e449f"} Jan 05 20:29:54 crc kubenswrapper[4754]: I0105 20:29:54.093102 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21c34859-60d5-47ab-93ea-b4837ddc28f1","Type":"ContainerDied","Data":"6f7ce2859d7a45642fb669f936fc6625d5065865d723cdb5a81a3e9983abc458"} Jan 05 20:29:54 crc kubenswrapper[4754]: I0105 20:29:54.093125 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21c34859-60d5-47ab-93ea-b4837ddc28f1","Type":"ContainerDied","Data":"6f31aeb72ff4d2dd058bf591a57836d09d50cc88d87087bfcc95ab1414ab401d"} Jan 05 20:29:54 crc kubenswrapper[4754]: I0105 20:29:54.094874 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" 
event={"ID":"91975736-d23e-479b-bd43-68b9b1b3e450","Type":"ContainerStarted","Data":"859f5cac75d84ff106cab8c5b7aa8a7724703664edcdef40612043bf5324fd7f"} Jan 05 20:29:54 crc kubenswrapper[4754]: I0105 20:29:54.095458 4754 scope.go:117] "RemoveContainer" containerID="c68602efeff905335e6a1e3ddd581dd939a34b71748c44244f1047aa1b43b3fa" Jan 05 20:29:54 crc kubenswrapper[4754]: E0105 20:29:54.095816 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-679bf7799d-5zzsg_openstack(4a1b633c-0572-43e3-ab65-9ecf41447261)\"" pod="openstack/heat-cfnapi-679bf7799d-5zzsg" podUID="4a1b633c-0572-43e3-ab65-9ecf41447261" Jan 05 20:29:54 crc kubenswrapper[4754]: I0105 20:29:54.130460 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.263287229 podStartE2EDuration="33.130436118s" podCreationTimestamp="2026-01-05 20:29:21 +0000 UTC" firstStartedPulling="2026-01-05 20:29:22.229082691 +0000 UTC m=+1448.938266565" lastFinishedPulling="2026-01-05 20:29:53.09623158 +0000 UTC m=+1479.805415454" observedRunningTime="2026-01-05 20:29:54.112683403 +0000 UTC m=+1480.821867327" watchObservedRunningTime="2026-01-05 20:29:54.130436118 +0000 UTC m=+1480.839619992" Jan 05 20:29:55 crc kubenswrapper[4754]: I0105 20:29:55.831728 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-6c8f698c95-f22ns" Jan 05 20:29:55 crc kubenswrapper[4754]: I0105 20:29:55.934223 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5c4fdffcf-f4rzg"] Jan 05 20:29:56 crc kubenswrapper[4754]: I0105 20:29:56.098020 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 05 20:29:56 crc kubenswrapper[4754]: I0105 20:29:56.102702 4754 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 05 20:29:56 crc kubenswrapper[4754]: I0105 20:29:56.286405 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-5bc975666b-pjcfg" Jan 05 20:29:56 crc kubenswrapper[4754]: I0105 20:29:56.416771 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-5f447fcb5f-cgsxz" Jan 05 20:29:56 crc kubenswrapper[4754]: I0105 20:29:56.473555 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-675955f8c4-92lhz" Jan 05 20:29:56 crc kubenswrapper[4754]: I0105 20:29:56.555612 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-679bf7799d-5zzsg"] Jan 05 20:29:56 crc kubenswrapper[4754]: I0105 20:29:56.708463 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5c4fdffcf-f4rzg" Jan 05 20:29:56 crc kubenswrapper[4754]: I0105 20:29:56.763925 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gwk7q"] Jan 05 20:29:56 crc kubenswrapper[4754]: E0105 20:29:56.764361 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b1a8ee9-a244-4b25-acf5-4ab0de607c0d" containerName="mariadb-database-create" Jan 05 20:29:56 crc kubenswrapper[4754]: I0105 20:29:56.764372 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b1a8ee9-a244-4b25-acf5-4ab0de607c0d" containerName="mariadb-database-create" Jan 05 20:29:56 crc kubenswrapper[4754]: E0105 20:29:56.764396 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71c4958d-a956-4009-8f00-e4f16cab1b6b" containerName="heat-api" Jan 05 20:29:56 crc kubenswrapper[4754]: I0105 20:29:56.764402 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="71c4958d-a956-4009-8f00-e4f16cab1b6b" containerName="heat-api" Jan 05 20:29:56 crc kubenswrapper[4754]: E0105 20:29:56.764417 
4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b26ee17-b21e-4bb3-8570-1de5435a6ea5" containerName="mariadb-account-create-update" Jan 05 20:29:56 crc kubenswrapper[4754]: I0105 20:29:56.764423 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b26ee17-b21e-4bb3-8570-1de5435a6ea5" containerName="mariadb-account-create-update" Jan 05 20:29:56 crc kubenswrapper[4754]: E0105 20:29:56.764434 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b0677b2-c5d5-4a04-9f26-52aa89506809" containerName="dnsmasq-dns" Jan 05 20:29:56 crc kubenswrapper[4754]: I0105 20:29:56.764440 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b0677b2-c5d5-4a04-9f26-52aa89506809" containerName="dnsmasq-dns" Jan 05 20:29:56 crc kubenswrapper[4754]: E0105 20:29:56.764449 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71c4958d-a956-4009-8f00-e4f16cab1b6b" containerName="heat-api" Jan 05 20:29:56 crc kubenswrapper[4754]: I0105 20:29:56.764455 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="71c4958d-a956-4009-8f00-e4f16cab1b6b" containerName="heat-api" Jan 05 20:29:56 crc kubenswrapper[4754]: E0105 20:29:56.764474 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b0677b2-c5d5-4a04-9f26-52aa89506809" containerName="init" Jan 05 20:29:56 crc kubenswrapper[4754]: I0105 20:29:56.764479 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b0677b2-c5d5-4a04-9f26-52aa89506809" containerName="init" Jan 05 20:29:56 crc kubenswrapper[4754]: E0105 20:29:56.764493 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c40bae7-8986-432e-8050-cec73db2bfdd" containerName="mariadb-account-create-update" Jan 05 20:29:56 crc kubenswrapper[4754]: I0105 20:29:56.764499 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c40bae7-8986-432e-8050-cec73db2bfdd" containerName="mariadb-account-create-update" Jan 05 20:29:56 crc kubenswrapper[4754]: E0105 20:29:56.764524 
4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="788477a1-9462-462e-ac96-5c6c5659437f" containerName="mariadb-database-create" Jan 05 20:29:56 crc kubenswrapper[4754]: I0105 20:29:56.764530 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="788477a1-9462-462e-ac96-5c6c5659437f" containerName="mariadb-database-create" Jan 05 20:29:56 crc kubenswrapper[4754]: I0105 20:29:56.764701 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="788477a1-9462-462e-ac96-5c6c5659437f" containerName="mariadb-database-create" Jan 05 20:29:56 crc kubenswrapper[4754]: I0105 20:29:56.764712 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="71c4958d-a956-4009-8f00-e4f16cab1b6b" containerName="heat-api" Jan 05 20:29:56 crc kubenswrapper[4754]: I0105 20:29:56.764722 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c40bae7-8986-432e-8050-cec73db2bfdd" containerName="mariadb-account-create-update" Jan 05 20:29:56 crc kubenswrapper[4754]: I0105 20:29:56.764733 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b26ee17-b21e-4bb3-8570-1de5435a6ea5" containerName="mariadb-account-create-update" Jan 05 20:29:56 crc kubenswrapper[4754]: I0105 20:29:56.764755 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="71c4958d-a956-4009-8f00-e4f16cab1b6b" containerName="heat-api" Jan 05 20:29:56 crc kubenswrapper[4754]: I0105 20:29:56.764768 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b0677b2-c5d5-4a04-9f26-52aa89506809" containerName="dnsmasq-dns" Jan 05 20:29:56 crc kubenswrapper[4754]: I0105 20:29:56.764775 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b1a8ee9-a244-4b25-acf5-4ab0de607c0d" containerName="mariadb-database-create" Jan 05 20:29:56 crc kubenswrapper[4754]: I0105 20:29:56.772853 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-gwk7q" Jan 05 20:29:56 crc kubenswrapper[4754]: I0105 20:29:56.776726 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-rmk5w" Jan 05 20:29:56 crc kubenswrapper[4754]: I0105 20:29:56.776917 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 05 20:29:56 crc kubenswrapper[4754]: I0105 20:29:56.777030 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 05 20:29:56 crc kubenswrapper[4754]: I0105 20:29:56.796248 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gwk7q"] Jan 05 20:29:56 crc kubenswrapper[4754]: I0105 20:29:56.842309 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71c4958d-a956-4009-8f00-e4f16cab1b6b-config-data\") pod \"71c4958d-a956-4009-8f00-e4f16cab1b6b\" (UID: \"71c4958d-a956-4009-8f00-e4f16cab1b6b\") " Jan 05 20:29:56 crc kubenswrapper[4754]: I0105 20:29:56.842363 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/71c4958d-a956-4009-8f00-e4f16cab1b6b-config-data-custom\") pod \"71c4958d-a956-4009-8f00-e4f16cab1b6b\" (UID: \"71c4958d-a956-4009-8f00-e4f16cab1b6b\") " Jan 05 20:29:56 crc kubenswrapper[4754]: I0105 20:29:56.842411 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2xdz\" (UniqueName: \"kubernetes.io/projected/71c4958d-a956-4009-8f00-e4f16cab1b6b-kube-api-access-d2xdz\") pod \"71c4958d-a956-4009-8f00-e4f16cab1b6b\" (UID: \"71c4958d-a956-4009-8f00-e4f16cab1b6b\") " Jan 05 20:29:56 crc kubenswrapper[4754]: I0105 20:29:56.842436 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71c4958d-a956-4009-8f00-e4f16cab1b6b-combined-ca-bundle\") pod \"71c4958d-a956-4009-8f00-e4f16cab1b6b\" (UID: \"71c4958d-a956-4009-8f00-e4f16cab1b6b\") "
Jan 05 20:29:56 crc kubenswrapper[4754]: I0105 20:29:56.842733 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1192f0c3-2df8-44ea-a767-937f965b46f3-config-data\") pod \"nova-cell0-conductor-db-sync-gwk7q\" (UID: \"1192f0c3-2df8-44ea-a767-937f965b46f3\") " pod="openstack/nova-cell0-conductor-db-sync-gwk7q"
Jan 05 20:29:56 crc kubenswrapper[4754]: I0105 20:29:56.842801 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1192f0c3-2df8-44ea-a767-937f965b46f3-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-gwk7q\" (UID: \"1192f0c3-2df8-44ea-a767-937f965b46f3\") " pod="openstack/nova-cell0-conductor-db-sync-gwk7q"
Jan 05 20:29:56 crc kubenswrapper[4754]: I0105 20:29:56.842832 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1192f0c3-2df8-44ea-a767-937f965b46f3-scripts\") pod \"nova-cell0-conductor-db-sync-gwk7q\" (UID: \"1192f0c3-2df8-44ea-a767-937f965b46f3\") " pod="openstack/nova-cell0-conductor-db-sync-gwk7q"
Jan 05 20:29:56 crc kubenswrapper[4754]: I0105 20:29:56.842918 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74q4c\" (UniqueName: \"kubernetes.io/projected/1192f0c3-2df8-44ea-a767-937f965b46f3-kube-api-access-74q4c\") pod \"nova-cell0-conductor-db-sync-gwk7q\" (UID: \"1192f0c3-2df8-44ea-a767-937f965b46f3\") " pod="openstack/nova-cell0-conductor-db-sync-gwk7q"
Jan 05 20:29:56 crc kubenswrapper[4754]: I0105 20:29:56.850267 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c4958d-a956-4009-8f00-e4f16cab1b6b-kube-api-access-d2xdz" (OuterVolumeSpecName: "kube-api-access-d2xdz") pod "71c4958d-a956-4009-8f00-e4f16cab1b6b" (UID: "71c4958d-a956-4009-8f00-e4f16cab1b6b"). InnerVolumeSpecName "kube-api-access-d2xdz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 20:29:56 crc kubenswrapper[4754]: I0105 20:29:56.867370 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71c4958d-a956-4009-8f00-e4f16cab1b6b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "71c4958d-a956-4009-8f00-e4f16cab1b6b" (UID: "71c4958d-a956-4009-8f00-e4f16cab1b6b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 20:29:56 crc kubenswrapper[4754]: I0105 20:29:56.923445 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71c4958d-a956-4009-8f00-e4f16cab1b6b-config-data" (OuterVolumeSpecName: "config-data") pod "71c4958d-a956-4009-8f00-e4f16cab1b6b" (UID: "71c4958d-a956-4009-8f00-e4f16cab1b6b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 20:29:56 crc kubenswrapper[4754]: I0105 20:29:56.944583 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74q4c\" (UniqueName: \"kubernetes.io/projected/1192f0c3-2df8-44ea-a767-937f965b46f3-kube-api-access-74q4c\") pod \"nova-cell0-conductor-db-sync-gwk7q\" (UID: \"1192f0c3-2df8-44ea-a767-937f965b46f3\") " pod="openstack/nova-cell0-conductor-db-sync-gwk7q"
Jan 05 20:29:56 crc kubenswrapper[4754]: I0105 20:29:56.944696 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1192f0c3-2df8-44ea-a767-937f965b46f3-config-data\") pod \"nova-cell0-conductor-db-sync-gwk7q\" (UID: \"1192f0c3-2df8-44ea-a767-937f965b46f3\") " pod="openstack/nova-cell0-conductor-db-sync-gwk7q"
Jan 05 20:29:56 crc kubenswrapper[4754]: I0105 20:29:56.944754 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1192f0c3-2df8-44ea-a767-937f965b46f3-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-gwk7q\" (UID: \"1192f0c3-2df8-44ea-a767-937f965b46f3\") " pod="openstack/nova-cell0-conductor-db-sync-gwk7q"
Jan 05 20:29:56 crc kubenswrapper[4754]: I0105 20:29:56.944785 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1192f0c3-2df8-44ea-a767-937f965b46f3-scripts\") pod \"nova-cell0-conductor-db-sync-gwk7q\" (UID: \"1192f0c3-2df8-44ea-a767-937f965b46f3\") " pod="openstack/nova-cell0-conductor-db-sync-gwk7q"
Jan 05 20:29:56 crc kubenswrapper[4754]: I0105 20:29:56.944897 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71c4958d-a956-4009-8f00-e4f16cab1b6b-config-data\") on node \"crc\" DevicePath \"\""
Jan 05 20:29:56 crc kubenswrapper[4754]: I0105 20:29:56.944908 4754 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/71c4958d-a956-4009-8f00-e4f16cab1b6b-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 05 20:29:56 crc kubenswrapper[4754]: I0105 20:29:56.944918 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2xdz\" (UniqueName: \"kubernetes.io/projected/71c4958d-a956-4009-8f00-e4f16cab1b6b-kube-api-access-d2xdz\") on node \"crc\" DevicePath \"\""
Jan 05 20:29:56 crc kubenswrapper[4754]: I0105 20:29:56.945415 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71c4958d-a956-4009-8f00-e4f16cab1b6b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "71c4958d-a956-4009-8f00-e4f16cab1b6b" (UID: "71c4958d-a956-4009-8f00-e4f16cab1b6b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 20:29:56 crc kubenswrapper[4754]: I0105 20:29:56.952046 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1192f0c3-2df8-44ea-a767-937f965b46f3-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-gwk7q\" (UID: \"1192f0c3-2df8-44ea-a767-937f965b46f3\") " pod="openstack/nova-cell0-conductor-db-sync-gwk7q"
Jan 05 20:29:56 crc kubenswrapper[4754]: I0105 20:29:56.958810 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1192f0c3-2df8-44ea-a767-937f965b46f3-config-data\") pod \"nova-cell0-conductor-db-sync-gwk7q\" (UID: \"1192f0c3-2df8-44ea-a767-937f965b46f3\") " pod="openstack/nova-cell0-conductor-db-sync-gwk7q"
Jan 05 20:29:56 crc kubenswrapper[4754]: I0105 20:29:56.967244 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1192f0c3-2df8-44ea-a767-937f965b46f3-scripts\") pod \"nova-cell0-conductor-db-sync-gwk7q\" (UID: \"1192f0c3-2df8-44ea-a767-937f965b46f3\") " pod="openstack/nova-cell0-conductor-db-sync-gwk7q"
Jan 05 20:29:56 crc kubenswrapper[4754]: I0105 20:29:56.970183 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74q4c\" (UniqueName: \"kubernetes.io/projected/1192f0c3-2df8-44ea-a767-937f965b46f3-kube-api-access-74q4c\") pod \"nova-cell0-conductor-db-sync-gwk7q\" (UID: \"1192f0c3-2df8-44ea-a767-937f965b46f3\") " pod="openstack/nova-cell0-conductor-db-sync-gwk7q"
Jan 05 20:29:57 crc kubenswrapper[4754]: I0105 20:29:57.047184 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71c4958d-a956-4009-8f00-e4f16cab1b6b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 05 20:29:57 crc kubenswrapper[4754]: I0105 20:29:57.096857 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-gwk7q"
Jan 05 20:29:57 crc kubenswrapper[4754]: I0105 20:29:57.113465 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-679bf7799d-5zzsg"
Jan 05 20:29:57 crc kubenswrapper[4754]: I0105 20:29:57.243500 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-78955f8dfd-95rvt"
Jan 05 20:29:57 crc kubenswrapper[4754]: I0105 20:29:57.245875 4754 generic.go:334] "Generic (PLEG): container finished" podID="21c34859-60d5-47ab-93ea-b4837ddc28f1" containerID="c7dc8fc0c1a6c03f32fbf561094de694eced70fd5e553e8dbe66e460b8f362e4" exitCode=0
Jan 05 20:29:57 crc kubenswrapper[4754]: I0105 20:29:57.245963 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21c34859-60d5-47ab-93ea-b4837ddc28f1","Type":"ContainerDied","Data":"c7dc8fc0c1a6c03f32fbf561094de694eced70fd5e553e8dbe66e460b8f362e4"}
Jan 05 20:29:57 crc kubenswrapper[4754]: I0105 20:29:57.248325 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-679bf7799d-5zzsg" event={"ID":"4a1b633c-0572-43e3-ab65-9ecf41447261","Type":"ContainerDied","Data":"5d7561adefc09ad924f390a10ddf0c7e70cbd032cd8378b0257d651c9e687bc2"}
Jan 05 20:29:57 crc kubenswrapper[4754]: I0105 20:29:57.248360 4754 scope.go:117] "RemoveContainer" containerID="c68602efeff905335e6a1e3ddd581dd939a34b71748c44244f1047aa1b43b3fa"
Jan 05 20:29:57 crc kubenswrapper[4754]: I0105 20:29:57.248463 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-679bf7799d-5zzsg"
Jan 05 20:29:57 crc kubenswrapper[4754]: I0105 20:29:57.253369 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5c4fdffcf-f4rzg" event={"ID":"71c4958d-a956-4009-8f00-e4f16cab1b6b","Type":"ContainerDied","Data":"6c65b7e203f23ae639339f8b26f2ab30f95ace012804727ef306b0e341f44904"}
Jan 05 20:29:57 crc kubenswrapper[4754]: I0105 20:29:57.253468 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5c4fdffcf-f4rzg"
Jan 05 20:29:57 crc kubenswrapper[4754]: I0105 20:29:57.278693 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7q48\" (UniqueName: \"kubernetes.io/projected/4a1b633c-0572-43e3-ab65-9ecf41447261-kube-api-access-j7q48\") pod \"4a1b633c-0572-43e3-ab65-9ecf41447261\" (UID: \"4a1b633c-0572-43e3-ab65-9ecf41447261\") "
Jan 05 20:29:57 crc kubenswrapper[4754]: I0105 20:29:57.278807 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a1b633c-0572-43e3-ab65-9ecf41447261-config-data-custom\") pod \"4a1b633c-0572-43e3-ab65-9ecf41447261\" (UID: \"4a1b633c-0572-43e3-ab65-9ecf41447261\") "
Jan 05 20:29:57 crc kubenswrapper[4754]: I0105 20:29:57.278927 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a1b633c-0572-43e3-ab65-9ecf41447261-config-data\") pod \"4a1b633c-0572-43e3-ab65-9ecf41447261\" (UID: \"4a1b633c-0572-43e3-ab65-9ecf41447261\") "
Jan 05 20:29:57 crc kubenswrapper[4754]: I0105 20:29:57.278956 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a1b633c-0572-43e3-ab65-9ecf41447261-combined-ca-bundle\") pod \"4a1b633c-0572-43e3-ab65-9ecf41447261\" (UID: \"4a1b633c-0572-43e3-ab65-9ecf41447261\") "
Jan 05 20:29:57 crc kubenswrapper[4754]: I0105 20:29:57.293806 4754 scope.go:117] "RemoveContainer" containerID="97e19229705f1c1725d7f1d55362ef865f8ec7d1a57daa00a95bc2d4988f61a7"
Jan 05 20:29:57 crc kubenswrapper[4754]: I0105 20:29:57.300168 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a1b633c-0572-43e3-ab65-9ecf41447261-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4a1b633c-0572-43e3-ab65-9ecf41447261" (UID: "4a1b633c-0572-43e3-ab65-9ecf41447261"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 20:29:57 crc kubenswrapper[4754]: I0105 20:29:57.324980 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-cfdcfcf78-fd4fw"]
Jan 05 20:29:57 crc kubenswrapper[4754]: I0105 20:29:57.325260 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-cfdcfcf78-fd4fw" podUID="081c0364-d64a-4977-b1ac-0fcb2bdc5bab" containerName="heat-engine" containerID="cri-o://9b9b025e8816c5ebd6a8dda065b81f0a731c993e87d3be9cc981be60bcd99c88" gracePeriod=60
Jan 05 20:29:57 crc kubenswrapper[4754]: I0105 20:29:57.337348 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a1b633c-0572-43e3-ab65-9ecf41447261-kube-api-access-j7q48" (OuterVolumeSpecName: "kube-api-access-j7q48") pod "4a1b633c-0572-43e3-ab65-9ecf41447261" (UID: "4a1b633c-0572-43e3-ab65-9ecf41447261"). InnerVolumeSpecName "kube-api-access-j7q48". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 20:29:57 crc kubenswrapper[4754]: E0105 20:29:57.353700 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9b9b025e8816c5ebd6a8dda065b81f0a731c993e87d3be9cc981be60bcd99c88" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Jan 05 20:29:57 crc kubenswrapper[4754]: E0105 20:29:57.366278 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9b9b025e8816c5ebd6a8dda065b81f0a731c993e87d3be9cc981be60bcd99c88" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Jan 05 20:29:57 crc kubenswrapper[4754]: E0105 20:29:57.368164 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9b9b025e8816c5ebd6a8dda065b81f0a731c993e87d3be9cc981be60bcd99c88" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Jan 05 20:29:57 crc kubenswrapper[4754]: E0105 20:29:57.368246 4754 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-cfdcfcf78-fd4fw" podUID="081c0364-d64a-4977-b1ac-0fcb2bdc5bab" containerName="heat-engine"
Jan 05 20:29:57 crc kubenswrapper[4754]: I0105 20:29:57.376130 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a1b633c-0572-43e3-ab65-9ecf41447261-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4a1b633c-0572-43e3-ab65-9ecf41447261" (UID: "4a1b633c-0572-43e3-ab65-9ecf41447261"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 20:29:57 crc kubenswrapper[4754]: I0105 20:29:57.384400 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a1b633c-0572-43e3-ab65-9ecf41447261-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 05 20:29:57 crc kubenswrapper[4754]: I0105 20:29:57.384430 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7q48\" (UniqueName: \"kubernetes.io/projected/4a1b633c-0572-43e3-ab65-9ecf41447261-kube-api-access-j7q48\") on node \"crc\" DevicePath \"\""
Jan 05 20:29:57 crc kubenswrapper[4754]: I0105 20:29:57.384442 4754 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a1b633c-0572-43e3-ab65-9ecf41447261-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 05 20:29:57 crc kubenswrapper[4754]: I0105 20:29:57.391958 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5c4fdffcf-f4rzg"]
Jan 05 20:29:57 crc kubenswrapper[4754]: I0105 20:29:57.407645 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-5c4fdffcf-f4rzg"]
Jan 05 20:29:57 crc kubenswrapper[4754]: I0105 20:29:57.417129 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a1b633c-0572-43e3-ab65-9ecf41447261-config-data" (OuterVolumeSpecName: "config-data") pod "4a1b633c-0572-43e3-ab65-9ecf41447261" (UID: "4a1b633c-0572-43e3-ab65-9ecf41447261"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 20:29:57 crc kubenswrapper[4754]: I0105 20:29:57.419304 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 05 20:29:57 crc kubenswrapper[4754]: I0105 20:29:57.488376 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a1b633c-0572-43e3-ab65-9ecf41447261-config-data\") on node \"crc\" DevicePath \"\""
Jan 05 20:29:57 crc kubenswrapper[4754]: I0105 20:29:57.590182 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21c34859-60d5-47ab-93ea-b4837ddc28f1-combined-ca-bundle\") pod \"21c34859-60d5-47ab-93ea-b4837ddc28f1\" (UID: \"21c34859-60d5-47ab-93ea-b4837ddc28f1\") "
Jan 05 20:29:57 crc kubenswrapper[4754]: I0105 20:29:57.592010 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8gtr\" (UniqueName: \"kubernetes.io/projected/21c34859-60d5-47ab-93ea-b4837ddc28f1-kube-api-access-s8gtr\") pod \"21c34859-60d5-47ab-93ea-b4837ddc28f1\" (UID: \"21c34859-60d5-47ab-93ea-b4837ddc28f1\") "
Jan 05 20:29:57 crc kubenswrapper[4754]: I0105 20:29:57.592168 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21c34859-60d5-47ab-93ea-b4837ddc28f1-log-httpd\") pod \"21c34859-60d5-47ab-93ea-b4837ddc28f1\" (UID: \"21c34859-60d5-47ab-93ea-b4837ddc28f1\") "
Jan 05 20:29:57 crc kubenswrapper[4754]: I0105 20:29:57.592244 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21c34859-60d5-47ab-93ea-b4837ddc28f1-config-data\") pod \"21c34859-60d5-47ab-93ea-b4837ddc28f1\" (UID: \"21c34859-60d5-47ab-93ea-b4837ddc28f1\") "
Jan 05 20:29:57 crc kubenswrapper[4754]: I0105 20:29:57.592317 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21c34859-60d5-47ab-93ea-b4837ddc28f1-run-httpd\") pod \"21c34859-60d5-47ab-93ea-b4837ddc28f1\" (UID: \"21c34859-60d5-47ab-93ea-b4837ddc28f1\") "
Jan 05 20:29:57 crc kubenswrapper[4754]: I0105 20:29:57.592382 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21c34859-60d5-47ab-93ea-b4837ddc28f1-scripts\") pod \"21c34859-60d5-47ab-93ea-b4837ddc28f1\" (UID: \"21c34859-60d5-47ab-93ea-b4837ddc28f1\") "
Jan 05 20:29:57 crc kubenswrapper[4754]: I0105 20:29:57.592420 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/21c34859-60d5-47ab-93ea-b4837ddc28f1-sg-core-conf-yaml\") pod \"21c34859-60d5-47ab-93ea-b4837ddc28f1\" (UID: \"21c34859-60d5-47ab-93ea-b4837ddc28f1\") "
Jan 05 20:29:57 crc kubenswrapper[4754]: I0105 20:29:57.594238 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21c34859-60d5-47ab-93ea-b4837ddc28f1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "21c34859-60d5-47ab-93ea-b4837ddc28f1" (UID: "21c34859-60d5-47ab-93ea-b4837ddc28f1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 05 20:29:57 crc kubenswrapper[4754]: I0105 20:29:57.595677 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21c34859-60d5-47ab-93ea-b4837ddc28f1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "21c34859-60d5-47ab-93ea-b4837ddc28f1" (UID: "21c34859-60d5-47ab-93ea-b4837ddc28f1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 05 20:29:57 crc kubenswrapper[4754]: I0105 20:29:57.600781 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21c34859-60d5-47ab-93ea-b4837ddc28f1-scripts" (OuterVolumeSpecName: "scripts") pod "21c34859-60d5-47ab-93ea-b4837ddc28f1" (UID: "21c34859-60d5-47ab-93ea-b4837ddc28f1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 20:29:57 crc kubenswrapper[4754]: I0105 20:29:57.601773 4754 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21c34859-60d5-47ab-93ea-b4837ddc28f1-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 05 20:29:57 crc kubenswrapper[4754]: I0105 20:29:57.601795 4754 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21c34859-60d5-47ab-93ea-b4837ddc28f1-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 05 20:29:57 crc kubenswrapper[4754]: I0105 20:29:57.601804 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21c34859-60d5-47ab-93ea-b4837ddc28f1-scripts\") on node \"crc\" DevicePath \"\""
Jan 05 20:29:57 crc kubenswrapper[4754]: I0105 20:29:57.604086 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c4958d-a956-4009-8f00-e4f16cab1b6b" path="/var/lib/kubelet/pods/71c4958d-a956-4009-8f00-e4f16cab1b6b/volumes"
Jan 05 20:29:57 crc kubenswrapper[4754]: I0105 20:29:57.612338 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21c34859-60d5-47ab-93ea-b4837ddc28f1-kube-api-access-s8gtr" (OuterVolumeSpecName: "kube-api-access-s8gtr") pod "21c34859-60d5-47ab-93ea-b4837ddc28f1" (UID: "21c34859-60d5-47ab-93ea-b4837ddc28f1"). InnerVolumeSpecName "kube-api-access-s8gtr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 20:29:57 crc kubenswrapper[4754]: I0105 20:29:57.710172 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8gtr\" (UniqueName: \"kubernetes.io/projected/21c34859-60d5-47ab-93ea-b4837ddc28f1-kube-api-access-s8gtr\") on node \"crc\" DevicePath \"\""
Jan 05 20:29:57 crc kubenswrapper[4754]: I0105 20:29:57.720122 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21c34859-60d5-47ab-93ea-b4837ddc28f1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "21c34859-60d5-47ab-93ea-b4837ddc28f1" (UID: "21c34859-60d5-47ab-93ea-b4837ddc28f1"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 20:29:57 crc kubenswrapper[4754]: I0105 20:29:57.752847 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21c34859-60d5-47ab-93ea-b4837ddc28f1-config-data" (OuterVolumeSpecName: "config-data") pod "21c34859-60d5-47ab-93ea-b4837ddc28f1" (UID: "21c34859-60d5-47ab-93ea-b4837ddc28f1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 20:29:57 crc kubenswrapper[4754]: I0105 20:29:57.801854 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21c34859-60d5-47ab-93ea-b4837ddc28f1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "21c34859-60d5-47ab-93ea-b4837ddc28f1" (UID: "21c34859-60d5-47ab-93ea-b4837ddc28f1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 20:29:57 crc kubenswrapper[4754]: I0105 20:29:57.813820 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21c34859-60d5-47ab-93ea-b4837ddc28f1-config-data\") on node \"crc\" DevicePath \"\""
Jan 05 20:29:57 crc kubenswrapper[4754]: I0105 20:29:57.813849 4754 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/21c34859-60d5-47ab-93ea-b4837ddc28f1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 05 20:29:57 crc kubenswrapper[4754]: I0105 20:29:57.813861 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21c34859-60d5-47ab-93ea-b4837ddc28f1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 05 20:29:57 crc kubenswrapper[4754]: I0105 20:29:57.855258 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gwk7q"]
Jan 05 20:29:57 crc kubenswrapper[4754]: I0105 20:29:57.872873 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-679bf7799d-5zzsg"]
Jan 05 20:29:57 crc kubenswrapper[4754]: I0105 20:29:57.890416 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-679bf7799d-5zzsg"]
Jan 05 20:29:58 crc kubenswrapper[4754]: I0105 20:29:58.308168 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-gwk7q" event={"ID":"1192f0c3-2df8-44ea-a767-937f965b46f3","Type":"ContainerStarted","Data":"83517df70a8738c069275f4ad270f59b3c1b6599519f3fdf002ba8e67f85a925"}
Jan 05 20:29:58 crc kubenswrapper[4754]: I0105 20:29:58.351177 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21c34859-60d5-47ab-93ea-b4837ddc28f1","Type":"ContainerDied","Data":"7f98a8df6d6dfe5241a822339b883bba91ebe8883d0b0a2102f24bb662753351"}
Jan 05 20:29:58 crc kubenswrapper[4754]: I0105 20:29:58.351224 4754 scope.go:117] "RemoveContainer" containerID="839690ef5ca602be9b0825e6f5d25bc1b5bb3dcf102063cdfde215f70f5e449f"
Jan 05 20:29:58 crc kubenswrapper[4754]: I0105 20:29:58.351330 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 05 20:29:58 crc kubenswrapper[4754]: I0105 20:29:58.410014 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 05 20:29:58 crc kubenswrapper[4754]: I0105 20:29:58.413494 4754 scope.go:117] "RemoveContainer" containerID="6f7ce2859d7a45642fb669f936fc6625d5065865d723cdb5a81a3e9983abc458"
Jan 05 20:29:58 crc kubenswrapper[4754]: I0105 20:29:58.423581 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 05 20:29:58 crc kubenswrapper[4754]: I0105 20:29:58.462867 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 05 20:29:58 crc kubenswrapper[4754]: E0105 20:29:58.463382 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a1b633c-0572-43e3-ab65-9ecf41447261" containerName="heat-cfnapi"
Jan 05 20:29:58 crc kubenswrapper[4754]: I0105 20:29:58.463398 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a1b633c-0572-43e3-ab65-9ecf41447261" containerName="heat-cfnapi"
Jan 05 20:29:58 crc kubenswrapper[4754]: E0105 20:29:58.463417 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21c34859-60d5-47ab-93ea-b4837ddc28f1" containerName="proxy-httpd"
Jan 05 20:29:58 crc kubenswrapper[4754]: I0105 20:29:58.463424 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="21c34859-60d5-47ab-93ea-b4837ddc28f1" containerName="proxy-httpd"
Jan 05 20:29:58 crc kubenswrapper[4754]: E0105 20:29:58.463438 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21c34859-60d5-47ab-93ea-b4837ddc28f1" containerName="ceilometer-notification-agent"
Jan 05 20:29:58 crc kubenswrapper[4754]: I0105 20:29:58.463446 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="21c34859-60d5-47ab-93ea-b4837ddc28f1" containerName="ceilometer-notification-agent"
Jan 05 20:29:58 crc kubenswrapper[4754]: E0105 20:29:58.463471 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21c34859-60d5-47ab-93ea-b4837ddc28f1" containerName="sg-core"
Jan 05 20:29:58 crc kubenswrapper[4754]: I0105 20:29:58.463477 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="21c34859-60d5-47ab-93ea-b4837ddc28f1" containerName="sg-core"
Jan 05 20:29:58 crc kubenswrapper[4754]: E0105 20:29:58.463493 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21c34859-60d5-47ab-93ea-b4837ddc28f1" containerName="ceilometer-central-agent"
Jan 05 20:29:58 crc kubenswrapper[4754]: I0105 20:29:58.463499 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="21c34859-60d5-47ab-93ea-b4837ddc28f1" containerName="ceilometer-central-agent"
Jan 05 20:29:58 crc kubenswrapper[4754]: I0105 20:29:58.463684 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="21c34859-60d5-47ab-93ea-b4837ddc28f1" containerName="ceilometer-notification-agent"
Jan 05 20:29:58 crc kubenswrapper[4754]: I0105 20:29:58.463702 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a1b633c-0572-43e3-ab65-9ecf41447261" containerName="heat-cfnapi"
Jan 05 20:29:58 crc kubenswrapper[4754]: I0105 20:29:58.463715 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a1b633c-0572-43e3-ab65-9ecf41447261" containerName="heat-cfnapi"
Jan 05 20:29:58 crc kubenswrapper[4754]: I0105 20:29:58.463729 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="21c34859-60d5-47ab-93ea-b4837ddc28f1" containerName="ceilometer-central-agent"
Jan 05 20:29:58 crc kubenswrapper[4754]: I0105 20:29:58.463737 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="21c34859-60d5-47ab-93ea-b4837ddc28f1" containerName="sg-core"
Jan 05 20:29:58 crc kubenswrapper[4754]: I0105 20:29:58.463752 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="21c34859-60d5-47ab-93ea-b4837ddc28f1" containerName="proxy-httpd"
Jan 05 20:29:58 crc kubenswrapper[4754]: E0105 20:29:58.463944 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a1b633c-0572-43e3-ab65-9ecf41447261" containerName="heat-cfnapi"
Jan 05 20:29:58 crc kubenswrapper[4754]: I0105 20:29:58.463952 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a1b633c-0572-43e3-ab65-9ecf41447261" containerName="heat-cfnapi"
Jan 05 20:29:58 crc kubenswrapper[4754]: I0105 20:29:58.465817 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 05 20:29:58 crc kubenswrapper[4754]: I0105 20:29:58.469035 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 05 20:29:58 crc kubenswrapper[4754]: I0105 20:29:58.470202 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 05 20:29:58 crc kubenswrapper[4754]: I0105 20:29:58.478322 4754 scope.go:117] "RemoveContainer" containerID="6f31aeb72ff4d2dd058bf591a57836d09d50cc88d87087bfcc95ab1414ab401d"
Jan 05 20:29:58 crc kubenswrapper[4754]: I0105 20:29:58.489998 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 05 20:29:58 crc kubenswrapper[4754]: I0105 20:29:58.552023 4754 scope.go:117] "RemoveContainer" containerID="c7dc8fc0c1a6c03f32fbf561094de694eced70fd5e553e8dbe66e460b8f362e4"
Jan 05 20:29:58 crc kubenswrapper[4754]: I0105 20:29:58.634453 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x59gh\" (UniqueName: \"kubernetes.io/projected/f7fb48ad-b043-4119-8a98-dbbd166c20e2-kube-api-access-x59gh\") pod \"ceilometer-0\" (UID: \"f7fb48ad-b043-4119-8a98-dbbd166c20e2\") " pod="openstack/ceilometer-0"
Jan 05 20:29:58 crc kubenswrapper[4754]: I0105 20:29:58.634614 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f7fb48ad-b043-4119-8a98-dbbd166c20e2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f7fb48ad-b043-4119-8a98-dbbd166c20e2\") " pod="openstack/ceilometer-0"
Jan 05 20:29:58 crc kubenswrapper[4754]: I0105 20:29:58.634652 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7fb48ad-b043-4119-8a98-dbbd166c20e2-log-httpd\") pod \"ceilometer-0\" (UID: \"f7fb48ad-b043-4119-8a98-dbbd166c20e2\") " pod="openstack/ceilometer-0"
Jan 05 20:29:58 crc kubenswrapper[4754]: I0105 20:29:58.634674 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7fb48ad-b043-4119-8a98-dbbd166c20e2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f7fb48ad-b043-4119-8a98-dbbd166c20e2\") " pod="openstack/ceilometer-0"
Jan 05 20:29:58 crc kubenswrapper[4754]: I0105 20:29:58.634862 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7fb48ad-b043-4119-8a98-dbbd166c20e2-scripts\") pod \"ceilometer-0\" (UID: \"f7fb48ad-b043-4119-8a98-dbbd166c20e2\") " pod="openstack/ceilometer-0"
Jan 05 20:29:58 crc kubenswrapper[4754]: I0105 20:29:58.634896 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7fb48ad-b043-4119-8a98-dbbd166c20e2-run-httpd\") pod \"ceilometer-0\" (UID: \"f7fb48ad-b043-4119-8a98-dbbd166c20e2\") " pod="openstack/ceilometer-0"
Jan 05 20:29:58 crc kubenswrapper[4754]: I0105 20:29:58.635111 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7fb48ad-b043-4119-8a98-dbbd166c20e2-config-data\") pod \"ceilometer-0\" (UID: \"f7fb48ad-b043-4119-8a98-dbbd166c20e2\") " pod="openstack/ceilometer-0"
Jan 05 20:29:58 crc kubenswrapper[4754]: I0105 20:29:58.737743 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7fb48ad-b043-4119-8a98-dbbd166c20e2-config-data\") pod \"ceilometer-0\" (UID: \"f7fb48ad-b043-4119-8a98-dbbd166c20e2\") " pod="openstack/ceilometer-0"
Jan 05 20:29:58 crc kubenswrapper[4754]: I0105 20:29:58.738438 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x59gh\" (UniqueName: \"kubernetes.io/projected/f7fb48ad-b043-4119-8a98-dbbd166c20e2-kube-api-access-x59gh\") pod \"ceilometer-0\" (UID: \"f7fb48ad-b043-4119-8a98-dbbd166c20e2\") " pod="openstack/ceilometer-0"
Jan 05 20:29:58 crc kubenswrapper[4754]: I0105 20:29:58.738589 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f7fb48ad-b043-4119-8a98-dbbd166c20e2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f7fb48ad-b043-4119-8a98-dbbd166c20e2\") " pod="openstack/ceilometer-0"
Jan 05 20:29:58 crc kubenswrapper[4754]: I0105 20:29:58.738617 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7fb48ad-b043-4119-8a98-dbbd166c20e2-log-httpd\") pod \"ceilometer-0\" (UID: \"f7fb48ad-b043-4119-8a98-dbbd166c20e2\") " pod="openstack/ceilometer-0"
Jan 05 20:29:58 crc kubenswrapper[4754]: I0105 20:29:58.738640 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7fb48ad-b043-4119-8a98-dbbd166c20e2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f7fb48ad-b043-4119-8a98-dbbd166c20e2\") " pod="openstack/ceilometer-0"
Jan 05 20:29:58 crc kubenswrapper[4754]: I0105 20:29:58.738683 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7fb48ad-b043-4119-8a98-dbbd166c20e2-scripts\") pod \"ceilometer-0\" (UID: \"f7fb48ad-b043-4119-8a98-dbbd166c20e2\") " pod="openstack/ceilometer-0"
Jan 05 20:29:58 crc kubenswrapper[4754]: I0105 20:29:58.738706 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7fb48ad-b043-4119-8a98-dbbd166c20e2-run-httpd\") pod \"ceilometer-0\" (UID: \"f7fb48ad-b043-4119-8a98-dbbd166c20e2\") " pod="openstack/ceilometer-0"
Jan 05 20:29:58 crc kubenswrapper[4754]: I0105 20:29:58.741959 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7fb48ad-b043-4119-8a98-dbbd166c20e2-log-httpd\") pod \"ceilometer-0\" (UID: \"f7fb48ad-b043-4119-8a98-dbbd166c20e2\") " pod="openstack/ceilometer-0"
Jan 05 20:29:58 crc kubenswrapper[4754]: I0105 20:29:58.742463 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7fb48ad-b043-4119-8a98-dbbd166c20e2-run-httpd\") pod \"ceilometer-0\" (UID: \"f7fb48ad-b043-4119-8a98-dbbd166c20e2\") " pod="openstack/ceilometer-0"
Jan 05 20:29:58 crc kubenswrapper[4754]: I0105 20:29:58.747395 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7fb48ad-b043-4119-8a98-dbbd166c20e2-config-data\") pod \"ceilometer-0\" (UID: \"f7fb48ad-b043-4119-8a98-dbbd166c20e2\") " pod="openstack/ceilometer-0"
Jan 05 20:29:58 crc kubenswrapper[4754]: I0105 20:29:58.750987 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7fb48ad-b043-4119-8a98-dbbd166c20e2-scripts\") pod \"ceilometer-0\" (UID: \"f7fb48ad-b043-4119-8a98-dbbd166c20e2\") 
" pod="openstack/ceilometer-0" Jan 05 20:29:58 crc kubenswrapper[4754]: I0105 20:29:58.754979 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f7fb48ad-b043-4119-8a98-dbbd166c20e2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f7fb48ad-b043-4119-8a98-dbbd166c20e2\") " pod="openstack/ceilometer-0" Jan 05 20:29:58 crc kubenswrapper[4754]: I0105 20:29:58.755913 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7fb48ad-b043-4119-8a98-dbbd166c20e2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f7fb48ad-b043-4119-8a98-dbbd166c20e2\") " pod="openstack/ceilometer-0" Jan 05 20:29:58 crc kubenswrapper[4754]: I0105 20:29:58.759037 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x59gh\" (UniqueName: \"kubernetes.io/projected/f7fb48ad-b043-4119-8a98-dbbd166c20e2-kube-api-access-x59gh\") pod \"ceilometer-0\" (UID: \"f7fb48ad-b043-4119-8a98-dbbd166c20e2\") " pod="openstack/ceilometer-0" Jan 05 20:29:58 crc kubenswrapper[4754]: I0105 20:29:58.791620 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 05 20:29:59 crc kubenswrapper[4754]: E0105 20:29:59.218566 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9b9b025e8816c5ebd6a8dda065b81f0a731c993e87d3be9cc981be60bcd99c88" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 05 20:29:59 crc kubenswrapper[4754]: E0105 20:29:59.220095 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9b9b025e8816c5ebd6a8dda065b81f0a731c993e87d3be9cc981be60bcd99c88" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 05 20:29:59 crc kubenswrapper[4754]: E0105 20:29:59.221969 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9b9b025e8816c5ebd6a8dda065b81f0a731c993e87d3be9cc981be60bcd99c88" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 05 20:29:59 crc kubenswrapper[4754]: E0105 20:29:59.222005 4754 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-cfdcfcf78-fd4fw" podUID="081c0364-d64a-4977-b1ac-0fcb2bdc5bab" containerName="heat-engine" Jan 05 20:29:59 crc kubenswrapper[4754]: I0105 20:29:59.466657 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:29:59 crc kubenswrapper[4754]: W0105 20:29:59.468269 4754 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7fb48ad_b043_4119_8a98_dbbd166c20e2.slice/crio-53245616af107a8bd083b72874fc4233a09f9f26c81d09c64c71dc50283164fc WatchSource:0}: Error finding container 53245616af107a8bd083b72874fc4233a09f9f26c81d09c64c71dc50283164fc: Status 404 returned error can't find the container with id 53245616af107a8bd083b72874fc4233a09f9f26c81d09c64c71dc50283164fc Jan 05 20:29:59 crc kubenswrapper[4754]: I0105 20:29:59.616009 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21c34859-60d5-47ab-93ea-b4837ddc28f1" path="/var/lib/kubelet/pods/21c34859-60d5-47ab-93ea-b4837ddc28f1/volumes" Jan 05 20:29:59 crc kubenswrapper[4754]: I0105 20:29:59.617716 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a1b633c-0572-43e3-ab65-9ecf41447261" path="/var/lib/kubelet/pods/4a1b633c-0572-43e3-ab65-9ecf41447261/volumes" Jan 05 20:29:59 crc kubenswrapper[4754]: I0105 20:29:59.649326 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 05 20:29:59 crc kubenswrapper[4754]: I0105 20:29:59.649376 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 05 20:29:59 crc kubenswrapper[4754]: I0105 20:29:59.718370 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 05 20:29:59 crc kubenswrapper[4754]: I0105 20:29:59.724764 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 05 20:30:00 crc kubenswrapper[4754]: I0105 20:30:00.207302 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460750-bxcpg"] Jan 05 20:30:00 crc kubenswrapper[4754]: I0105 20:30:00.209400 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460750-bxcpg" Jan 05 20:30:00 crc kubenswrapper[4754]: I0105 20:30:00.212969 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 05 20:30:00 crc kubenswrapper[4754]: I0105 20:30:00.223136 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 05 20:30:00 crc kubenswrapper[4754]: I0105 20:30:00.223469 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460750-bxcpg"] Jan 05 20:30:00 crc kubenswrapper[4754]: I0105 20:30:00.380844 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/623e75ec-7533-4b8c-8869-bb4f48ff07a9-config-volume\") pod \"collect-profiles-29460750-bxcpg\" (UID: \"623e75ec-7533-4b8c-8869-bb4f48ff07a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460750-bxcpg" Jan 05 20:30:00 crc kubenswrapper[4754]: I0105 20:30:00.380926 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28zff\" (UniqueName: \"kubernetes.io/projected/623e75ec-7533-4b8c-8869-bb4f48ff07a9-kube-api-access-28zff\") pod \"collect-profiles-29460750-bxcpg\" (UID: \"623e75ec-7533-4b8c-8869-bb4f48ff07a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460750-bxcpg" Jan 05 20:30:00 crc kubenswrapper[4754]: I0105 20:30:00.381204 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/623e75ec-7533-4b8c-8869-bb4f48ff07a9-secret-volume\") pod \"collect-profiles-29460750-bxcpg\" (UID: \"623e75ec-7533-4b8c-8869-bb4f48ff07a9\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29460750-bxcpg" Jan 05 20:30:00 crc kubenswrapper[4754]: I0105 20:30:00.459856 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7fb48ad-b043-4119-8a98-dbbd166c20e2","Type":"ContainerStarted","Data":"53245616af107a8bd083b72874fc4233a09f9f26c81d09c64c71dc50283164fc"} Jan 05 20:30:00 crc kubenswrapper[4754]: I0105 20:30:00.460232 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 05 20:30:00 crc kubenswrapper[4754]: I0105 20:30:00.461319 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 05 20:30:00 crc kubenswrapper[4754]: I0105 20:30:00.484201 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/623e75ec-7533-4b8c-8869-bb4f48ff07a9-secret-volume\") pod \"collect-profiles-29460750-bxcpg\" (UID: \"623e75ec-7533-4b8c-8869-bb4f48ff07a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460750-bxcpg" Jan 05 20:30:00 crc kubenswrapper[4754]: I0105 20:30:00.484251 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/623e75ec-7533-4b8c-8869-bb4f48ff07a9-config-volume\") pod \"collect-profiles-29460750-bxcpg\" (UID: \"623e75ec-7533-4b8c-8869-bb4f48ff07a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460750-bxcpg" Jan 05 20:30:00 crc kubenswrapper[4754]: I0105 20:30:00.484326 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28zff\" (UniqueName: \"kubernetes.io/projected/623e75ec-7533-4b8c-8869-bb4f48ff07a9-kube-api-access-28zff\") pod \"collect-profiles-29460750-bxcpg\" (UID: \"623e75ec-7533-4b8c-8869-bb4f48ff07a9\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29460750-bxcpg" Jan 05 20:30:00 crc kubenswrapper[4754]: I0105 20:30:00.486144 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/623e75ec-7533-4b8c-8869-bb4f48ff07a9-config-volume\") pod \"collect-profiles-29460750-bxcpg\" (UID: \"623e75ec-7533-4b8c-8869-bb4f48ff07a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460750-bxcpg" Jan 05 20:30:00 crc kubenswrapper[4754]: I0105 20:30:00.490044 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/623e75ec-7533-4b8c-8869-bb4f48ff07a9-secret-volume\") pod \"collect-profiles-29460750-bxcpg\" (UID: \"623e75ec-7533-4b8c-8869-bb4f48ff07a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460750-bxcpg" Jan 05 20:30:00 crc kubenswrapper[4754]: I0105 20:30:00.507219 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28zff\" (UniqueName: \"kubernetes.io/projected/623e75ec-7533-4b8c-8869-bb4f48ff07a9-kube-api-access-28zff\") pod \"collect-profiles-29460750-bxcpg\" (UID: \"623e75ec-7533-4b8c-8869-bb4f48ff07a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460750-bxcpg" Jan 05 20:30:00 crc kubenswrapper[4754]: I0105 20:30:00.539416 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460750-bxcpg" Jan 05 20:30:01 crc kubenswrapper[4754]: I0105 20:30:01.301007 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460750-bxcpg"] Jan 05 20:30:01 crc kubenswrapper[4754]: I0105 20:30:01.477121 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7fb48ad-b043-4119-8a98-dbbd166c20e2","Type":"ContainerStarted","Data":"9ed25582e678df0bfb1c163b3ab9a1b63c1648151095697fe695a42578b2f76a"} Jan 05 20:30:01 crc kubenswrapper[4754]: I0105 20:30:01.479584 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460750-bxcpg" event={"ID":"623e75ec-7533-4b8c-8869-bb4f48ff07a9","Type":"ContainerStarted","Data":"30a198ef9ccb9d71f41d3c94d6ff0fa8ac14bca1ee8c07cd86837b74c85861e7"} Jan 05 20:30:02 crc kubenswrapper[4754]: I0105 20:30:02.502497 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7fb48ad-b043-4119-8a98-dbbd166c20e2","Type":"ContainerStarted","Data":"2b32e86ab52abeb5c17dcd8a40d47a4900fe89491a177db1e87d2f3dbe127484"} Jan 05 20:30:02 crc kubenswrapper[4754]: I0105 20:30:02.508619 4754 generic.go:334] "Generic (PLEG): container finished" podID="623e75ec-7533-4b8c-8869-bb4f48ff07a9" containerID="2fa8e0152a442e236ef3b1008e34222f49f57ad76f62aa8a4c44de8bba1ae353" exitCode=0 Jan 05 20:30:02 crc kubenswrapper[4754]: I0105 20:30:02.508667 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460750-bxcpg" event={"ID":"623e75ec-7533-4b8c-8869-bb4f48ff07a9","Type":"ContainerDied","Data":"2fa8e0152a442e236ef3b1008e34222f49f57ad76f62aa8a4c44de8bba1ae353"} Jan 05 20:30:03 crc kubenswrapper[4754]: I0105 20:30:03.521244 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"f7fb48ad-b043-4119-8a98-dbbd166c20e2","Type":"ContainerStarted","Data":"514bfe13c7e1cb067fd15b30adc7e84428fa40f9e6228c1a1741d84dcfe90735"} Jan 05 20:30:03 crc kubenswrapper[4754]: I0105 20:30:03.779776 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 05 20:30:03 crc kubenswrapper[4754]: I0105 20:30:03.779896 4754 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 05 20:30:04 crc kubenswrapper[4754]: I0105 20:30:04.052039 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460750-bxcpg" Jan 05 20:30:04 crc kubenswrapper[4754]: I0105 20:30:04.092001 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 05 20:30:04 crc kubenswrapper[4754]: I0105 20:30:04.206143 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28zff\" (UniqueName: \"kubernetes.io/projected/623e75ec-7533-4b8c-8869-bb4f48ff07a9-kube-api-access-28zff\") pod \"623e75ec-7533-4b8c-8869-bb4f48ff07a9\" (UID: \"623e75ec-7533-4b8c-8869-bb4f48ff07a9\") " Jan 05 20:30:04 crc kubenswrapper[4754]: I0105 20:30:04.207227 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/623e75ec-7533-4b8c-8869-bb4f48ff07a9-secret-volume\") pod \"623e75ec-7533-4b8c-8869-bb4f48ff07a9\" (UID: \"623e75ec-7533-4b8c-8869-bb4f48ff07a9\") " Jan 05 20:30:04 crc kubenswrapper[4754]: I0105 20:30:04.207376 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/623e75ec-7533-4b8c-8869-bb4f48ff07a9-config-volume\") pod \"623e75ec-7533-4b8c-8869-bb4f48ff07a9\" (UID: \"623e75ec-7533-4b8c-8869-bb4f48ff07a9\") " Jan 05 20:30:04 crc kubenswrapper[4754]: I0105 
20:30:04.220559 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/623e75ec-7533-4b8c-8869-bb4f48ff07a9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "623e75ec-7533-4b8c-8869-bb4f48ff07a9" (UID: "623e75ec-7533-4b8c-8869-bb4f48ff07a9"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:30:04 crc kubenswrapper[4754]: I0105 20:30:04.227638 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/623e75ec-7533-4b8c-8869-bb4f48ff07a9-config-volume" (OuterVolumeSpecName: "config-volume") pod "623e75ec-7533-4b8c-8869-bb4f48ff07a9" (UID: "623e75ec-7533-4b8c-8869-bb4f48ff07a9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:30:04 crc kubenswrapper[4754]: I0105 20:30:04.245501 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/623e75ec-7533-4b8c-8869-bb4f48ff07a9-kube-api-access-28zff" (OuterVolumeSpecName: "kube-api-access-28zff") pod "623e75ec-7533-4b8c-8869-bb4f48ff07a9" (UID: "623e75ec-7533-4b8c-8869-bb4f48ff07a9"). InnerVolumeSpecName "kube-api-access-28zff". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:30:04 crc kubenswrapper[4754]: I0105 20:30:04.319930 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28zff\" (UniqueName: \"kubernetes.io/projected/623e75ec-7533-4b8c-8869-bb4f48ff07a9-kube-api-access-28zff\") on node \"crc\" DevicePath \"\"" Jan 05 20:30:04 crc kubenswrapper[4754]: I0105 20:30:04.319964 4754 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/623e75ec-7533-4b8c-8869-bb4f48ff07a9-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 05 20:30:04 crc kubenswrapper[4754]: I0105 20:30:04.319974 4754 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/623e75ec-7533-4b8c-8869-bb4f48ff07a9-config-volume\") on node \"crc\" DevicePath \"\"" Jan 05 20:30:04 crc kubenswrapper[4754]: I0105 20:30:04.552947 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460750-bxcpg" Jan 05 20:30:04 crc kubenswrapper[4754]: I0105 20:30:04.557701 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460750-bxcpg" event={"ID":"623e75ec-7533-4b8c-8869-bb4f48ff07a9","Type":"ContainerDied","Data":"30a198ef9ccb9d71f41d3c94d6ff0fa8ac14bca1ee8c07cd86837b74c85861e7"} Jan 05 20:30:04 crc kubenswrapper[4754]: I0105 20:30:04.557754 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30a198ef9ccb9d71f41d3c94d6ff0fa8ac14bca1ee8c07cd86837b74c85861e7" Jan 05 20:30:05 crc kubenswrapper[4754]: I0105 20:30:05.580711 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7fb48ad-b043-4119-8a98-dbbd166c20e2","Type":"ContainerStarted","Data":"bda1abaf74c1789c7613a58cf0498c2aa25dbcfd156915db9c94fe108951a0ea"} Jan 05 20:30:05 crc kubenswrapper[4754]: I0105 
20:30:05.580971 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 05 20:30:05 crc kubenswrapper[4754]: I0105 20:30:05.606150 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.21640655 podStartE2EDuration="7.606133594s" podCreationTimestamp="2026-01-05 20:29:58 +0000 UTC" firstStartedPulling="2026-01-05 20:29:59.471615813 +0000 UTC m=+1486.180799687" lastFinishedPulling="2026-01-05 20:30:04.861342857 +0000 UTC m=+1491.570526731" observedRunningTime="2026-01-05 20:30:05.603628188 +0000 UTC m=+1492.312812062" watchObservedRunningTime="2026-01-05 20:30:05.606133594 +0000 UTC m=+1492.315317468" Jan 05 20:30:06 crc kubenswrapper[4754]: I0105 20:30:06.597121 4754 generic.go:334] "Generic (PLEG): container finished" podID="081c0364-d64a-4977-b1ac-0fcb2bdc5bab" containerID="9b9b025e8816c5ebd6a8dda065b81f0a731c993e87d3be9cc981be60bcd99c88" exitCode=0 Jan 05 20:30:06 crc kubenswrapper[4754]: I0105 20:30:06.598476 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-cfdcfcf78-fd4fw" event={"ID":"081c0364-d64a-4977-b1ac-0fcb2bdc5bab","Type":"ContainerDied","Data":"9b9b025e8816c5ebd6a8dda065b81f0a731c993e87d3be9cc981be60bcd99c88"} Jan 05 20:30:07 crc kubenswrapper[4754]: I0105 20:30:07.011391 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-cfdcfcf78-fd4fw" Jan 05 20:30:07 crc kubenswrapper[4754]: I0105 20:30:07.100636 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/081c0364-d64a-4977-b1ac-0fcb2bdc5bab-config-data-custom\") pod \"081c0364-d64a-4977-b1ac-0fcb2bdc5bab\" (UID: \"081c0364-d64a-4977-b1ac-0fcb2bdc5bab\") " Jan 05 20:30:07 crc kubenswrapper[4754]: I0105 20:30:07.100741 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knm88\" (UniqueName: \"kubernetes.io/projected/081c0364-d64a-4977-b1ac-0fcb2bdc5bab-kube-api-access-knm88\") pod \"081c0364-d64a-4977-b1ac-0fcb2bdc5bab\" (UID: \"081c0364-d64a-4977-b1ac-0fcb2bdc5bab\") " Jan 05 20:30:07 crc kubenswrapper[4754]: I0105 20:30:07.100777 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/081c0364-d64a-4977-b1ac-0fcb2bdc5bab-combined-ca-bundle\") pod \"081c0364-d64a-4977-b1ac-0fcb2bdc5bab\" (UID: \"081c0364-d64a-4977-b1ac-0fcb2bdc5bab\") " Jan 05 20:30:07 crc kubenswrapper[4754]: I0105 20:30:07.101037 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/081c0364-d64a-4977-b1ac-0fcb2bdc5bab-config-data\") pod \"081c0364-d64a-4977-b1ac-0fcb2bdc5bab\" (UID: \"081c0364-d64a-4977-b1ac-0fcb2bdc5bab\") " Jan 05 20:30:07 crc kubenswrapper[4754]: I0105 20:30:07.108945 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/081c0364-d64a-4977-b1ac-0fcb2bdc5bab-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "081c0364-d64a-4977-b1ac-0fcb2bdc5bab" (UID: "081c0364-d64a-4977-b1ac-0fcb2bdc5bab"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:30:07 crc kubenswrapper[4754]: I0105 20:30:07.111429 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/081c0364-d64a-4977-b1ac-0fcb2bdc5bab-kube-api-access-knm88" (OuterVolumeSpecName: "kube-api-access-knm88") pod "081c0364-d64a-4977-b1ac-0fcb2bdc5bab" (UID: "081c0364-d64a-4977-b1ac-0fcb2bdc5bab"). InnerVolumeSpecName "kube-api-access-knm88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:30:07 crc kubenswrapper[4754]: I0105 20:30:07.135196 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/081c0364-d64a-4977-b1ac-0fcb2bdc5bab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "081c0364-d64a-4977-b1ac-0fcb2bdc5bab" (UID: "081c0364-d64a-4977-b1ac-0fcb2bdc5bab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:30:07 crc kubenswrapper[4754]: I0105 20:30:07.184134 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/081c0364-d64a-4977-b1ac-0fcb2bdc5bab-config-data" (OuterVolumeSpecName: "config-data") pod "081c0364-d64a-4977-b1ac-0fcb2bdc5bab" (UID: "081c0364-d64a-4977-b1ac-0fcb2bdc5bab"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:30:07 crc kubenswrapper[4754]: I0105 20:30:07.203226 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/081c0364-d64a-4977-b1ac-0fcb2bdc5bab-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 20:30:07 crc kubenswrapper[4754]: I0105 20:30:07.203558 4754 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/081c0364-d64a-4977-b1ac-0fcb2bdc5bab-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 05 20:30:07 crc kubenswrapper[4754]: I0105 20:30:07.203570 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knm88\" (UniqueName: \"kubernetes.io/projected/081c0364-d64a-4977-b1ac-0fcb2bdc5bab-kube-api-access-knm88\") on node \"crc\" DevicePath \"\"" Jan 05 20:30:07 crc kubenswrapper[4754]: I0105 20:30:07.203579 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/081c0364-d64a-4977-b1ac-0fcb2bdc5bab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:30:07 crc kubenswrapper[4754]: I0105 20:30:07.653082 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-cfdcfcf78-fd4fw" event={"ID":"081c0364-d64a-4977-b1ac-0fcb2bdc5bab","Type":"ContainerDied","Data":"dd8a8adb04752bb2482089147cc5800771d4fbe0623c86e34f5eeeb30bd8bd9d"} Jan 05 20:30:07 crc kubenswrapper[4754]: I0105 20:30:07.653187 4754 scope.go:117] "RemoveContainer" containerID="9b9b025e8816c5ebd6a8dda065b81f0a731c993e87d3be9cc981be60bcd99c88" Jan 05 20:30:07 crc kubenswrapper[4754]: I0105 20:30:07.653543 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-cfdcfcf78-fd4fw" Jan 05 20:30:07 crc kubenswrapper[4754]: I0105 20:30:07.942890 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-cfdcfcf78-fd4fw"] Jan 05 20:30:07 crc kubenswrapper[4754]: I0105 20:30:07.965564 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-cfdcfcf78-fd4fw"] Jan 05 20:30:08 crc kubenswrapper[4754]: I0105 20:30:08.175280 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:30:08 crc kubenswrapper[4754]: I0105 20:30:08.175539 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f7fb48ad-b043-4119-8a98-dbbd166c20e2" containerName="ceilometer-central-agent" containerID="cri-o://9ed25582e678df0bfb1c163b3ab9a1b63c1648151095697fe695a42578b2f76a" gracePeriod=30 Jan 05 20:30:08 crc kubenswrapper[4754]: I0105 20:30:08.176283 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f7fb48ad-b043-4119-8a98-dbbd166c20e2" containerName="proxy-httpd" containerID="cri-o://bda1abaf74c1789c7613a58cf0498c2aa25dbcfd156915db9c94fe108951a0ea" gracePeriod=30 Jan 05 20:30:08 crc kubenswrapper[4754]: I0105 20:30:08.176470 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f7fb48ad-b043-4119-8a98-dbbd166c20e2" containerName="ceilometer-notification-agent" containerID="cri-o://2b32e86ab52abeb5c17dcd8a40d47a4900fe89491a177db1e87d2f3dbe127484" gracePeriod=30 Jan 05 20:30:08 crc kubenswrapper[4754]: I0105 20:30:08.176517 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f7fb48ad-b043-4119-8a98-dbbd166c20e2" containerName="sg-core" containerID="cri-o://514bfe13c7e1cb067fd15b30adc7e84428fa40f9e6228c1a1741d84dcfe90735" gracePeriod=30 Jan 05 20:30:08 crc kubenswrapper[4754]: I0105 
20:30:08.672707 4754 generic.go:334] "Generic (PLEG): container finished" podID="f7fb48ad-b043-4119-8a98-dbbd166c20e2" containerID="bda1abaf74c1789c7613a58cf0498c2aa25dbcfd156915db9c94fe108951a0ea" exitCode=0 Jan 05 20:30:08 crc kubenswrapper[4754]: I0105 20:30:08.673154 4754 generic.go:334] "Generic (PLEG): container finished" podID="f7fb48ad-b043-4119-8a98-dbbd166c20e2" containerID="514bfe13c7e1cb067fd15b30adc7e84428fa40f9e6228c1a1741d84dcfe90735" exitCode=2 Jan 05 20:30:08 crc kubenswrapper[4754]: I0105 20:30:08.673162 4754 generic.go:334] "Generic (PLEG): container finished" podID="f7fb48ad-b043-4119-8a98-dbbd166c20e2" containerID="2b32e86ab52abeb5c17dcd8a40d47a4900fe89491a177db1e87d2f3dbe127484" exitCode=0 Jan 05 20:30:08 crc kubenswrapper[4754]: I0105 20:30:08.672786 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7fb48ad-b043-4119-8a98-dbbd166c20e2","Type":"ContainerDied","Data":"bda1abaf74c1789c7613a58cf0498c2aa25dbcfd156915db9c94fe108951a0ea"} Jan 05 20:30:08 crc kubenswrapper[4754]: I0105 20:30:08.673220 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7fb48ad-b043-4119-8a98-dbbd166c20e2","Type":"ContainerDied","Data":"514bfe13c7e1cb067fd15b30adc7e84428fa40f9e6228c1a1741d84dcfe90735"} Jan 05 20:30:08 crc kubenswrapper[4754]: I0105 20:30:08.673237 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7fb48ad-b043-4119-8a98-dbbd166c20e2","Type":"ContainerDied","Data":"2b32e86ab52abeb5c17dcd8a40d47a4900fe89491a177db1e87d2f3dbe127484"} Jan 05 20:30:09 crc kubenswrapper[4754]: I0105 20:30:09.614863 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="081c0364-d64a-4977-b1ac-0fcb2bdc5bab" path="/var/lib/kubelet/pods/081c0364-d64a-4977-b1ac-0fcb2bdc5bab/volumes" Jan 05 20:30:17 crc kubenswrapper[4754]: I0105 20:30:17.844042 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-conductor-db-sync-gwk7q" event={"ID":"1192f0c3-2df8-44ea-a767-937f965b46f3","Type":"ContainerStarted","Data":"24c367b8e96f1d402bc3faae607541f034766953f4afd27b72c9e6c7b7b9256a"} Jan 05 20:30:17 crc kubenswrapper[4754]: I0105 20:30:17.865096 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-gwk7q" podStartSLOduration=3.284374378 podStartE2EDuration="21.865077452s" podCreationTimestamp="2026-01-05 20:29:56 +0000 UTC" firstStartedPulling="2026-01-05 20:29:57.764771227 +0000 UTC m=+1484.473955101" lastFinishedPulling="2026-01-05 20:30:16.345474311 +0000 UTC m=+1503.054658175" observedRunningTime="2026-01-05 20:30:17.856164908 +0000 UTC m=+1504.565348792" watchObservedRunningTime="2026-01-05 20:30:17.865077452 +0000 UTC m=+1504.574261326" Jan 05 20:30:18 crc kubenswrapper[4754]: I0105 20:30:18.108925 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 20:30:18 crc kubenswrapper[4754]: I0105 20:30:18.108983 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 20:30:18 crc kubenswrapper[4754]: I0105 20:30:18.745275 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lqlmh"] Jan 05 20:30:18 crc kubenswrapper[4754]: E0105 20:30:18.745809 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="081c0364-d64a-4977-b1ac-0fcb2bdc5bab" containerName="heat-engine" Jan 05 20:30:18 crc kubenswrapper[4754]: I0105 
20:30:18.745828 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="081c0364-d64a-4977-b1ac-0fcb2bdc5bab" containerName="heat-engine" Jan 05 20:30:18 crc kubenswrapper[4754]: E0105 20:30:18.745851 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="623e75ec-7533-4b8c-8869-bb4f48ff07a9" containerName="collect-profiles" Jan 05 20:30:18 crc kubenswrapper[4754]: I0105 20:30:18.745858 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="623e75ec-7533-4b8c-8869-bb4f48ff07a9" containerName="collect-profiles" Jan 05 20:30:18 crc kubenswrapper[4754]: I0105 20:30:18.746071 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="081c0364-d64a-4977-b1ac-0fcb2bdc5bab" containerName="heat-engine" Jan 05 20:30:18 crc kubenswrapper[4754]: I0105 20:30:18.746104 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="623e75ec-7533-4b8c-8869-bb4f48ff07a9" containerName="collect-profiles" Jan 05 20:30:18 crc kubenswrapper[4754]: I0105 20:30:18.747772 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lqlmh" Jan 05 20:30:18 crc kubenswrapper[4754]: I0105 20:30:18.779520 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lqlmh"] Jan 05 20:30:18 crc kubenswrapper[4754]: I0105 20:30:18.861141 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfd6r\" (UniqueName: \"kubernetes.io/projected/320790ad-397e-4602-a40f-cb5c17f4f058-kube-api-access-kfd6r\") pod \"redhat-marketplace-lqlmh\" (UID: \"320790ad-397e-4602-a40f-cb5c17f4f058\") " pod="openshift-marketplace/redhat-marketplace-lqlmh" Jan 05 20:30:18 crc kubenswrapper[4754]: I0105 20:30:18.861472 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/320790ad-397e-4602-a40f-cb5c17f4f058-catalog-content\") pod \"redhat-marketplace-lqlmh\" (UID: \"320790ad-397e-4602-a40f-cb5c17f4f058\") " pod="openshift-marketplace/redhat-marketplace-lqlmh" Jan 05 20:30:18 crc kubenswrapper[4754]: I0105 20:30:18.861525 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/320790ad-397e-4602-a40f-cb5c17f4f058-utilities\") pod \"redhat-marketplace-lqlmh\" (UID: \"320790ad-397e-4602-a40f-cb5c17f4f058\") " pod="openshift-marketplace/redhat-marketplace-lqlmh" Jan 05 20:30:18 crc kubenswrapper[4754]: I0105 20:30:18.964462 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfd6r\" (UniqueName: \"kubernetes.io/projected/320790ad-397e-4602-a40f-cb5c17f4f058-kube-api-access-kfd6r\") pod \"redhat-marketplace-lqlmh\" (UID: \"320790ad-397e-4602-a40f-cb5c17f4f058\") " pod="openshift-marketplace/redhat-marketplace-lqlmh" Jan 05 20:30:18 crc kubenswrapper[4754]: I0105 20:30:18.964779 4754 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/320790ad-397e-4602-a40f-cb5c17f4f058-catalog-content\") pod \"redhat-marketplace-lqlmh\" (UID: \"320790ad-397e-4602-a40f-cb5c17f4f058\") " pod="openshift-marketplace/redhat-marketplace-lqlmh" Jan 05 20:30:18 crc kubenswrapper[4754]: I0105 20:30:18.964947 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/320790ad-397e-4602-a40f-cb5c17f4f058-utilities\") pod \"redhat-marketplace-lqlmh\" (UID: \"320790ad-397e-4602-a40f-cb5c17f4f058\") " pod="openshift-marketplace/redhat-marketplace-lqlmh" Jan 05 20:30:18 crc kubenswrapper[4754]: I0105 20:30:18.965132 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/320790ad-397e-4602-a40f-cb5c17f4f058-catalog-content\") pod \"redhat-marketplace-lqlmh\" (UID: \"320790ad-397e-4602-a40f-cb5c17f4f058\") " pod="openshift-marketplace/redhat-marketplace-lqlmh" Jan 05 20:30:18 crc kubenswrapper[4754]: I0105 20:30:18.965444 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/320790ad-397e-4602-a40f-cb5c17f4f058-utilities\") pod \"redhat-marketplace-lqlmh\" (UID: \"320790ad-397e-4602-a40f-cb5c17f4f058\") " pod="openshift-marketplace/redhat-marketplace-lqlmh" Jan 05 20:30:19 crc kubenswrapper[4754]: I0105 20:30:19.001032 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfd6r\" (UniqueName: \"kubernetes.io/projected/320790ad-397e-4602-a40f-cb5c17f4f058-kube-api-access-kfd6r\") pod \"redhat-marketplace-lqlmh\" (UID: \"320790ad-397e-4602-a40f-cb5c17f4f058\") " pod="openshift-marketplace/redhat-marketplace-lqlmh" Jan 05 20:30:19 crc kubenswrapper[4754]: I0105 20:30:19.075015 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lqlmh" Jan 05 20:30:19 crc kubenswrapper[4754]: I0105 20:30:19.602927 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lqlmh"] Jan 05 20:30:19 crc kubenswrapper[4754]: I0105 20:30:19.885153 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lqlmh" event={"ID":"320790ad-397e-4602-a40f-cb5c17f4f058","Type":"ContainerStarted","Data":"50c78ff9bd682caa38db87402bc8746078e056c66d1ad68b71a61df7ef682a39"} Jan 05 20:30:21 crc kubenswrapper[4754]: I0105 20:30:21.826061 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 05 20:30:21 crc kubenswrapper[4754]: I0105 20:30:21.910775 4754 generic.go:334] "Generic (PLEG): container finished" podID="f7fb48ad-b043-4119-8a98-dbbd166c20e2" containerID="9ed25582e678df0bfb1c163b3ab9a1b63c1648151095697fe695a42578b2f76a" exitCode=0 Jan 05 20:30:21 crc kubenswrapper[4754]: I0105 20:30:21.910828 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7fb48ad-b043-4119-8a98-dbbd166c20e2","Type":"ContainerDied","Data":"9ed25582e678df0bfb1c163b3ab9a1b63c1648151095697fe695a42578b2f76a"} Jan 05 20:30:21 crc kubenswrapper[4754]: I0105 20:30:21.910855 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7fb48ad-b043-4119-8a98-dbbd166c20e2","Type":"ContainerDied","Data":"53245616af107a8bd083b72874fc4233a09f9f26c81d09c64c71dc50283164fc"} Jan 05 20:30:21 crc kubenswrapper[4754]: I0105 20:30:21.910872 4754 scope.go:117] "RemoveContainer" containerID="bda1abaf74c1789c7613a58cf0498c2aa25dbcfd156915db9c94fe108951a0ea" Jan 05 20:30:21 crc kubenswrapper[4754]: I0105 20:30:21.910993 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 05 20:30:21 crc kubenswrapper[4754]: I0105 20:30:21.915371 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lqlmh" event={"ID":"320790ad-397e-4602-a40f-cb5c17f4f058","Type":"ContainerDied","Data":"754f78f28f4a4a67df88f313dff2616b2934c7b378d47556a6c00c81be6af32b"} Jan 05 20:30:21 crc kubenswrapper[4754]: I0105 20:30:21.915451 4754 generic.go:334] "Generic (PLEG): container finished" podID="320790ad-397e-4602-a40f-cb5c17f4f058" containerID="754f78f28f4a4a67df88f313dff2616b2934c7b378d47556a6c00c81be6af32b" exitCode=0 Jan 05 20:30:21 crc kubenswrapper[4754]: I0105 20:30:21.935925 4754 scope.go:117] "RemoveContainer" containerID="514bfe13c7e1cb067fd15b30adc7e84428fa40f9e6228c1a1741d84dcfe90735" Jan 05 20:30:21 crc kubenswrapper[4754]: I0105 20:30:21.948180 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7fb48ad-b043-4119-8a98-dbbd166c20e2-config-data\") pod \"f7fb48ad-b043-4119-8a98-dbbd166c20e2\" (UID: \"f7fb48ad-b043-4119-8a98-dbbd166c20e2\") " Jan 05 20:30:21 crc kubenswrapper[4754]: I0105 20:30:21.948390 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x59gh\" (UniqueName: \"kubernetes.io/projected/f7fb48ad-b043-4119-8a98-dbbd166c20e2-kube-api-access-x59gh\") pod \"f7fb48ad-b043-4119-8a98-dbbd166c20e2\" (UID: \"f7fb48ad-b043-4119-8a98-dbbd166c20e2\") " Jan 05 20:30:21 crc kubenswrapper[4754]: I0105 20:30:21.948426 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7fb48ad-b043-4119-8a98-dbbd166c20e2-run-httpd\") pod \"f7fb48ad-b043-4119-8a98-dbbd166c20e2\" (UID: \"f7fb48ad-b043-4119-8a98-dbbd166c20e2\") " Jan 05 20:30:21 crc kubenswrapper[4754]: I0105 20:30:21.948533 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7fb48ad-b043-4119-8a98-dbbd166c20e2-log-httpd\") pod \"f7fb48ad-b043-4119-8a98-dbbd166c20e2\" (UID: \"f7fb48ad-b043-4119-8a98-dbbd166c20e2\") " Jan 05 20:30:21 crc kubenswrapper[4754]: I0105 20:30:21.948578 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7fb48ad-b043-4119-8a98-dbbd166c20e2-combined-ca-bundle\") pod \"f7fb48ad-b043-4119-8a98-dbbd166c20e2\" (UID: \"f7fb48ad-b043-4119-8a98-dbbd166c20e2\") " Jan 05 20:30:21 crc kubenswrapper[4754]: I0105 20:30:21.948615 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f7fb48ad-b043-4119-8a98-dbbd166c20e2-sg-core-conf-yaml\") pod \"f7fb48ad-b043-4119-8a98-dbbd166c20e2\" (UID: \"f7fb48ad-b043-4119-8a98-dbbd166c20e2\") " Jan 05 20:30:21 crc kubenswrapper[4754]: I0105 20:30:21.948632 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7fb48ad-b043-4119-8a98-dbbd166c20e2-scripts\") pod \"f7fb48ad-b043-4119-8a98-dbbd166c20e2\" (UID: \"f7fb48ad-b043-4119-8a98-dbbd166c20e2\") " Jan 05 20:30:21 crc kubenswrapper[4754]: I0105 20:30:21.950671 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7fb48ad-b043-4119-8a98-dbbd166c20e2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f7fb48ad-b043-4119-8a98-dbbd166c20e2" (UID: "f7fb48ad-b043-4119-8a98-dbbd166c20e2"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:30:21 crc kubenswrapper[4754]: I0105 20:30:21.950910 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7fb48ad-b043-4119-8a98-dbbd166c20e2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f7fb48ad-b043-4119-8a98-dbbd166c20e2" (UID: "f7fb48ad-b043-4119-8a98-dbbd166c20e2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:30:21 crc kubenswrapper[4754]: I0105 20:30:21.956397 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7fb48ad-b043-4119-8a98-dbbd166c20e2-scripts" (OuterVolumeSpecName: "scripts") pod "f7fb48ad-b043-4119-8a98-dbbd166c20e2" (UID: "f7fb48ad-b043-4119-8a98-dbbd166c20e2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:30:21 crc kubenswrapper[4754]: I0105 20:30:21.961825 4754 scope.go:117] "RemoveContainer" containerID="2b32e86ab52abeb5c17dcd8a40d47a4900fe89491a177db1e87d2f3dbe127484" Jan 05 20:30:21 crc kubenswrapper[4754]: I0105 20:30:21.965692 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7fb48ad-b043-4119-8a98-dbbd166c20e2-kube-api-access-x59gh" (OuterVolumeSpecName: "kube-api-access-x59gh") pod "f7fb48ad-b043-4119-8a98-dbbd166c20e2" (UID: "f7fb48ad-b043-4119-8a98-dbbd166c20e2"). InnerVolumeSpecName "kube-api-access-x59gh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:30:22 crc kubenswrapper[4754]: I0105 20:30:22.003430 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7fb48ad-b043-4119-8a98-dbbd166c20e2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f7fb48ad-b043-4119-8a98-dbbd166c20e2" (UID: "f7fb48ad-b043-4119-8a98-dbbd166c20e2"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:30:22 crc kubenswrapper[4754]: I0105 20:30:22.051549 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x59gh\" (UniqueName: \"kubernetes.io/projected/f7fb48ad-b043-4119-8a98-dbbd166c20e2-kube-api-access-x59gh\") on node \"crc\" DevicePath \"\"" Jan 05 20:30:22 crc kubenswrapper[4754]: I0105 20:30:22.051592 4754 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7fb48ad-b043-4119-8a98-dbbd166c20e2-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 05 20:30:22 crc kubenswrapper[4754]: I0105 20:30:22.051604 4754 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7fb48ad-b043-4119-8a98-dbbd166c20e2-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 05 20:30:22 crc kubenswrapper[4754]: I0105 20:30:22.051617 4754 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f7fb48ad-b043-4119-8a98-dbbd166c20e2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 05 20:30:22 crc kubenswrapper[4754]: I0105 20:30:22.051628 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7fb48ad-b043-4119-8a98-dbbd166c20e2-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 20:30:22 crc kubenswrapper[4754]: I0105 20:30:22.071700 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7fb48ad-b043-4119-8a98-dbbd166c20e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f7fb48ad-b043-4119-8a98-dbbd166c20e2" (UID: "f7fb48ad-b043-4119-8a98-dbbd166c20e2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:30:22 crc kubenswrapper[4754]: I0105 20:30:22.107929 4754 scope.go:117] "RemoveContainer" containerID="9ed25582e678df0bfb1c163b3ab9a1b63c1648151095697fe695a42578b2f76a" Jan 05 20:30:22 crc kubenswrapper[4754]: I0105 20:30:22.112374 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7fb48ad-b043-4119-8a98-dbbd166c20e2-config-data" (OuterVolumeSpecName: "config-data") pod "f7fb48ad-b043-4119-8a98-dbbd166c20e2" (UID: "f7fb48ad-b043-4119-8a98-dbbd166c20e2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:30:22 crc kubenswrapper[4754]: I0105 20:30:22.137363 4754 scope.go:117] "RemoveContainer" containerID="bda1abaf74c1789c7613a58cf0498c2aa25dbcfd156915db9c94fe108951a0ea" Jan 05 20:30:22 crc kubenswrapper[4754]: E0105 20:30:22.137755 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bda1abaf74c1789c7613a58cf0498c2aa25dbcfd156915db9c94fe108951a0ea\": container with ID starting with bda1abaf74c1789c7613a58cf0498c2aa25dbcfd156915db9c94fe108951a0ea not found: ID does not exist" containerID="bda1abaf74c1789c7613a58cf0498c2aa25dbcfd156915db9c94fe108951a0ea" Jan 05 20:30:22 crc kubenswrapper[4754]: I0105 20:30:22.137786 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bda1abaf74c1789c7613a58cf0498c2aa25dbcfd156915db9c94fe108951a0ea"} err="failed to get container status \"bda1abaf74c1789c7613a58cf0498c2aa25dbcfd156915db9c94fe108951a0ea\": rpc error: code = NotFound desc = could not find container \"bda1abaf74c1789c7613a58cf0498c2aa25dbcfd156915db9c94fe108951a0ea\": container with ID starting with bda1abaf74c1789c7613a58cf0498c2aa25dbcfd156915db9c94fe108951a0ea not found: ID does not exist" Jan 05 20:30:22 crc kubenswrapper[4754]: I0105 20:30:22.137806 4754 scope.go:117] "RemoveContainer" 
containerID="514bfe13c7e1cb067fd15b30adc7e84428fa40f9e6228c1a1741d84dcfe90735" Jan 05 20:30:22 crc kubenswrapper[4754]: E0105 20:30:22.137999 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"514bfe13c7e1cb067fd15b30adc7e84428fa40f9e6228c1a1741d84dcfe90735\": container with ID starting with 514bfe13c7e1cb067fd15b30adc7e84428fa40f9e6228c1a1741d84dcfe90735 not found: ID does not exist" containerID="514bfe13c7e1cb067fd15b30adc7e84428fa40f9e6228c1a1741d84dcfe90735" Jan 05 20:30:22 crc kubenswrapper[4754]: I0105 20:30:22.138028 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"514bfe13c7e1cb067fd15b30adc7e84428fa40f9e6228c1a1741d84dcfe90735"} err="failed to get container status \"514bfe13c7e1cb067fd15b30adc7e84428fa40f9e6228c1a1741d84dcfe90735\": rpc error: code = NotFound desc = could not find container \"514bfe13c7e1cb067fd15b30adc7e84428fa40f9e6228c1a1741d84dcfe90735\": container with ID starting with 514bfe13c7e1cb067fd15b30adc7e84428fa40f9e6228c1a1741d84dcfe90735 not found: ID does not exist" Jan 05 20:30:22 crc kubenswrapper[4754]: I0105 20:30:22.138041 4754 scope.go:117] "RemoveContainer" containerID="2b32e86ab52abeb5c17dcd8a40d47a4900fe89491a177db1e87d2f3dbe127484" Jan 05 20:30:22 crc kubenswrapper[4754]: E0105 20:30:22.138260 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b32e86ab52abeb5c17dcd8a40d47a4900fe89491a177db1e87d2f3dbe127484\": container with ID starting with 2b32e86ab52abeb5c17dcd8a40d47a4900fe89491a177db1e87d2f3dbe127484 not found: ID does not exist" containerID="2b32e86ab52abeb5c17dcd8a40d47a4900fe89491a177db1e87d2f3dbe127484" Jan 05 20:30:22 crc kubenswrapper[4754]: I0105 20:30:22.138280 4754 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2b32e86ab52abeb5c17dcd8a40d47a4900fe89491a177db1e87d2f3dbe127484"} err="failed to get container status \"2b32e86ab52abeb5c17dcd8a40d47a4900fe89491a177db1e87d2f3dbe127484\": rpc error: code = NotFound desc = could not find container \"2b32e86ab52abeb5c17dcd8a40d47a4900fe89491a177db1e87d2f3dbe127484\": container with ID starting with 2b32e86ab52abeb5c17dcd8a40d47a4900fe89491a177db1e87d2f3dbe127484 not found: ID does not exist" Jan 05 20:30:22 crc kubenswrapper[4754]: I0105 20:30:22.138309 4754 scope.go:117] "RemoveContainer" containerID="9ed25582e678df0bfb1c163b3ab9a1b63c1648151095697fe695a42578b2f76a" Jan 05 20:30:22 crc kubenswrapper[4754]: E0105 20:30:22.138541 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ed25582e678df0bfb1c163b3ab9a1b63c1648151095697fe695a42578b2f76a\": container with ID starting with 9ed25582e678df0bfb1c163b3ab9a1b63c1648151095697fe695a42578b2f76a not found: ID does not exist" containerID="9ed25582e678df0bfb1c163b3ab9a1b63c1648151095697fe695a42578b2f76a" Jan 05 20:30:22 crc kubenswrapper[4754]: I0105 20:30:22.138562 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ed25582e678df0bfb1c163b3ab9a1b63c1648151095697fe695a42578b2f76a"} err="failed to get container status \"9ed25582e678df0bfb1c163b3ab9a1b63c1648151095697fe695a42578b2f76a\": rpc error: code = NotFound desc = could not find container \"9ed25582e678df0bfb1c163b3ab9a1b63c1648151095697fe695a42578b2f76a\": container with ID starting with 9ed25582e678df0bfb1c163b3ab9a1b63c1648151095697fe695a42578b2f76a not found: ID does not exist" Jan 05 20:30:22 crc kubenswrapper[4754]: I0105 20:30:22.153370 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7fb48ad-b043-4119-8a98-dbbd166c20e2-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 20:30:22 crc kubenswrapper[4754]: 
I0105 20:30:22.153402 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7fb48ad-b043-4119-8a98-dbbd166c20e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:30:22 crc kubenswrapper[4754]: I0105 20:30:22.255271 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:30:22 crc kubenswrapper[4754]: I0105 20:30:22.270734 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:30:22 crc kubenswrapper[4754]: I0105 20:30:22.281942 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:30:22 crc kubenswrapper[4754]: E0105 20:30:22.282661 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7fb48ad-b043-4119-8a98-dbbd166c20e2" containerName="ceilometer-central-agent" Jan 05 20:30:22 crc kubenswrapper[4754]: I0105 20:30:22.282730 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7fb48ad-b043-4119-8a98-dbbd166c20e2" containerName="ceilometer-central-agent" Jan 05 20:30:22 crc kubenswrapper[4754]: E0105 20:30:22.282814 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7fb48ad-b043-4119-8a98-dbbd166c20e2" containerName="ceilometer-notification-agent" Jan 05 20:30:22 crc kubenswrapper[4754]: I0105 20:30:22.282865 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7fb48ad-b043-4119-8a98-dbbd166c20e2" containerName="ceilometer-notification-agent" Jan 05 20:30:22 crc kubenswrapper[4754]: E0105 20:30:22.282920 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7fb48ad-b043-4119-8a98-dbbd166c20e2" containerName="sg-core" Jan 05 20:30:22 crc kubenswrapper[4754]: I0105 20:30:22.282981 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7fb48ad-b043-4119-8a98-dbbd166c20e2" containerName="sg-core" Jan 05 20:30:22 crc kubenswrapper[4754]: E0105 20:30:22.283050 4754 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f7fb48ad-b043-4119-8a98-dbbd166c20e2" containerName="proxy-httpd" Jan 05 20:30:22 crc kubenswrapper[4754]: I0105 20:30:22.283104 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7fb48ad-b043-4119-8a98-dbbd166c20e2" containerName="proxy-httpd" Jan 05 20:30:22 crc kubenswrapper[4754]: I0105 20:30:22.283383 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7fb48ad-b043-4119-8a98-dbbd166c20e2" containerName="proxy-httpd" Jan 05 20:30:22 crc kubenswrapper[4754]: I0105 20:30:22.283464 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7fb48ad-b043-4119-8a98-dbbd166c20e2" containerName="ceilometer-notification-agent" Jan 05 20:30:22 crc kubenswrapper[4754]: I0105 20:30:22.283527 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7fb48ad-b043-4119-8a98-dbbd166c20e2" containerName="sg-core" Jan 05 20:30:22 crc kubenswrapper[4754]: I0105 20:30:22.283601 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7fb48ad-b043-4119-8a98-dbbd166c20e2" containerName="ceilometer-central-agent" Jan 05 20:30:22 crc kubenswrapper[4754]: I0105 20:30:22.285705 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 05 20:30:22 crc kubenswrapper[4754]: I0105 20:30:22.288215 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 05 20:30:22 crc kubenswrapper[4754]: I0105 20:30:22.288436 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 05 20:30:22 crc kubenswrapper[4754]: I0105 20:30:22.292879 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:30:22 crc kubenswrapper[4754]: I0105 20:30:22.460487 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b5ccffd-7569-4152-94b6-66f64e575513-config-data\") pod \"ceilometer-0\" (UID: \"5b5ccffd-7569-4152-94b6-66f64e575513\") " pod="openstack/ceilometer-0" Jan 05 20:30:22 crc kubenswrapper[4754]: I0105 20:30:22.460583 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b5ccffd-7569-4152-94b6-66f64e575513-run-httpd\") pod \"ceilometer-0\" (UID: \"5b5ccffd-7569-4152-94b6-66f64e575513\") " pod="openstack/ceilometer-0" Jan 05 20:30:22 crc kubenswrapper[4754]: I0105 20:30:22.460669 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzr9b\" (UniqueName: \"kubernetes.io/projected/5b5ccffd-7569-4152-94b6-66f64e575513-kube-api-access-pzr9b\") pod \"ceilometer-0\" (UID: \"5b5ccffd-7569-4152-94b6-66f64e575513\") " pod="openstack/ceilometer-0" Jan 05 20:30:22 crc kubenswrapper[4754]: I0105 20:30:22.460769 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b5ccffd-7569-4152-94b6-66f64e575513-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"5b5ccffd-7569-4152-94b6-66f64e575513\") " pod="openstack/ceilometer-0" Jan 05 20:30:22 crc kubenswrapper[4754]: I0105 20:30:22.460985 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b5ccffd-7569-4152-94b6-66f64e575513-scripts\") pod \"ceilometer-0\" (UID: \"5b5ccffd-7569-4152-94b6-66f64e575513\") " pod="openstack/ceilometer-0" Jan 05 20:30:22 crc kubenswrapper[4754]: I0105 20:30:22.461098 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5b5ccffd-7569-4152-94b6-66f64e575513-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5b5ccffd-7569-4152-94b6-66f64e575513\") " pod="openstack/ceilometer-0" Jan 05 20:30:22 crc kubenswrapper[4754]: I0105 20:30:22.461214 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b5ccffd-7569-4152-94b6-66f64e575513-log-httpd\") pod \"ceilometer-0\" (UID: \"5b5ccffd-7569-4152-94b6-66f64e575513\") " pod="openstack/ceilometer-0" Jan 05 20:30:22 crc kubenswrapper[4754]: I0105 20:30:22.562928 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzr9b\" (UniqueName: \"kubernetes.io/projected/5b5ccffd-7569-4152-94b6-66f64e575513-kube-api-access-pzr9b\") pod \"ceilometer-0\" (UID: \"5b5ccffd-7569-4152-94b6-66f64e575513\") " pod="openstack/ceilometer-0" Jan 05 20:30:22 crc kubenswrapper[4754]: I0105 20:30:22.562981 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b5ccffd-7569-4152-94b6-66f64e575513-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5b5ccffd-7569-4152-94b6-66f64e575513\") " pod="openstack/ceilometer-0" Jan 05 20:30:22 crc kubenswrapper[4754]: I0105 20:30:22.563039 4754 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b5ccffd-7569-4152-94b6-66f64e575513-scripts\") pod \"ceilometer-0\" (UID: \"5b5ccffd-7569-4152-94b6-66f64e575513\") " pod="openstack/ceilometer-0" Jan 05 20:30:22 crc kubenswrapper[4754]: I0105 20:30:22.563116 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5b5ccffd-7569-4152-94b6-66f64e575513-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5b5ccffd-7569-4152-94b6-66f64e575513\") " pod="openstack/ceilometer-0" Jan 05 20:30:22 crc kubenswrapper[4754]: I0105 20:30:22.563189 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b5ccffd-7569-4152-94b6-66f64e575513-log-httpd\") pod \"ceilometer-0\" (UID: \"5b5ccffd-7569-4152-94b6-66f64e575513\") " pod="openstack/ceilometer-0" Jan 05 20:30:22 crc kubenswrapper[4754]: I0105 20:30:22.563236 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b5ccffd-7569-4152-94b6-66f64e575513-config-data\") pod \"ceilometer-0\" (UID: \"5b5ccffd-7569-4152-94b6-66f64e575513\") " pod="openstack/ceilometer-0" Jan 05 20:30:22 crc kubenswrapper[4754]: I0105 20:30:22.563265 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b5ccffd-7569-4152-94b6-66f64e575513-run-httpd\") pod \"ceilometer-0\" (UID: \"5b5ccffd-7569-4152-94b6-66f64e575513\") " pod="openstack/ceilometer-0" Jan 05 20:30:22 crc kubenswrapper[4754]: I0105 20:30:22.563868 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b5ccffd-7569-4152-94b6-66f64e575513-run-httpd\") pod \"ceilometer-0\" (UID: \"5b5ccffd-7569-4152-94b6-66f64e575513\") " 
pod="openstack/ceilometer-0" Jan 05 20:30:22 crc kubenswrapper[4754]: I0105 20:30:22.563921 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b5ccffd-7569-4152-94b6-66f64e575513-log-httpd\") pod \"ceilometer-0\" (UID: \"5b5ccffd-7569-4152-94b6-66f64e575513\") " pod="openstack/ceilometer-0" Jan 05 20:30:22 crc kubenswrapper[4754]: I0105 20:30:22.567410 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5b5ccffd-7569-4152-94b6-66f64e575513-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5b5ccffd-7569-4152-94b6-66f64e575513\") " pod="openstack/ceilometer-0" Jan 05 20:30:22 crc kubenswrapper[4754]: I0105 20:30:22.567657 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b5ccffd-7569-4152-94b6-66f64e575513-scripts\") pod \"ceilometer-0\" (UID: \"5b5ccffd-7569-4152-94b6-66f64e575513\") " pod="openstack/ceilometer-0" Jan 05 20:30:22 crc kubenswrapper[4754]: I0105 20:30:22.568022 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b5ccffd-7569-4152-94b6-66f64e575513-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5b5ccffd-7569-4152-94b6-66f64e575513\") " pod="openstack/ceilometer-0" Jan 05 20:30:22 crc kubenswrapper[4754]: I0105 20:30:22.572061 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b5ccffd-7569-4152-94b6-66f64e575513-config-data\") pod \"ceilometer-0\" (UID: \"5b5ccffd-7569-4152-94b6-66f64e575513\") " pod="openstack/ceilometer-0" Jan 05 20:30:22 crc kubenswrapper[4754]: I0105 20:30:22.591024 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzr9b\" (UniqueName: 
\"kubernetes.io/projected/5b5ccffd-7569-4152-94b6-66f64e575513-kube-api-access-pzr9b\") pod \"ceilometer-0\" (UID: \"5b5ccffd-7569-4152-94b6-66f64e575513\") " pod="openstack/ceilometer-0" Jan 05 20:30:22 crc kubenswrapper[4754]: I0105 20:30:22.661210 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 05 20:30:23 crc kubenswrapper[4754]: I0105 20:30:23.205132 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:30:23 crc kubenswrapper[4754]: I0105 20:30:23.605763 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7fb48ad-b043-4119-8a98-dbbd166c20e2" path="/var/lib/kubelet/pods/f7fb48ad-b043-4119-8a98-dbbd166c20e2/volumes" Jan 05 20:30:23 crc kubenswrapper[4754]: I0105 20:30:23.963514 4754 generic.go:334] "Generic (PLEG): container finished" podID="320790ad-397e-4602-a40f-cb5c17f4f058" containerID="656578d7fab443cf4121cd672d934f808e8344cee7f6b69fe1cb8f2e788bb30b" exitCode=0 Jan 05 20:30:23 crc kubenswrapper[4754]: I0105 20:30:23.963704 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lqlmh" event={"ID":"320790ad-397e-4602-a40f-cb5c17f4f058","Type":"ContainerDied","Data":"656578d7fab443cf4121cd672d934f808e8344cee7f6b69fe1cb8f2e788bb30b"} Jan 05 20:30:23 crc kubenswrapper[4754]: I0105 20:30:23.967490 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b5ccffd-7569-4152-94b6-66f64e575513","Type":"ContainerStarted","Data":"ce111058c144ec301ca3bb15ecc5ea2dc83e4ee11f84bea73fc49a5378d38f92"} Jan 05 20:30:24 crc kubenswrapper[4754]: I0105 20:30:24.977969 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b5ccffd-7569-4152-94b6-66f64e575513","Type":"ContainerStarted","Data":"3f77168eac9fed1537330bac3854597fc9ced4d4d835f93ffc65240548873945"} Jan 05 20:30:25 crc kubenswrapper[4754]: I0105 20:30:25.993214 4754 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lqlmh" event={"ID":"320790ad-397e-4602-a40f-cb5c17f4f058","Type":"ContainerStarted","Data":"e661e4c7e14c5dc87758ea4d9b5401babb82189bacda53038bea1ba6461c78e2"} Jan 05 20:30:26 crc kubenswrapper[4754]: I0105 20:30:26.002237 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b5ccffd-7569-4152-94b6-66f64e575513","Type":"ContainerStarted","Data":"23f973fe9d1d539dba30fe6a937c1666b107724623abc4fb79b25656462dceb9"} Jan 05 20:30:26 crc kubenswrapper[4754]: I0105 20:30:26.019475 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lqlmh" podStartSLOduration=4.614052897 podStartE2EDuration="8.019457394s" podCreationTimestamp="2026-01-05 20:30:18 +0000 UTC" firstStartedPulling="2026-01-05 20:30:21.923493035 +0000 UTC m=+1508.632676909" lastFinishedPulling="2026-01-05 20:30:25.328897532 +0000 UTC m=+1512.038081406" observedRunningTime="2026-01-05 20:30:26.012885022 +0000 UTC m=+1512.722068886" watchObservedRunningTime="2026-01-05 20:30:26.019457394 +0000 UTC m=+1512.728641268" Jan 05 20:30:27 crc kubenswrapper[4754]: I0105 20:30:27.016086 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b5ccffd-7569-4152-94b6-66f64e575513","Type":"ContainerStarted","Data":"beba2bc2cb011c48c0c552b949af2fcf6d8bd272067193205a3381d0ef194ceb"} Jan 05 20:30:29 crc kubenswrapper[4754]: I0105 20:30:29.038963 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b5ccffd-7569-4152-94b6-66f64e575513","Type":"ContainerStarted","Data":"d25af4d3c605a715cae78346b7afd96dc3ffb94ad95ebc53c5028c1c9acf80bf"} Jan 05 20:30:29 crc kubenswrapper[4754]: I0105 20:30:29.039531 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 05 20:30:29 crc kubenswrapper[4754]: I0105 
20:30:29.066415 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.017993431 podStartE2EDuration="7.066392769s" podCreationTimestamp="2026-01-05 20:30:22 +0000 UTC" firstStartedPulling="2026-01-05 20:30:23.206092313 +0000 UTC m=+1509.915276187" lastFinishedPulling="2026-01-05 20:30:28.254491651 +0000 UTC m=+1514.963675525" observedRunningTime="2026-01-05 20:30:29.063555684 +0000 UTC m=+1515.772739558" watchObservedRunningTime="2026-01-05 20:30:29.066392769 +0000 UTC m=+1515.775576643" Jan 05 20:30:29 crc kubenswrapper[4754]: I0105 20:30:29.075212 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lqlmh" Jan 05 20:30:29 crc kubenswrapper[4754]: I0105 20:30:29.075577 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lqlmh" Jan 05 20:30:29 crc kubenswrapper[4754]: I0105 20:30:29.126821 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lqlmh" Jan 05 20:30:30 crc kubenswrapper[4754]: I0105 20:30:30.118695 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lqlmh" Jan 05 20:30:30 crc kubenswrapper[4754]: I0105 20:30:30.226617 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lqlmh"] Jan 05 20:30:32 crc kubenswrapper[4754]: I0105 20:30:32.068122 4754 generic.go:334] "Generic (PLEG): container finished" podID="1192f0c3-2df8-44ea-a767-937f965b46f3" containerID="24c367b8e96f1d402bc3faae607541f034766953f4afd27b72c9e6c7b7b9256a" exitCode=0 Jan 05 20:30:32 crc kubenswrapper[4754]: I0105 20:30:32.068519 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lqlmh" podUID="320790ad-397e-4602-a40f-cb5c17f4f058" 
containerName="registry-server" containerID="cri-o://e661e4c7e14c5dc87758ea4d9b5401babb82189bacda53038bea1ba6461c78e2" gracePeriod=2 Jan 05 20:30:32 crc kubenswrapper[4754]: I0105 20:30:32.068776 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-gwk7q" event={"ID":"1192f0c3-2df8-44ea-a767-937f965b46f3","Type":"ContainerDied","Data":"24c367b8e96f1d402bc3faae607541f034766953f4afd27b72c9e6c7b7b9256a"} Jan 05 20:30:32 crc kubenswrapper[4754]: I0105 20:30:32.637265 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lqlmh" Jan 05 20:30:32 crc kubenswrapper[4754]: I0105 20:30:32.744266 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/320790ad-397e-4602-a40f-cb5c17f4f058-utilities\") pod \"320790ad-397e-4602-a40f-cb5c17f4f058\" (UID: \"320790ad-397e-4602-a40f-cb5c17f4f058\") " Jan 05 20:30:32 crc kubenswrapper[4754]: I0105 20:30:32.744404 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/320790ad-397e-4602-a40f-cb5c17f4f058-catalog-content\") pod \"320790ad-397e-4602-a40f-cb5c17f4f058\" (UID: \"320790ad-397e-4602-a40f-cb5c17f4f058\") " Jan 05 20:30:32 crc kubenswrapper[4754]: I0105 20:30:32.744778 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfd6r\" (UniqueName: \"kubernetes.io/projected/320790ad-397e-4602-a40f-cb5c17f4f058-kube-api-access-kfd6r\") pod \"320790ad-397e-4602-a40f-cb5c17f4f058\" (UID: \"320790ad-397e-4602-a40f-cb5c17f4f058\") " Jan 05 20:30:32 crc kubenswrapper[4754]: I0105 20:30:32.744911 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/320790ad-397e-4602-a40f-cb5c17f4f058-utilities" (OuterVolumeSpecName: "utilities") pod 
"320790ad-397e-4602-a40f-cb5c17f4f058" (UID: "320790ad-397e-4602-a40f-cb5c17f4f058"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:30:32 crc kubenswrapper[4754]: I0105 20:30:32.745473 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/320790ad-397e-4602-a40f-cb5c17f4f058-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 20:30:32 crc kubenswrapper[4754]: I0105 20:30:32.751943 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/320790ad-397e-4602-a40f-cb5c17f4f058-kube-api-access-kfd6r" (OuterVolumeSpecName: "kube-api-access-kfd6r") pod "320790ad-397e-4602-a40f-cb5c17f4f058" (UID: "320790ad-397e-4602-a40f-cb5c17f4f058"). InnerVolumeSpecName "kube-api-access-kfd6r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:30:32 crc kubenswrapper[4754]: I0105 20:30:32.770205 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/320790ad-397e-4602-a40f-cb5c17f4f058-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "320790ad-397e-4602-a40f-cb5c17f4f058" (UID: "320790ad-397e-4602-a40f-cb5c17f4f058"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:30:32 crc kubenswrapper[4754]: I0105 20:30:32.849154 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfd6r\" (UniqueName: \"kubernetes.io/projected/320790ad-397e-4602-a40f-cb5c17f4f058-kube-api-access-kfd6r\") on node \"crc\" DevicePath \"\"" Jan 05 20:30:32 crc kubenswrapper[4754]: I0105 20:30:32.849205 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/320790ad-397e-4602-a40f-cb5c17f4f058-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 20:30:33 crc kubenswrapper[4754]: I0105 20:30:33.075978 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:30:33 crc kubenswrapper[4754]: I0105 20:30:33.076566 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5b5ccffd-7569-4152-94b6-66f64e575513" containerName="proxy-httpd" containerID="cri-o://d25af4d3c605a715cae78346b7afd96dc3ffb94ad95ebc53c5028c1c9acf80bf" gracePeriod=30 Jan 05 20:30:33 crc kubenswrapper[4754]: I0105 20:30:33.076743 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5b5ccffd-7569-4152-94b6-66f64e575513" containerName="sg-core" containerID="cri-o://beba2bc2cb011c48c0c552b949af2fcf6d8bd272067193205a3381d0ef194ceb" gracePeriod=30 Jan 05 20:30:33 crc kubenswrapper[4754]: I0105 20:30:33.076785 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5b5ccffd-7569-4152-94b6-66f64e575513" containerName="ceilometer-notification-agent" containerID="cri-o://23f973fe9d1d539dba30fe6a937c1666b107724623abc4fb79b25656462dceb9" gracePeriod=30 Jan 05 20:30:33 crc kubenswrapper[4754]: I0105 20:30:33.076906 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="5b5ccffd-7569-4152-94b6-66f64e575513" containerName="ceilometer-central-agent" containerID="cri-o://3f77168eac9fed1537330bac3854597fc9ced4d4d835f93ffc65240548873945" gracePeriod=30 Jan 05 20:30:33 crc kubenswrapper[4754]: I0105 20:30:33.081363 4754 generic.go:334] "Generic (PLEG): container finished" podID="320790ad-397e-4602-a40f-cb5c17f4f058" containerID="e661e4c7e14c5dc87758ea4d9b5401babb82189bacda53038bea1ba6461c78e2" exitCode=0 Jan 05 20:30:33 crc kubenswrapper[4754]: I0105 20:30:33.081527 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lqlmh" event={"ID":"320790ad-397e-4602-a40f-cb5c17f4f058","Type":"ContainerDied","Data":"e661e4c7e14c5dc87758ea4d9b5401babb82189bacda53038bea1ba6461c78e2"} Jan 05 20:30:33 crc kubenswrapper[4754]: I0105 20:30:33.081558 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lqlmh" event={"ID":"320790ad-397e-4602-a40f-cb5c17f4f058","Type":"ContainerDied","Data":"50c78ff9bd682caa38db87402bc8746078e056c66d1ad68b71a61df7ef682a39"} Jan 05 20:30:33 crc kubenswrapper[4754]: I0105 20:30:33.081577 4754 scope.go:117] "RemoveContainer" containerID="e661e4c7e14c5dc87758ea4d9b5401babb82189bacda53038bea1ba6461c78e2" Jan 05 20:30:33 crc kubenswrapper[4754]: I0105 20:30:33.081698 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lqlmh" Jan 05 20:30:33 crc kubenswrapper[4754]: I0105 20:30:33.140746 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lqlmh"] Jan 05 20:30:33 crc kubenswrapper[4754]: I0105 20:30:33.157210 4754 scope.go:117] "RemoveContainer" containerID="656578d7fab443cf4121cd672d934f808e8344cee7f6b69fe1cb8f2e788bb30b" Jan 05 20:30:33 crc kubenswrapper[4754]: I0105 20:30:33.176735 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lqlmh"] Jan 05 20:30:33 crc kubenswrapper[4754]: I0105 20:30:33.261718 4754 scope.go:117] "RemoveContainer" containerID="754f78f28f4a4a67df88f313dff2616b2934c7b378d47556a6c00c81be6af32b" Jan 05 20:30:33 crc kubenswrapper[4754]: I0105 20:30:33.354497 4754 scope.go:117] "RemoveContainer" containerID="e661e4c7e14c5dc87758ea4d9b5401babb82189bacda53038bea1ba6461c78e2" Jan 05 20:30:33 crc kubenswrapper[4754]: E0105 20:30:33.354980 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e661e4c7e14c5dc87758ea4d9b5401babb82189bacda53038bea1ba6461c78e2\": container with ID starting with e661e4c7e14c5dc87758ea4d9b5401babb82189bacda53038bea1ba6461c78e2 not found: ID does not exist" containerID="e661e4c7e14c5dc87758ea4d9b5401babb82189bacda53038bea1ba6461c78e2" Jan 05 20:30:33 crc kubenswrapper[4754]: I0105 20:30:33.355005 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e661e4c7e14c5dc87758ea4d9b5401babb82189bacda53038bea1ba6461c78e2"} err="failed to get container status \"e661e4c7e14c5dc87758ea4d9b5401babb82189bacda53038bea1ba6461c78e2\": rpc error: code = NotFound desc = could not find container \"e661e4c7e14c5dc87758ea4d9b5401babb82189bacda53038bea1ba6461c78e2\": container with ID starting with e661e4c7e14c5dc87758ea4d9b5401babb82189bacda53038bea1ba6461c78e2 not found: 
ID does not exist" Jan 05 20:30:33 crc kubenswrapper[4754]: I0105 20:30:33.355025 4754 scope.go:117] "RemoveContainer" containerID="656578d7fab443cf4121cd672d934f808e8344cee7f6b69fe1cb8f2e788bb30b" Jan 05 20:30:33 crc kubenswrapper[4754]: E0105 20:30:33.355206 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"656578d7fab443cf4121cd672d934f808e8344cee7f6b69fe1cb8f2e788bb30b\": container with ID starting with 656578d7fab443cf4121cd672d934f808e8344cee7f6b69fe1cb8f2e788bb30b not found: ID does not exist" containerID="656578d7fab443cf4121cd672d934f808e8344cee7f6b69fe1cb8f2e788bb30b" Jan 05 20:30:33 crc kubenswrapper[4754]: I0105 20:30:33.355225 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"656578d7fab443cf4121cd672d934f808e8344cee7f6b69fe1cb8f2e788bb30b"} err="failed to get container status \"656578d7fab443cf4121cd672d934f808e8344cee7f6b69fe1cb8f2e788bb30b\": rpc error: code = NotFound desc = could not find container \"656578d7fab443cf4121cd672d934f808e8344cee7f6b69fe1cb8f2e788bb30b\": container with ID starting with 656578d7fab443cf4121cd672d934f808e8344cee7f6b69fe1cb8f2e788bb30b not found: ID does not exist" Jan 05 20:30:33 crc kubenswrapper[4754]: I0105 20:30:33.355240 4754 scope.go:117] "RemoveContainer" containerID="754f78f28f4a4a67df88f313dff2616b2934c7b378d47556a6c00c81be6af32b" Jan 05 20:30:33 crc kubenswrapper[4754]: E0105 20:30:33.355422 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"754f78f28f4a4a67df88f313dff2616b2934c7b378d47556a6c00c81be6af32b\": container with ID starting with 754f78f28f4a4a67df88f313dff2616b2934c7b378d47556a6c00c81be6af32b not found: ID does not exist" containerID="754f78f28f4a4a67df88f313dff2616b2934c7b378d47556a6c00c81be6af32b" Jan 05 20:30:33 crc kubenswrapper[4754]: I0105 20:30:33.355437 4754 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"754f78f28f4a4a67df88f313dff2616b2934c7b378d47556a6c00c81be6af32b"} err="failed to get container status \"754f78f28f4a4a67df88f313dff2616b2934c7b378d47556a6c00c81be6af32b\": rpc error: code = NotFound desc = could not find container \"754f78f28f4a4a67df88f313dff2616b2934c7b378d47556a6c00c81be6af32b\": container with ID starting with 754f78f28f4a4a67df88f313dff2616b2934c7b378d47556a6c00c81be6af32b not found: ID does not exist" Jan 05 20:30:33 crc kubenswrapper[4754]: I0105 20:30:33.603627 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-gwk7q" Jan 05 20:30:33 crc kubenswrapper[4754]: I0105 20:30:33.620363 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="320790ad-397e-4602-a40f-cb5c17f4f058" path="/var/lib/kubelet/pods/320790ad-397e-4602-a40f-cb5c17f4f058/volumes" Jan 05 20:30:33 crc kubenswrapper[4754]: I0105 20:30:33.700714 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1192f0c3-2df8-44ea-a767-937f965b46f3-combined-ca-bundle\") pod \"1192f0c3-2df8-44ea-a767-937f965b46f3\" (UID: \"1192f0c3-2df8-44ea-a767-937f965b46f3\") " Jan 05 20:30:33 crc kubenswrapper[4754]: I0105 20:30:33.700782 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1192f0c3-2df8-44ea-a767-937f965b46f3-scripts\") pod \"1192f0c3-2df8-44ea-a767-937f965b46f3\" (UID: \"1192f0c3-2df8-44ea-a767-937f965b46f3\") " Jan 05 20:30:33 crc kubenswrapper[4754]: I0105 20:30:33.700810 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74q4c\" (UniqueName: \"kubernetes.io/projected/1192f0c3-2df8-44ea-a767-937f965b46f3-kube-api-access-74q4c\") pod \"1192f0c3-2df8-44ea-a767-937f965b46f3\" (UID: 
\"1192f0c3-2df8-44ea-a767-937f965b46f3\") " Jan 05 20:30:33 crc kubenswrapper[4754]: I0105 20:30:33.701121 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1192f0c3-2df8-44ea-a767-937f965b46f3-config-data\") pod \"1192f0c3-2df8-44ea-a767-937f965b46f3\" (UID: \"1192f0c3-2df8-44ea-a767-937f965b46f3\") " Jan 05 20:30:33 crc kubenswrapper[4754]: I0105 20:30:33.711886 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1192f0c3-2df8-44ea-a767-937f965b46f3-scripts" (OuterVolumeSpecName: "scripts") pod "1192f0c3-2df8-44ea-a767-937f965b46f3" (UID: "1192f0c3-2df8-44ea-a767-937f965b46f3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:30:33 crc kubenswrapper[4754]: I0105 20:30:33.724365 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1192f0c3-2df8-44ea-a767-937f965b46f3-kube-api-access-74q4c" (OuterVolumeSpecName: "kube-api-access-74q4c") pod "1192f0c3-2df8-44ea-a767-937f965b46f3" (UID: "1192f0c3-2df8-44ea-a767-937f965b46f3"). InnerVolumeSpecName "kube-api-access-74q4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:30:33 crc kubenswrapper[4754]: I0105 20:30:33.740026 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1192f0c3-2df8-44ea-a767-937f965b46f3-config-data" (OuterVolumeSpecName: "config-data") pod "1192f0c3-2df8-44ea-a767-937f965b46f3" (UID: "1192f0c3-2df8-44ea-a767-937f965b46f3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:30:33 crc kubenswrapper[4754]: I0105 20:30:33.752592 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1192f0c3-2df8-44ea-a767-937f965b46f3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1192f0c3-2df8-44ea-a767-937f965b46f3" (UID: "1192f0c3-2df8-44ea-a767-937f965b46f3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:30:33 crc kubenswrapper[4754]: I0105 20:30:33.805392 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1192f0c3-2df8-44ea-a767-937f965b46f3-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 20:30:33 crc kubenswrapper[4754]: I0105 20:30:33.805446 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1192f0c3-2df8-44ea-a767-937f965b46f3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:30:33 crc kubenswrapper[4754]: I0105 20:30:33.805460 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1192f0c3-2df8-44ea-a767-937f965b46f3-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 20:30:33 crc kubenswrapper[4754]: I0105 20:30:33.805470 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74q4c\" (UniqueName: \"kubernetes.io/projected/1192f0c3-2df8-44ea-a767-937f965b46f3-kube-api-access-74q4c\") on node \"crc\" DevicePath \"\"" Jan 05 20:30:34 crc kubenswrapper[4754]: I0105 20:30:34.095952 4754 generic.go:334] "Generic (PLEG): container finished" podID="5b5ccffd-7569-4152-94b6-66f64e575513" containerID="d25af4d3c605a715cae78346b7afd96dc3ffb94ad95ebc53c5028c1c9acf80bf" exitCode=0 Jan 05 20:30:34 crc kubenswrapper[4754]: I0105 20:30:34.095995 4754 generic.go:334] "Generic (PLEG): container finished" podID="5b5ccffd-7569-4152-94b6-66f64e575513" 
containerID="beba2bc2cb011c48c0c552b949af2fcf6d8bd272067193205a3381d0ef194ceb" exitCode=2 Jan 05 20:30:34 crc kubenswrapper[4754]: I0105 20:30:34.096008 4754 generic.go:334] "Generic (PLEG): container finished" podID="5b5ccffd-7569-4152-94b6-66f64e575513" containerID="23f973fe9d1d539dba30fe6a937c1666b107724623abc4fb79b25656462dceb9" exitCode=0 Jan 05 20:30:34 crc kubenswrapper[4754]: I0105 20:30:34.096025 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b5ccffd-7569-4152-94b6-66f64e575513","Type":"ContainerDied","Data":"d25af4d3c605a715cae78346b7afd96dc3ffb94ad95ebc53c5028c1c9acf80bf"} Jan 05 20:30:34 crc kubenswrapper[4754]: I0105 20:30:34.096090 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b5ccffd-7569-4152-94b6-66f64e575513","Type":"ContainerDied","Data":"beba2bc2cb011c48c0c552b949af2fcf6d8bd272067193205a3381d0ef194ceb"} Jan 05 20:30:34 crc kubenswrapper[4754]: I0105 20:30:34.096103 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b5ccffd-7569-4152-94b6-66f64e575513","Type":"ContainerDied","Data":"23f973fe9d1d539dba30fe6a937c1666b107724623abc4fb79b25656462dceb9"} Jan 05 20:30:34 crc kubenswrapper[4754]: I0105 20:30:34.098703 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-gwk7q" event={"ID":"1192f0c3-2df8-44ea-a767-937f965b46f3","Type":"ContainerDied","Data":"83517df70a8738c069275f4ad270f59b3c1b6599519f3fdf002ba8e67f85a925"} Jan 05 20:30:34 crc kubenswrapper[4754]: I0105 20:30:34.098834 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83517df70a8738c069275f4ad270f59b3c1b6599519f3fdf002ba8e67f85a925" Jan 05 20:30:34 crc kubenswrapper[4754]: I0105 20:30:34.098768 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-gwk7q" Jan 05 20:30:34 crc kubenswrapper[4754]: I0105 20:30:34.220352 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 05 20:30:34 crc kubenswrapper[4754]: E0105 20:30:34.221261 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="320790ad-397e-4602-a40f-cb5c17f4f058" containerName="extract-content" Jan 05 20:30:34 crc kubenswrapper[4754]: I0105 20:30:34.221281 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="320790ad-397e-4602-a40f-cb5c17f4f058" containerName="extract-content" Jan 05 20:30:34 crc kubenswrapper[4754]: E0105 20:30:34.221325 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="320790ad-397e-4602-a40f-cb5c17f4f058" containerName="extract-utilities" Jan 05 20:30:34 crc kubenswrapper[4754]: I0105 20:30:34.221336 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="320790ad-397e-4602-a40f-cb5c17f4f058" containerName="extract-utilities" Jan 05 20:30:34 crc kubenswrapper[4754]: E0105 20:30:34.221380 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1192f0c3-2df8-44ea-a767-937f965b46f3" containerName="nova-cell0-conductor-db-sync" Jan 05 20:30:34 crc kubenswrapper[4754]: I0105 20:30:34.221391 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="1192f0c3-2df8-44ea-a767-937f965b46f3" containerName="nova-cell0-conductor-db-sync" Jan 05 20:30:34 crc kubenswrapper[4754]: E0105 20:30:34.221400 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="320790ad-397e-4602-a40f-cb5c17f4f058" containerName="registry-server" Jan 05 20:30:34 crc kubenswrapper[4754]: I0105 20:30:34.221407 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="320790ad-397e-4602-a40f-cb5c17f4f058" containerName="registry-server" Jan 05 20:30:34 crc kubenswrapper[4754]: I0105 20:30:34.221628 4754 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="320790ad-397e-4602-a40f-cb5c17f4f058" containerName="registry-server" Jan 05 20:30:34 crc kubenswrapper[4754]: I0105 20:30:34.221663 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="1192f0c3-2df8-44ea-a767-937f965b46f3" containerName="nova-cell0-conductor-db-sync" Jan 05 20:30:34 crc kubenswrapper[4754]: I0105 20:30:34.222708 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 05 20:30:34 crc kubenswrapper[4754]: I0105 20:30:34.225218 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-rmk5w" Jan 05 20:30:34 crc kubenswrapper[4754]: I0105 20:30:34.225458 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 05 20:30:34 crc kubenswrapper[4754]: I0105 20:30:34.253340 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 05 20:30:34 crc kubenswrapper[4754]: I0105 20:30:34.321736 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz97t\" (UniqueName: \"kubernetes.io/projected/0c8756e1-7250-4b46-9386-f4fb516eed62-kube-api-access-sz97t\") pod \"nova-cell0-conductor-0\" (UID: \"0c8756e1-7250-4b46-9386-f4fb516eed62\") " pod="openstack/nova-cell0-conductor-0" Jan 05 20:30:34 crc kubenswrapper[4754]: I0105 20:30:34.321906 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c8756e1-7250-4b46-9386-f4fb516eed62-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"0c8756e1-7250-4b46-9386-f4fb516eed62\") " pod="openstack/nova-cell0-conductor-0" Jan 05 20:30:34 crc kubenswrapper[4754]: I0105 20:30:34.322054 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0c8756e1-7250-4b46-9386-f4fb516eed62-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"0c8756e1-7250-4b46-9386-f4fb516eed62\") " pod="openstack/nova-cell0-conductor-0" Jan 05 20:30:34 crc kubenswrapper[4754]: I0105 20:30:34.425068 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c8756e1-7250-4b46-9386-f4fb516eed62-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"0c8756e1-7250-4b46-9386-f4fb516eed62\") " pod="openstack/nova-cell0-conductor-0" Jan 05 20:30:34 crc kubenswrapper[4754]: I0105 20:30:34.425152 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz97t\" (UniqueName: \"kubernetes.io/projected/0c8756e1-7250-4b46-9386-f4fb516eed62-kube-api-access-sz97t\") pod \"nova-cell0-conductor-0\" (UID: \"0c8756e1-7250-4b46-9386-f4fb516eed62\") " pod="openstack/nova-cell0-conductor-0" Jan 05 20:30:34 crc kubenswrapper[4754]: I0105 20:30:34.425276 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c8756e1-7250-4b46-9386-f4fb516eed62-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"0c8756e1-7250-4b46-9386-f4fb516eed62\") " pod="openstack/nova-cell0-conductor-0" Jan 05 20:30:34 crc kubenswrapper[4754]: I0105 20:30:34.431000 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c8756e1-7250-4b46-9386-f4fb516eed62-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"0c8756e1-7250-4b46-9386-f4fb516eed62\") " pod="openstack/nova-cell0-conductor-0" Jan 05 20:30:34 crc kubenswrapper[4754]: I0105 20:30:34.431900 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c8756e1-7250-4b46-9386-f4fb516eed62-config-data\") pod \"nova-cell0-conductor-0\" (UID: 
\"0c8756e1-7250-4b46-9386-f4fb516eed62\") " pod="openstack/nova-cell0-conductor-0" Jan 05 20:30:34 crc kubenswrapper[4754]: I0105 20:30:34.440632 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz97t\" (UniqueName: \"kubernetes.io/projected/0c8756e1-7250-4b46-9386-f4fb516eed62-kube-api-access-sz97t\") pod \"nova-cell0-conductor-0\" (UID: \"0c8756e1-7250-4b46-9386-f4fb516eed62\") " pod="openstack/nova-cell0-conductor-0" Jan 05 20:30:34 crc kubenswrapper[4754]: I0105 20:30:34.611284 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 05 20:30:35 crc kubenswrapper[4754]: I0105 20:30:35.144236 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 05 20:30:36 crc kubenswrapper[4754]: I0105 20:30:36.123487 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"0c8756e1-7250-4b46-9386-f4fb516eed62","Type":"ContainerStarted","Data":"30494096adcdce414a512b7dcc1a3c0048ebeaba78edd0aad9424e5035fdfe40"} Jan 05 20:30:36 crc kubenswrapper[4754]: I0105 20:30:36.123746 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"0c8756e1-7250-4b46-9386-f4fb516eed62","Type":"ContainerStarted","Data":"0e2ed3a020754741ac64e313b7d17ae2f5924e050a36b8d950048f5c65862984"} Jan 05 20:30:36 crc kubenswrapper[4754]: I0105 20:30:36.123768 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 05 20:30:36 crc kubenswrapper[4754]: I0105 20:30:36.142809 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.142783056 podStartE2EDuration="2.142783056s" podCreationTimestamp="2026-01-05 20:30:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-05 20:30:36.142435077 +0000 UTC m=+1522.851618951" watchObservedRunningTime="2026-01-05 20:30:36.142783056 +0000 UTC m=+1522.851966930" Jan 05 20:30:37 crc kubenswrapper[4754]: I0105 20:30:37.144485 4754 generic.go:334] "Generic (PLEG): container finished" podID="5b5ccffd-7569-4152-94b6-66f64e575513" containerID="3f77168eac9fed1537330bac3854597fc9ced4d4d835f93ffc65240548873945" exitCode=0 Jan 05 20:30:37 crc kubenswrapper[4754]: I0105 20:30:37.146409 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b5ccffd-7569-4152-94b6-66f64e575513","Type":"ContainerDied","Data":"3f77168eac9fed1537330bac3854597fc9ced4d4d835f93ffc65240548873945"} Jan 05 20:30:37 crc kubenswrapper[4754]: I0105 20:30:37.499531 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 05 20:30:37 crc kubenswrapper[4754]: I0105 20:30:37.620403 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzr9b\" (UniqueName: \"kubernetes.io/projected/5b5ccffd-7569-4152-94b6-66f64e575513-kube-api-access-pzr9b\") pod \"5b5ccffd-7569-4152-94b6-66f64e575513\" (UID: \"5b5ccffd-7569-4152-94b6-66f64e575513\") " Jan 05 20:30:37 crc kubenswrapper[4754]: I0105 20:30:37.620491 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b5ccffd-7569-4152-94b6-66f64e575513-run-httpd\") pod \"5b5ccffd-7569-4152-94b6-66f64e575513\" (UID: \"5b5ccffd-7569-4152-94b6-66f64e575513\") " Jan 05 20:30:37 crc kubenswrapper[4754]: I0105 20:30:37.620586 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b5ccffd-7569-4152-94b6-66f64e575513-scripts\") pod \"5b5ccffd-7569-4152-94b6-66f64e575513\" (UID: \"5b5ccffd-7569-4152-94b6-66f64e575513\") " Jan 05 20:30:37 crc kubenswrapper[4754]: I0105 
20:30:37.620613 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b5ccffd-7569-4152-94b6-66f64e575513-config-data\") pod \"5b5ccffd-7569-4152-94b6-66f64e575513\" (UID: \"5b5ccffd-7569-4152-94b6-66f64e575513\") " Jan 05 20:30:37 crc kubenswrapper[4754]: I0105 20:30:37.620769 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5b5ccffd-7569-4152-94b6-66f64e575513-sg-core-conf-yaml\") pod \"5b5ccffd-7569-4152-94b6-66f64e575513\" (UID: \"5b5ccffd-7569-4152-94b6-66f64e575513\") " Jan 05 20:30:37 crc kubenswrapper[4754]: I0105 20:30:37.620910 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b5ccffd-7569-4152-94b6-66f64e575513-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5b5ccffd-7569-4152-94b6-66f64e575513" (UID: "5b5ccffd-7569-4152-94b6-66f64e575513"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:30:37 crc kubenswrapper[4754]: I0105 20:30:37.620951 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b5ccffd-7569-4152-94b6-66f64e575513-combined-ca-bundle\") pod \"5b5ccffd-7569-4152-94b6-66f64e575513\" (UID: \"5b5ccffd-7569-4152-94b6-66f64e575513\") " Jan 05 20:30:37 crc kubenswrapper[4754]: I0105 20:30:37.621023 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b5ccffd-7569-4152-94b6-66f64e575513-log-httpd\") pod \"5b5ccffd-7569-4152-94b6-66f64e575513\" (UID: \"5b5ccffd-7569-4152-94b6-66f64e575513\") " Jan 05 20:30:37 crc kubenswrapper[4754]: I0105 20:30:37.621590 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b5ccffd-7569-4152-94b6-66f64e575513-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5b5ccffd-7569-4152-94b6-66f64e575513" (UID: "5b5ccffd-7569-4152-94b6-66f64e575513"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:30:37 crc kubenswrapper[4754]: I0105 20:30:37.621664 4754 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b5ccffd-7569-4152-94b6-66f64e575513-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 05 20:30:37 crc kubenswrapper[4754]: I0105 20:30:37.628317 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b5ccffd-7569-4152-94b6-66f64e575513-scripts" (OuterVolumeSpecName: "scripts") pod "5b5ccffd-7569-4152-94b6-66f64e575513" (UID: "5b5ccffd-7569-4152-94b6-66f64e575513"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:30:37 crc kubenswrapper[4754]: I0105 20:30:37.628577 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b5ccffd-7569-4152-94b6-66f64e575513-kube-api-access-pzr9b" (OuterVolumeSpecName: "kube-api-access-pzr9b") pod "5b5ccffd-7569-4152-94b6-66f64e575513" (UID: "5b5ccffd-7569-4152-94b6-66f64e575513"). InnerVolumeSpecName "kube-api-access-pzr9b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:30:37 crc kubenswrapper[4754]: I0105 20:30:37.656756 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b5ccffd-7569-4152-94b6-66f64e575513-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5b5ccffd-7569-4152-94b6-66f64e575513" (UID: "5b5ccffd-7569-4152-94b6-66f64e575513"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:30:37 crc kubenswrapper[4754]: I0105 20:30:37.724124 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b5ccffd-7569-4152-94b6-66f64e575513-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 20:30:37 crc kubenswrapper[4754]: I0105 20:30:37.724363 4754 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5b5ccffd-7569-4152-94b6-66f64e575513-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 05 20:30:37 crc kubenswrapper[4754]: I0105 20:30:37.724429 4754 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b5ccffd-7569-4152-94b6-66f64e575513-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 05 20:30:37 crc kubenswrapper[4754]: I0105 20:30:37.724527 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzr9b\" (UniqueName: \"kubernetes.io/projected/5b5ccffd-7569-4152-94b6-66f64e575513-kube-api-access-pzr9b\") on node 
\"crc\" DevicePath \"\"" Jan 05 20:30:37 crc kubenswrapper[4754]: I0105 20:30:37.749008 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b5ccffd-7569-4152-94b6-66f64e575513-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5b5ccffd-7569-4152-94b6-66f64e575513" (UID: "5b5ccffd-7569-4152-94b6-66f64e575513"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:30:37 crc kubenswrapper[4754]: I0105 20:30:37.750925 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b5ccffd-7569-4152-94b6-66f64e575513-config-data" (OuterVolumeSpecName: "config-data") pod "5b5ccffd-7569-4152-94b6-66f64e575513" (UID: "5b5ccffd-7569-4152-94b6-66f64e575513"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:30:37 crc kubenswrapper[4754]: I0105 20:30:37.827158 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b5ccffd-7569-4152-94b6-66f64e575513-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 20:30:37 crc kubenswrapper[4754]: I0105 20:30:37.827212 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b5ccffd-7569-4152-94b6-66f64e575513-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:30:38 crc kubenswrapper[4754]: I0105 20:30:38.159700 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b5ccffd-7569-4152-94b6-66f64e575513","Type":"ContainerDied","Data":"ce111058c144ec301ca3bb15ecc5ea2dc83e4ee11f84bea73fc49a5378d38f92"} Jan 05 20:30:38 crc kubenswrapper[4754]: I0105 20:30:38.159760 4754 scope.go:117] "RemoveContainer" containerID="d25af4d3c605a715cae78346b7afd96dc3ffb94ad95ebc53c5028c1c9acf80bf" Jan 05 20:30:38 crc kubenswrapper[4754]: I0105 20:30:38.159836 4754 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 05 20:30:38 crc kubenswrapper[4754]: I0105 20:30:38.183951 4754 scope.go:117] "RemoveContainer" containerID="beba2bc2cb011c48c0c552b949af2fcf6d8bd272067193205a3381d0ef194ceb" Jan 05 20:30:38 crc kubenswrapper[4754]: I0105 20:30:38.217859 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:30:38 crc kubenswrapper[4754]: I0105 20:30:38.228366 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:30:38 crc kubenswrapper[4754]: I0105 20:30:38.229320 4754 scope.go:117] "RemoveContainer" containerID="23f973fe9d1d539dba30fe6a937c1666b107724623abc4fb79b25656462dceb9" Jan 05 20:30:38 crc kubenswrapper[4754]: I0105 20:30:38.257129 4754 scope.go:117] "RemoveContainer" containerID="3f77168eac9fed1537330bac3854597fc9ced4d4d835f93ffc65240548873945" Jan 05 20:30:38 crc kubenswrapper[4754]: I0105 20:30:38.258561 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:30:38 crc kubenswrapper[4754]: E0105 20:30:38.260084 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b5ccffd-7569-4152-94b6-66f64e575513" containerName="proxy-httpd" Jan 05 20:30:38 crc kubenswrapper[4754]: I0105 20:30:38.260208 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b5ccffd-7569-4152-94b6-66f64e575513" containerName="proxy-httpd" Jan 05 20:30:38 crc kubenswrapper[4754]: E0105 20:30:38.260338 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b5ccffd-7569-4152-94b6-66f64e575513" containerName="ceilometer-notification-agent" Jan 05 20:30:38 crc kubenswrapper[4754]: I0105 20:30:38.260427 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b5ccffd-7569-4152-94b6-66f64e575513" containerName="ceilometer-notification-agent" Jan 05 20:30:38 crc kubenswrapper[4754]: E0105 20:30:38.260537 4754 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="5b5ccffd-7569-4152-94b6-66f64e575513" containerName="ceilometer-central-agent" Jan 05 20:30:38 crc kubenswrapper[4754]: I0105 20:30:38.260663 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b5ccffd-7569-4152-94b6-66f64e575513" containerName="ceilometer-central-agent" Jan 05 20:30:38 crc kubenswrapper[4754]: E0105 20:30:38.260798 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b5ccffd-7569-4152-94b6-66f64e575513" containerName="sg-core" Jan 05 20:30:38 crc kubenswrapper[4754]: I0105 20:30:38.260881 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b5ccffd-7569-4152-94b6-66f64e575513" containerName="sg-core" Jan 05 20:30:38 crc kubenswrapper[4754]: I0105 20:30:38.261328 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b5ccffd-7569-4152-94b6-66f64e575513" containerName="ceilometer-central-agent" Jan 05 20:30:38 crc kubenswrapper[4754]: I0105 20:30:38.261448 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b5ccffd-7569-4152-94b6-66f64e575513" containerName="proxy-httpd" Jan 05 20:30:38 crc kubenswrapper[4754]: I0105 20:30:38.261596 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b5ccffd-7569-4152-94b6-66f64e575513" containerName="ceilometer-notification-agent" Jan 05 20:30:38 crc kubenswrapper[4754]: I0105 20:30:38.261690 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b5ccffd-7569-4152-94b6-66f64e575513" containerName="sg-core" Jan 05 20:30:38 crc kubenswrapper[4754]: I0105 20:30:38.276857 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 05 20:30:38 crc kubenswrapper[4754]: I0105 20:30:38.285514 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:30:38 crc kubenswrapper[4754]: I0105 20:30:38.294158 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 05 20:30:38 crc kubenswrapper[4754]: I0105 20:30:38.294682 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 05 20:30:38 crc kubenswrapper[4754]: I0105 20:30:38.340341 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a883f418-09dd-46df-9bcd-ec4cae97b846-run-httpd\") pod \"ceilometer-0\" (UID: \"a883f418-09dd-46df-9bcd-ec4cae97b846\") " pod="openstack/ceilometer-0" Jan 05 20:30:38 crc kubenswrapper[4754]: I0105 20:30:38.340678 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a883f418-09dd-46df-9bcd-ec4cae97b846-scripts\") pod \"ceilometer-0\" (UID: \"a883f418-09dd-46df-9bcd-ec4cae97b846\") " pod="openstack/ceilometer-0" Jan 05 20:30:38 crc kubenswrapper[4754]: I0105 20:30:38.340832 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a883f418-09dd-46df-9bcd-ec4cae97b846-config-data\") pod \"ceilometer-0\" (UID: \"a883f418-09dd-46df-9bcd-ec4cae97b846\") " pod="openstack/ceilometer-0" Jan 05 20:30:38 crc kubenswrapper[4754]: I0105 20:30:38.340999 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjn47\" (UniqueName: \"kubernetes.io/projected/a883f418-09dd-46df-9bcd-ec4cae97b846-kube-api-access-sjn47\") pod \"ceilometer-0\" (UID: \"a883f418-09dd-46df-9bcd-ec4cae97b846\") " 
pod="openstack/ceilometer-0" Jan 05 20:30:38 crc kubenswrapper[4754]: I0105 20:30:38.341083 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a883f418-09dd-46df-9bcd-ec4cae97b846-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a883f418-09dd-46df-9bcd-ec4cae97b846\") " pod="openstack/ceilometer-0" Jan 05 20:30:38 crc kubenswrapper[4754]: I0105 20:30:38.341161 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a883f418-09dd-46df-9bcd-ec4cae97b846-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a883f418-09dd-46df-9bcd-ec4cae97b846\") " pod="openstack/ceilometer-0" Jan 05 20:30:38 crc kubenswrapper[4754]: I0105 20:30:38.341238 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a883f418-09dd-46df-9bcd-ec4cae97b846-log-httpd\") pod \"ceilometer-0\" (UID: \"a883f418-09dd-46df-9bcd-ec4cae97b846\") " pod="openstack/ceilometer-0" Jan 05 20:30:38 crc kubenswrapper[4754]: I0105 20:30:38.444466 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a883f418-09dd-46df-9bcd-ec4cae97b846-config-data\") pod \"ceilometer-0\" (UID: \"a883f418-09dd-46df-9bcd-ec4cae97b846\") " pod="openstack/ceilometer-0" Jan 05 20:30:38 crc kubenswrapper[4754]: I0105 20:30:38.444568 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjn47\" (UniqueName: \"kubernetes.io/projected/a883f418-09dd-46df-9bcd-ec4cae97b846-kube-api-access-sjn47\") pod \"ceilometer-0\" (UID: \"a883f418-09dd-46df-9bcd-ec4cae97b846\") " pod="openstack/ceilometer-0" Jan 05 20:30:38 crc kubenswrapper[4754]: I0105 20:30:38.444597 4754 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a883f418-09dd-46df-9bcd-ec4cae97b846-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a883f418-09dd-46df-9bcd-ec4cae97b846\") " pod="openstack/ceilometer-0" Jan 05 20:30:38 crc kubenswrapper[4754]: I0105 20:30:38.444978 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a883f418-09dd-46df-9bcd-ec4cae97b846-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a883f418-09dd-46df-9bcd-ec4cae97b846\") " pod="openstack/ceilometer-0" Jan 05 20:30:38 crc kubenswrapper[4754]: I0105 20:30:38.445625 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a883f418-09dd-46df-9bcd-ec4cae97b846-log-httpd\") pod \"ceilometer-0\" (UID: \"a883f418-09dd-46df-9bcd-ec4cae97b846\") " pod="openstack/ceilometer-0" Jan 05 20:30:38 crc kubenswrapper[4754]: I0105 20:30:38.445014 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a883f418-09dd-46df-9bcd-ec4cae97b846-log-httpd\") pod \"ceilometer-0\" (UID: \"a883f418-09dd-46df-9bcd-ec4cae97b846\") " pod="openstack/ceilometer-0" Jan 05 20:30:38 crc kubenswrapper[4754]: I0105 20:30:38.445785 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a883f418-09dd-46df-9bcd-ec4cae97b846-run-httpd\") pod \"ceilometer-0\" (UID: \"a883f418-09dd-46df-9bcd-ec4cae97b846\") " pod="openstack/ceilometer-0" Jan 05 20:30:38 crc kubenswrapper[4754]: I0105 20:30:38.445890 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a883f418-09dd-46df-9bcd-ec4cae97b846-scripts\") pod \"ceilometer-0\" (UID: \"a883f418-09dd-46df-9bcd-ec4cae97b846\") " pod="openstack/ceilometer-0" Jan 05 20:30:38 crc 
kubenswrapper[4754]: I0105 20:30:38.446093 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a883f418-09dd-46df-9bcd-ec4cae97b846-run-httpd\") pod \"ceilometer-0\" (UID: \"a883f418-09dd-46df-9bcd-ec4cae97b846\") " pod="openstack/ceilometer-0" Jan 05 20:30:38 crc kubenswrapper[4754]: I0105 20:30:38.450179 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a883f418-09dd-46df-9bcd-ec4cae97b846-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a883f418-09dd-46df-9bcd-ec4cae97b846\") " pod="openstack/ceilometer-0" Jan 05 20:30:38 crc kubenswrapper[4754]: I0105 20:30:38.458769 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a883f418-09dd-46df-9bcd-ec4cae97b846-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a883f418-09dd-46df-9bcd-ec4cae97b846\") " pod="openstack/ceilometer-0" Jan 05 20:30:38 crc kubenswrapper[4754]: I0105 20:30:38.461918 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a883f418-09dd-46df-9bcd-ec4cae97b846-scripts\") pod \"ceilometer-0\" (UID: \"a883f418-09dd-46df-9bcd-ec4cae97b846\") " pod="openstack/ceilometer-0" Jan 05 20:30:38 crc kubenswrapper[4754]: I0105 20:30:38.462948 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a883f418-09dd-46df-9bcd-ec4cae97b846-config-data\") pod \"ceilometer-0\" (UID: \"a883f418-09dd-46df-9bcd-ec4cae97b846\") " pod="openstack/ceilometer-0" Jan 05 20:30:38 crc kubenswrapper[4754]: I0105 20:30:38.469658 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjn47\" (UniqueName: \"kubernetes.io/projected/a883f418-09dd-46df-9bcd-ec4cae97b846-kube-api-access-sjn47\") pod \"ceilometer-0\" (UID: 
\"a883f418-09dd-46df-9bcd-ec4cae97b846\") " pod="openstack/ceilometer-0" Jan 05 20:30:38 crc kubenswrapper[4754]: I0105 20:30:38.628668 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 05 20:30:39 crc kubenswrapper[4754]: I0105 20:30:39.129407 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:30:39 crc kubenswrapper[4754]: I0105 20:30:39.170937 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a883f418-09dd-46df-9bcd-ec4cae97b846","Type":"ContainerStarted","Data":"ff77a8b2e932245b540fd9ede7abced20ede6161d41723b4e23cd05ac2769e09"} Jan 05 20:30:39 crc kubenswrapper[4754]: I0105 20:30:39.619098 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b5ccffd-7569-4152-94b6-66f64e575513" path="/var/lib/kubelet/pods/5b5ccffd-7569-4152-94b6-66f64e575513/volumes" Jan 05 20:30:40 crc kubenswrapper[4754]: I0105 20:30:40.182480 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a883f418-09dd-46df-9bcd-ec4cae97b846","Type":"ContainerStarted","Data":"54eb419bb2f13741b897510cb5a3e434ded04e4fa83a427d4a1d6dfd6c3c9ad9"} Jan 05 20:30:41 crc kubenswrapper[4754]: I0105 20:30:41.199041 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a883f418-09dd-46df-9bcd-ec4cae97b846","Type":"ContainerStarted","Data":"fe1b9f16e609a047bc992cb6663dc45ee0488256363ce255fe0bd7799ba290c1"} Jan 05 20:30:42 crc kubenswrapper[4754]: I0105 20:30:42.210779 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a883f418-09dd-46df-9bcd-ec4cae97b846","Type":"ContainerStarted","Data":"1685407605c5cb5dfb16712412bd3f24797ed058587d314bda057731c7306a87"} Jan 05 20:30:44 crc kubenswrapper[4754]: I0105 20:30:44.239017 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"a883f418-09dd-46df-9bcd-ec4cae97b846","Type":"ContainerStarted","Data":"b4769049bfe422b9346c6250e2461c87f02bbdefd459f8fa528769dc40445c81"} Jan 05 20:30:44 crc kubenswrapper[4754]: I0105 20:30:44.239483 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 05 20:30:44 crc kubenswrapper[4754]: I0105 20:30:44.261974 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.14671278 podStartE2EDuration="6.261951084s" podCreationTimestamp="2026-01-05 20:30:38 +0000 UTC" firstStartedPulling="2026-01-05 20:30:39.130430715 +0000 UTC m=+1525.839614589" lastFinishedPulling="2026-01-05 20:30:43.245668979 +0000 UTC m=+1529.954852893" observedRunningTime="2026-01-05 20:30:44.257239191 +0000 UTC m=+1530.966423065" watchObservedRunningTime="2026-01-05 20:30:44.261951084 +0000 UTC m=+1530.971134958" Jan 05 20:30:44 crc kubenswrapper[4754]: I0105 20:30:44.647171 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.161139 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-mgrzb"] Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.162597 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mgrzb" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.165771 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.165847 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.219315 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89edcd71-f0ac-4bf6-a1cf-9aac0d041906-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-mgrzb\" (UID: \"89edcd71-f0ac-4bf6-a1cf-9aac0d041906\") " pod="openstack/nova-cell0-cell-mapping-mgrzb" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.219416 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89edcd71-f0ac-4bf6-a1cf-9aac0d041906-scripts\") pod \"nova-cell0-cell-mapping-mgrzb\" (UID: \"89edcd71-f0ac-4bf6-a1cf-9aac0d041906\") " pod="openstack/nova-cell0-cell-mapping-mgrzb" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.219456 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7njr7\" (UniqueName: \"kubernetes.io/projected/89edcd71-f0ac-4bf6-a1cf-9aac0d041906-kube-api-access-7njr7\") pod \"nova-cell0-cell-mapping-mgrzb\" (UID: \"89edcd71-f0ac-4bf6-a1cf-9aac0d041906\") " pod="openstack/nova-cell0-cell-mapping-mgrzb" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.219615 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89edcd71-f0ac-4bf6-a1cf-9aac0d041906-config-data\") pod \"nova-cell0-cell-mapping-mgrzb\" (UID: \"89edcd71-f0ac-4bf6-a1cf-9aac0d041906\") 
" pod="openstack/nova-cell0-cell-mapping-mgrzb" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.220888 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-mgrzb"] Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.321441 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89edcd71-f0ac-4bf6-a1cf-9aac0d041906-config-data\") pod \"nova-cell0-cell-mapping-mgrzb\" (UID: \"89edcd71-f0ac-4bf6-a1cf-9aac0d041906\") " pod="openstack/nova-cell0-cell-mapping-mgrzb" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.321579 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89edcd71-f0ac-4bf6-a1cf-9aac0d041906-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-mgrzb\" (UID: \"89edcd71-f0ac-4bf6-a1cf-9aac0d041906\") " pod="openstack/nova-cell0-cell-mapping-mgrzb" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.321632 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89edcd71-f0ac-4bf6-a1cf-9aac0d041906-scripts\") pod \"nova-cell0-cell-mapping-mgrzb\" (UID: \"89edcd71-f0ac-4bf6-a1cf-9aac0d041906\") " pod="openstack/nova-cell0-cell-mapping-mgrzb" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.321653 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7njr7\" (UniqueName: \"kubernetes.io/projected/89edcd71-f0ac-4bf6-a1cf-9aac0d041906-kube-api-access-7njr7\") pod \"nova-cell0-cell-mapping-mgrzb\" (UID: \"89edcd71-f0ac-4bf6-a1cf-9aac0d041906\") " pod="openstack/nova-cell0-cell-mapping-mgrzb" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.328963 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/89edcd71-f0ac-4bf6-a1cf-9aac0d041906-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-mgrzb\" (UID: \"89edcd71-f0ac-4bf6-a1cf-9aac0d041906\") " pod="openstack/nova-cell0-cell-mapping-mgrzb" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.328999 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89edcd71-f0ac-4bf6-a1cf-9aac0d041906-scripts\") pod \"nova-cell0-cell-mapping-mgrzb\" (UID: \"89edcd71-f0ac-4bf6-a1cf-9aac0d041906\") " pod="openstack/nova-cell0-cell-mapping-mgrzb" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.329536 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89edcd71-f0ac-4bf6-a1cf-9aac0d041906-config-data\") pod \"nova-cell0-cell-mapping-mgrzb\" (UID: \"89edcd71-f0ac-4bf6-a1cf-9aac0d041906\") " pod="openstack/nova-cell0-cell-mapping-mgrzb" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.350715 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7njr7\" (UniqueName: \"kubernetes.io/projected/89edcd71-f0ac-4bf6-a1cf-9aac0d041906-kube-api-access-7njr7\") pod \"nova-cell0-cell-mapping-mgrzb\" (UID: \"89edcd71-f0ac-4bf6-a1cf-9aac0d041906\") " pod="openstack/nova-cell0-cell-mapping-mgrzb" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.376086 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.378552 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.383631 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.395661 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.428564 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ba5c5f3-19fb-4bd8-b457-976338fd6db4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2ba5c5f3-19fb-4bd8-b457-976338fd6db4\") " pod="openstack/nova-api-0" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.428692 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ba5c5f3-19fb-4bd8-b457-976338fd6db4-logs\") pod \"nova-api-0\" (UID: \"2ba5c5f3-19fb-4bd8-b457-976338fd6db4\") " pod="openstack/nova-api-0" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.428728 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stqww\" (UniqueName: \"kubernetes.io/projected/2ba5c5f3-19fb-4bd8-b457-976338fd6db4-kube-api-access-stqww\") pod \"nova-api-0\" (UID: \"2ba5c5f3-19fb-4bd8-b457-976338fd6db4\") " pod="openstack/nova-api-0" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.428745 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ba5c5f3-19fb-4bd8-b457-976338fd6db4-config-data\") pod \"nova-api-0\" (UID: \"2ba5c5f3-19fb-4bd8-b457-976338fd6db4\") " pod="openstack/nova-api-0" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.484633 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mgrzb" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.495107 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.496750 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.503398 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.520466 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.522813 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.528928 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.530514 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stqww\" (UniqueName: \"kubernetes.io/projected/2ba5c5f3-19fb-4bd8-b457-976338fd6db4-kube-api-access-stqww\") pod \"nova-api-0\" (UID: \"2ba5c5f3-19fb-4bd8-b457-976338fd6db4\") " pod="openstack/nova-api-0" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.530556 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ba5c5f3-19fb-4bd8-b457-976338fd6db4-config-data\") pod \"nova-api-0\" (UID: \"2ba5c5f3-19fb-4bd8-b457-976338fd6db4\") " pod="openstack/nova-api-0" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.530632 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/07fcdc38-02c8-43b4-908f-dc32a017584c-config-data\") pod \"nova-scheduler-0\" (UID: \"07fcdc38-02c8-43b4-908f-dc32a017584c\") " pod="openstack/nova-scheduler-0" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.530711 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2n5k\" (UniqueName: \"kubernetes.io/projected/07fcdc38-02c8-43b4-908f-dc32a017584c-kube-api-access-f2n5k\") pod \"nova-scheduler-0\" (UID: \"07fcdc38-02c8-43b4-908f-dc32a017584c\") " pod="openstack/nova-scheduler-0" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.530769 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ba5c5f3-19fb-4bd8-b457-976338fd6db4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2ba5c5f3-19fb-4bd8-b457-976338fd6db4\") " pod="openstack/nova-api-0" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.530858 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07fcdc38-02c8-43b4-908f-dc32a017584c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"07fcdc38-02c8-43b4-908f-dc32a017584c\") " pod="openstack/nova-scheduler-0" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.530936 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ba5c5f3-19fb-4bd8-b457-976338fd6db4-logs\") pod \"nova-api-0\" (UID: \"2ba5c5f3-19fb-4bd8-b457-976338fd6db4\") " pod="openstack/nova-api-0" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.530970 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.531444 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/2ba5c5f3-19fb-4bd8-b457-976338fd6db4-logs\") pod \"nova-api-0\" (UID: \"2ba5c5f3-19fb-4bd8-b457-976338fd6db4\") " pod="openstack/nova-api-0" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.537900 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ba5c5f3-19fb-4bd8-b457-976338fd6db4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2ba5c5f3-19fb-4bd8-b457-976338fd6db4\") " pod="openstack/nova-api-0" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.537910 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ba5c5f3-19fb-4bd8-b457-976338fd6db4-config-data\") pod \"nova-api-0\" (UID: \"2ba5c5f3-19fb-4bd8-b457-976338fd6db4\") " pod="openstack/nova-api-0" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.548514 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.594967 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stqww\" (UniqueName: \"kubernetes.io/projected/2ba5c5f3-19fb-4bd8-b457-976338fd6db4-kube-api-access-stqww\") pod \"nova-api-0\" (UID: \"2ba5c5f3-19fb-4bd8-b457-976338fd6db4\") " pod="openstack/nova-api-0" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.643520 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07fcdc38-02c8-43b4-908f-dc32a017584c-config-data\") pod \"nova-scheduler-0\" (UID: \"07fcdc38-02c8-43b4-908f-dc32a017584c\") " pod="openstack/nova-scheduler-0" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.643633 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae0814e6-2a69-4ee4-979b-3f2b8ffdb941-logs\") pod \"nova-metadata-0\" 
(UID: \"ae0814e6-2a69-4ee4-979b-3f2b8ffdb941\") " pod="openstack/nova-metadata-0" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.643718 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2n5k\" (UniqueName: \"kubernetes.io/projected/07fcdc38-02c8-43b4-908f-dc32a017584c-kube-api-access-f2n5k\") pod \"nova-scheduler-0\" (UID: \"07fcdc38-02c8-43b4-908f-dc32a017584c\") " pod="openstack/nova-scheduler-0" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.643764 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae0814e6-2a69-4ee4-979b-3f2b8ffdb941-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ae0814e6-2a69-4ee4-979b-3f2b8ffdb941\") " pod="openstack/nova-metadata-0" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.643933 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07fcdc38-02c8-43b4-908f-dc32a017584c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"07fcdc38-02c8-43b4-908f-dc32a017584c\") " pod="openstack/nova-scheduler-0" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.643958 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbc64\" (UniqueName: \"kubernetes.io/projected/ae0814e6-2a69-4ee4-979b-3f2b8ffdb941-kube-api-access-hbc64\") pod \"nova-metadata-0\" (UID: \"ae0814e6-2a69-4ee4-979b-3f2b8ffdb941\") " pod="openstack/nova-metadata-0" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.643986 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae0814e6-2a69-4ee4-979b-3f2b8ffdb941-config-data\") pod \"nova-metadata-0\" (UID: \"ae0814e6-2a69-4ee4-979b-3f2b8ffdb941\") " pod="openstack/nova-metadata-0" Jan 05 
20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.657824 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07fcdc38-02c8-43b4-908f-dc32a017584c-config-data\") pod \"nova-scheduler-0\" (UID: \"07fcdc38-02c8-43b4-908f-dc32a017584c\") " pod="openstack/nova-scheduler-0" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.662410 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07fcdc38-02c8-43b4-908f-dc32a017584c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"07fcdc38-02c8-43b4-908f-dc32a017584c\") " pod="openstack/nova-scheduler-0" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.662466 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.663946 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.666637 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.698353 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7877d89589-b45w9"] Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.700336 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7877d89589-b45w9" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.732930 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2n5k\" (UniqueName: \"kubernetes.io/projected/07fcdc38-02c8-43b4-908f-dc32a017584c-kube-api-access-f2n5k\") pod \"nova-scheduler-0\" (UID: \"07fcdc38-02c8-43b4-908f-dc32a017584c\") " pod="openstack/nova-scheduler-0" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.747942 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.748999 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fedfab08-bf50-4d7c-8f04-583679e20d59-config\") pod \"dnsmasq-dns-7877d89589-b45w9\" (UID: \"fedfab08-bf50-4d7c-8f04-583679e20d59\") " pod="openstack/dnsmasq-dns-7877d89589-b45w9" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.749181 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fedfab08-bf50-4d7c-8f04-583679e20d59-ovsdbserver-nb\") pod \"dnsmasq-dns-7877d89589-b45w9\" (UID: \"fedfab08-bf50-4d7c-8f04-583679e20d59\") " pod="openstack/dnsmasq-dns-7877d89589-b45w9" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.749281 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v57kg\" (UniqueName: \"kubernetes.io/projected/3e9d5917-d36c-4f58-8154-787b4a799e88-kube-api-access-v57kg\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e9d5917-d36c-4f58-8154-787b4a799e88\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.749409 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/fedfab08-bf50-4d7c-8f04-583679e20d59-dns-svc\") pod \"dnsmasq-dns-7877d89589-b45w9\" (UID: \"fedfab08-bf50-4d7c-8f04-583679e20d59\") " pod="openstack/dnsmasq-dns-7877d89589-b45w9" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.749543 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae0814e6-2a69-4ee4-979b-3f2b8ffdb941-logs\") pod \"nova-metadata-0\" (UID: \"ae0814e6-2a69-4ee4-979b-3f2b8ffdb941\") " pod="openstack/nova-metadata-0" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.751121 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fedfab08-bf50-4d7c-8f04-583679e20d59-dns-swift-storage-0\") pod \"dnsmasq-dns-7877d89589-b45w9\" (UID: \"fedfab08-bf50-4d7c-8f04-583679e20d59\") " pod="openstack/dnsmasq-dns-7877d89589-b45w9" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.751341 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae0814e6-2a69-4ee4-979b-3f2b8ffdb941-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ae0814e6-2a69-4ee4-979b-3f2b8ffdb941\") " pod="openstack/nova-metadata-0" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.751581 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.751580 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e9d5917-d36c-4f58-8154-787b4a799e88-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e9d5917-d36c-4f58-8154-787b4a799e88\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.751692 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fedfab08-bf50-4d7c-8f04-583679e20d59-ovsdbserver-sb\") pod \"dnsmasq-dns-7877d89589-b45w9\" (UID: \"fedfab08-bf50-4d7c-8f04-583679e20d59\") " pod="openstack/dnsmasq-dns-7877d89589-b45w9" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.751767 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbc64\" (UniqueName: \"kubernetes.io/projected/ae0814e6-2a69-4ee4-979b-3f2b8ffdb941-kube-api-access-hbc64\") pod \"nova-metadata-0\" (UID: \"ae0814e6-2a69-4ee4-979b-3f2b8ffdb941\") " pod="openstack/nova-metadata-0" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.751786 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e9d5917-d36c-4f58-8154-787b4a799e88-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e9d5917-d36c-4f58-8154-787b4a799e88\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.751813 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae0814e6-2a69-4ee4-979b-3f2b8ffdb941-config-data\") pod \"nova-metadata-0\" (UID: \"ae0814e6-2a69-4ee4-979b-3f2b8ffdb941\") " pod="openstack/nova-metadata-0" Jan 05 
20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.751854 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8vhs\" (UniqueName: \"kubernetes.io/projected/fedfab08-bf50-4d7c-8f04-583679e20d59-kube-api-access-d8vhs\") pod \"dnsmasq-dns-7877d89589-b45w9\" (UID: \"fedfab08-bf50-4d7c-8f04-583679e20d59\") " pod="openstack/dnsmasq-dns-7877d89589-b45w9" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.753518 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae0814e6-2a69-4ee4-979b-3f2b8ffdb941-logs\") pod \"nova-metadata-0\" (UID: \"ae0814e6-2a69-4ee4-979b-3f2b8ffdb941\") " pod="openstack/nova-metadata-0" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.757868 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae0814e6-2a69-4ee4-979b-3f2b8ffdb941-config-data\") pod \"nova-metadata-0\" (UID: \"ae0814e6-2a69-4ee4-979b-3f2b8ffdb941\") " pod="openstack/nova-metadata-0" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.769036 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbc64\" (UniqueName: \"kubernetes.io/projected/ae0814e6-2a69-4ee4-979b-3f2b8ffdb941-kube-api-access-hbc64\") pod \"nova-metadata-0\" (UID: \"ae0814e6-2a69-4ee4-979b-3f2b8ffdb941\") " pod="openstack/nova-metadata-0" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.772174 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.775026 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae0814e6-2a69-4ee4-979b-3f2b8ffdb941-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ae0814e6-2a69-4ee4-979b-3f2b8ffdb941\") " pod="openstack/nova-metadata-0" Jan 05 20:30:45 
crc kubenswrapper[4754]: I0105 20:30:45.792011 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-b45w9"] Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.855620 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e9d5917-d36c-4f58-8154-787b4a799e88-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e9d5917-d36c-4f58-8154-787b4a799e88\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.855685 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fedfab08-bf50-4d7c-8f04-583679e20d59-ovsdbserver-sb\") pod \"dnsmasq-dns-7877d89589-b45w9\" (UID: \"fedfab08-bf50-4d7c-8f04-583679e20d59\") " pod="openstack/dnsmasq-dns-7877d89589-b45w9" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.855765 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e9d5917-d36c-4f58-8154-787b4a799e88-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e9d5917-d36c-4f58-8154-787b4a799e88\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.855800 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8vhs\" (UniqueName: \"kubernetes.io/projected/fedfab08-bf50-4d7c-8f04-583679e20d59-kube-api-access-d8vhs\") pod \"dnsmasq-dns-7877d89589-b45w9\" (UID: \"fedfab08-bf50-4d7c-8f04-583679e20d59\") " pod="openstack/dnsmasq-dns-7877d89589-b45w9" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.855860 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fedfab08-bf50-4d7c-8f04-583679e20d59-config\") pod \"dnsmasq-dns-7877d89589-b45w9\" (UID: 
\"fedfab08-bf50-4d7c-8f04-583679e20d59\") " pod="openstack/dnsmasq-dns-7877d89589-b45w9" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.855909 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fedfab08-bf50-4d7c-8f04-583679e20d59-ovsdbserver-nb\") pod \"dnsmasq-dns-7877d89589-b45w9\" (UID: \"fedfab08-bf50-4d7c-8f04-583679e20d59\") " pod="openstack/dnsmasq-dns-7877d89589-b45w9" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.855946 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v57kg\" (UniqueName: \"kubernetes.io/projected/3e9d5917-d36c-4f58-8154-787b4a799e88-kube-api-access-v57kg\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e9d5917-d36c-4f58-8154-787b4a799e88\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.855999 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fedfab08-bf50-4d7c-8f04-583679e20d59-dns-svc\") pod \"dnsmasq-dns-7877d89589-b45w9\" (UID: \"fedfab08-bf50-4d7c-8f04-583679e20d59\") " pod="openstack/dnsmasq-dns-7877d89589-b45w9" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.856073 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fedfab08-bf50-4d7c-8f04-583679e20d59-dns-swift-storage-0\") pod \"dnsmasq-dns-7877d89589-b45w9\" (UID: \"fedfab08-bf50-4d7c-8f04-583679e20d59\") " pod="openstack/dnsmasq-dns-7877d89589-b45w9" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.868014 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fedfab08-bf50-4d7c-8f04-583679e20d59-config\") pod \"dnsmasq-dns-7877d89589-b45w9\" (UID: \"fedfab08-bf50-4d7c-8f04-583679e20d59\") " 
pod="openstack/dnsmasq-dns-7877d89589-b45w9" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.875505 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fedfab08-bf50-4d7c-8f04-583679e20d59-ovsdbserver-nb\") pod \"dnsmasq-dns-7877d89589-b45w9\" (UID: \"fedfab08-bf50-4d7c-8f04-583679e20d59\") " pod="openstack/dnsmasq-dns-7877d89589-b45w9" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.876310 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fedfab08-bf50-4d7c-8f04-583679e20d59-dns-svc\") pod \"dnsmasq-dns-7877d89589-b45w9\" (UID: \"fedfab08-bf50-4d7c-8f04-583679e20d59\") " pod="openstack/dnsmasq-dns-7877d89589-b45w9" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.876620 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fedfab08-bf50-4d7c-8f04-583679e20d59-dns-swift-storage-0\") pod \"dnsmasq-dns-7877d89589-b45w9\" (UID: \"fedfab08-bf50-4d7c-8f04-583679e20d59\") " pod="openstack/dnsmasq-dns-7877d89589-b45w9" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.878971 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fedfab08-bf50-4d7c-8f04-583679e20d59-ovsdbserver-sb\") pod \"dnsmasq-dns-7877d89589-b45w9\" (UID: \"fedfab08-bf50-4d7c-8f04-583679e20d59\") " pod="openstack/dnsmasq-dns-7877d89589-b45w9" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.879002 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8vhs\" (UniqueName: \"kubernetes.io/projected/fedfab08-bf50-4d7c-8f04-583679e20d59-kube-api-access-d8vhs\") pod \"dnsmasq-dns-7877d89589-b45w9\" (UID: \"fedfab08-bf50-4d7c-8f04-583679e20d59\") " pod="openstack/dnsmasq-dns-7877d89589-b45w9" Jan 05 20:30:45 crc 
kubenswrapper[4754]: I0105 20:30:45.882520 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e9d5917-d36c-4f58-8154-787b4a799e88-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e9d5917-d36c-4f58-8154-787b4a799e88\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.893620 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v57kg\" (UniqueName: \"kubernetes.io/projected/3e9d5917-d36c-4f58-8154-787b4a799e88-kube-api-access-v57kg\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e9d5917-d36c-4f58-8154-787b4a799e88\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 20:30:45 crc kubenswrapper[4754]: I0105 20:30:45.918471 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e9d5917-d36c-4f58-8154-787b4a799e88-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e9d5917-d36c-4f58-8154-787b4a799e88\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 20:30:46 crc kubenswrapper[4754]: I0105 20:30:46.076913 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 05 20:30:46 crc kubenswrapper[4754]: I0105 20:30:46.096750 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 05 20:30:46 crc kubenswrapper[4754]: I0105 20:30:46.104382 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7877d89589-b45w9" Jan 05 20:30:46 crc kubenswrapper[4754]: I0105 20:30:46.167581 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-mgrzb"] Jan 05 20:30:46 crc kubenswrapper[4754]: I0105 20:30:46.278339 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mgrzb" event={"ID":"89edcd71-f0ac-4bf6-a1cf-9aac0d041906","Type":"ContainerStarted","Data":"58bc5c17b6fbc937c7a44f43c5f7625df1c280afd13995c1ca732e503858175b"} Jan 05 20:30:46 crc kubenswrapper[4754]: I0105 20:30:46.401743 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 05 20:30:46 crc kubenswrapper[4754]: I0105 20:30:46.411129 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 05 20:30:47 crc kubenswrapper[4754]: I0105 20:30:47.001111 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 20:30:47 crc kubenswrapper[4754]: I0105 20:30:47.167430 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 05 20:30:47 crc kubenswrapper[4754]: I0105 20:30:47.297949 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"07fcdc38-02c8-43b4-908f-dc32a017584c","Type":"ContainerStarted","Data":"31bed884007d761ce526074879bf436f852e9b66a0b80cfbb40e918429a57c0a"} Jan 05 20:30:47 crc kubenswrapper[4754]: I0105 20:30:47.303801 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ae0814e6-2a69-4ee4-979b-3f2b8ffdb941","Type":"ContainerStarted","Data":"35fc553c93c9aad5dec75ad84d0432362c8bb1e7e289586db65997c22fcdd310"} Jan 05 20:30:47 crc kubenswrapper[4754]: I0105 20:30:47.304889 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-b45w9"] Jan 05 20:30:47 crc kubenswrapper[4754]: I0105 20:30:47.326787 4754 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2ba5c5f3-19fb-4bd8-b457-976338fd6db4","Type":"ContainerStarted","Data":"c03b358dba8062837508523d69ee3b6dc046427bf277b1c726a0db05eec75c6b"} Jan 05 20:30:47 crc kubenswrapper[4754]: I0105 20:30:47.348743 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3e9d5917-d36c-4f58-8154-787b4a799e88","Type":"ContainerStarted","Data":"52769c65a5951c0697401b21bd4eb09df02729ebb51a777fe75ed7ab95b0fd6a"} Jan 05 20:30:47 crc kubenswrapper[4754]: I0105 20:30:47.361191 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mgrzb" event={"ID":"89edcd71-f0ac-4bf6-a1cf-9aac0d041906","Type":"ContainerStarted","Data":"61f5a488850b73c362d6b88ed4ee7d3f254b08dc53bfea5806ebd9df9fdebb1b"} Jan 05 20:30:47 crc kubenswrapper[4754]: I0105 20:30:47.405966 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-mgrzb" podStartSLOduration=2.405947018 podStartE2EDuration="2.405947018s" podCreationTimestamp="2026-01-05 20:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:30:47.386199809 +0000 UTC m=+1534.095383673" watchObservedRunningTime="2026-01-05 20:30:47.405947018 +0000 UTC m=+1534.115130892" Jan 05 20:30:47 crc kubenswrapper[4754]: I0105 20:30:47.508804 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-hd9mq"] Jan 05 20:30:47 crc kubenswrapper[4754]: I0105 20:30:47.511469 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-hd9mq" Jan 05 20:30:47 crc kubenswrapper[4754]: I0105 20:30:47.536823 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 05 20:30:47 crc kubenswrapper[4754]: I0105 20:30:47.537172 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 05 20:30:47 crc kubenswrapper[4754]: I0105 20:30:47.548160 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-hd9mq"] Jan 05 20:30:47 crc kubenswrapper[4754]: I0105 20:30:47.611396 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q47nz\" (UniqueName: \"kubernetes.io/projected/4556aa44-d7b9-4f17-82e1-7c08aee8d75c-kube-api-access-q47nz\") pod \"nova-cell1-conductor-db-sync-hd9mq\" (UID: \"4556aa44-d7b9-4f17-82e1-7c08aee8d75c\") " pod="openstack/nova-cell1-conductor-db-sync-hd9mq" Jan 05 20:30:47 crc kubenswrapper[4754]: I0105 20:30:47.611469 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4556aa44-d7b9-4f17-82e1-7c08aee8d75c-config-data\") pod \"nova-cell1-conductor-db-sync-hd9mq\" (UID: \"4556aa44-d7b9-4f17-82e1-7c08aee8d75c\") " pod="openstack/nova-cell1-conductor-db-sync-hd9mq" Jan 05 20:30:47 crc kubenswrapper[4754]: I0105 20:30:47.611507 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4556aa44-d7b9-4f17-82e1-7c08aee8d75c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-hd9mq\" (UID: \"4556aa44-d7b9-4f17-82e1-7c08aee8d75c\") " pod="openstack/nova-cell1-conductor-db-sync-hd9mq" Jan 05 20:30:47 crc kubenswrapper[4754]: I0105 20:30:47.623513 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4556aa44-d7b9-4f17-82e1-7c08aee8d75c-scripts\") pod \"nova-cell1-conductor-db-sync-hd9mq\" (UID: \"4556aa44-d7b9-4f17-82e1-7c08aee8d75c\") " pod="openstack/nova-cell1-conductor-db-sync-hd9mq" Jan 05 20:30:47 crc kubenswrapper[4754]: I0105 20:30:47.726802 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q47nz\" (UniqueName: \"kubernetes.io/projected/4556aa44-d7b9-4f17-82e1-7c08aee8d75c-kube-api-access-q47nz\") pod \"nova-cell1-conductor-db-sync-hd9mq\" (UID: \"4556aa44-d7b9-4f17-82e1-7c08aee8d75c\") " pod="openstack/nova-cell1-conductor-db-sync-hd9mq" Jan 05 20:30:47 crc kubenswrapper[4754]: I0105 20:30:47.726870 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4556aa44-d7b9-4f17-82e1-7c08aee8d75c-config-data\") pod \"nova-cell1-conductor-db-sync-hd9mq\" (UID: \"4556aa44-d7b9-4f17-82e1-7c08aee8d75c\") " pod="openstack/nova-cell1-conductor-db-sync-hd9mq" Jan 05 20:30:47 crc kubenswrapper[4754]: I0105 20:30:47.726917 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4556aa44-d7b9-4f17-82e1-7c08aee8d75c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-hd9mq\" (UID: \"4556aa44-d7b9-4f17-82e1-7c08aee8d75c\") " pod="openstack/nova-cell1-conductor-db-sync-hd9mq" Jan 05 20:30:47 crc kubenswrapper[4754]: I0105 20:30:47.727034 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4556aa44-d7b9-4f17-82e1-7c08aee8d75c-scripts\") pod \"nova-cell1-conductor-db-sync-hd9mq\" (UID: \"4556aa44-d7b9-4f17-82e1-7c08aee8d75c\") " pod="openstack/nova-cell1-conductor-db-sync-hd9mq" Jan 05 20:30:47 crc kubenswrapper[4754]: I0105 20:30:47.740849 4754 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4556aa44-d7b9-4f17-82e1-7c08aee8d75c-scripts\") pod \"nova-cell1-conductor-db-sync-hd9mq\" (UID: \"4556aa44-d7b9-4f17-82e1-7c08aee8d75c\") " pod="openstack/nova-cell1-conductor-db-sync-hd9mq" Jan 05 20:30:47 crc kubenswrapper[4754]: I0105 20:30:47.743050 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4556aa44-d7b9-4f17-82e1-7c08aee8d75c-config-data\") pod \"nova-cell1-conductor-db-sync-hd9mq\" (UID: \"4556aa44-d7b9-4f17-82e1-7c08aee8d75c\") " pod="openstack/nova-cell1-conductor-db-sync-hd9mq" Jan 05 20:30:47 crc kubenswrapper[4754]: I0105 20:30:47.750186 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4556aa44-d7b9-4f17-82e1-7c08aee8d75c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-hd9mq\" (UID: \"4556aa44-d7b9-4f17-82e1-7c08aee8d75c\") " pod="openstack/nova-cell1-conductor-db-sync-hd9mq" Jan 05 20:30:47 crc kubenswrapper[4754]: I0105 20:30:47.750721 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q47nz\" (UniqueName: \"kubernetes.io/projected/4556aa44-d7b9-4f17-82e1-7c08aee8d75c-kube-api-access-q47nz\") pod \"nova-cell1-conductor-db-sync-hd9mq\" (UID: \"4556aa44-d7b9-4f17-82e1-7c08aee8d75c\") " pod="openstack/nova-cell1-conductor-db-sync-hd9mq" Jan 05 20:30:47 crc kubenswrapper[4754]: W0105 20:30:47.790415 4754 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b5ccffd_7569_4152_94b6_66f64e575513.slice/crio-conmon-3f77168eac9fed1537330bac3854597fc9ced4d4d835f93ffc65240548873945.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch 
/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b5ccffd_7569_4152_94b6_66f64e575513.slice/crio-conmon-3f77168eac9fed1537330bac3854597fc9ced4d4d835f93ffc65240548873945.scope: no such file or directory Jan 05 20:30:47 crc kubenswrapper[4754]: W0105 20:30:47.790857 4754 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b5ccffd_7569_4152_94b6_66f64e575513.slice/crio-3f77168eac9fed1537330bac3854597fc9ced4d4d835f93ffc65240548873945.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b5ccffd_7569_4152_94b6_66f64e575513.slice/crio-3f77168eac9fed1537330bac3854597fc9ced4d4d835f93ffc65240548873945.scope: no such file or directory Jan 05 20:30:47 crc kubenswrapper[4754]: W0105 20:30:47.791009 4754 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod320790ad_397e_4602_a40f_cb5c17f4f058.slice/crio-conmon-e661e4c7e14c5dc87758ea4d9b5401babb82189bacda53038bea1ba6461c78e2.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod320790ad_397e_4602_a40f_cb5c17f4f058.slice/crio-conmon-e661e4c7e14c5dc87758ea4d9b5401babb82189bacda53038bea1ba6461c78e2.scope: no such file or directory Jan 05 20:30:47 crc kubenswrapper[4754]: W0105 20:30:47.792470 4754 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b5ccffd_7569_4152_94b6_66f64e575513.slice/crio-conmon-23f973fe9d1d539dba30fe6a937c1666b107724623abc4fb79b25656462dceb9.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch 
/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b5ccffd_7569_4152_94b6_66f64e575513.slice/crio-conmon-23f973fe9d1d539dba30fe6a937c1666b107724623abc4fb79b25656462dceb9.scope: no such file or directory Jan 05 20:30:47 crc kubenswrapper[4754]: W0105 20:30:47.792640 4754 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b5ccffd_7569_4152_94b6_66f64e575513.slice/crio-23f973fe9d1d539dba30fe6a937c1666b107724623abc4fb79b25656462dceb9.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b5ccffd_7569_4152_94b6_66f64e575513.slice/crio-23f973fe9d1d539dba30fe6a937c1666b107724623abc4fb79b25656462dceb9.scope: no such file or directory Jan 05 20:30:47 crc kubenswrapper[4754]: W0105 20:30:47.792773 4754 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod320790ad_397e_4602_a40f_cb5c17f4f058.slice/crio-e661e4c7e14c5dc87758ea4d9b5401babb82189bacda53038bea1ba6461c78e2.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod320790ad_397e_4602_a40f_cb5c17f4f058.slice/crio-e661e4c7e14c5dc87758ea4d9b5401babb82189bacda53038bea1ba6461c78e2.scope: no such file or directory Jan 05 20:30:47 crc kubenswrapper[4754]: W0105 20:30:47.797706 4754 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b5ccffd_7569_4152_94b6_66f64e575513.slice/crio-conmon-beba2bc2cb011c48c0c552b949af2fcf6d8bd272067193205a3381d0ef194ceb.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b5ccffd_7569_4152_94b6_66f64e575513.slice/crio-conmon-beba2bc2cb011c48c0c552b949af2fcf6d8bd272067193205a3381d0ef194ceb.scope: no such 
file or directory Jan 05 20:30:47 crc kubenswrapper[4754]: W0105 20:30:47.801874 4754 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b5ccffd_7569_4152_94b6_66f64e575513.slice/crio-beba2bc2cb011c48c0c552b949af2fcf6d8bd272067193205a3381d0ef194ceb.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b5ccffd_7569_4152_94b6_66f64e575513.slice/crio-beba2bc2cb011c48c0c552b949af2fcf6d8bd272067193205a3381d0ef194ceb.scope: no such file or directory Jan 05 20:30:47 crc kubenswrapper[4754]: W0105 20:30:47.811031 4754 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b5ccffd_7569_4152_94b6_66f64e575513.slice/crio-conmon-d25af4d3c605a715cae78346b7afd96dc3ffb94ad95ebc53c5028c1c9acf80bf.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b5ccffd_7569_4152_94b6_66f64e575513.slice/crio-conmon-d25af4d3c605a715cae78346b7afd96dc3ffb94ad95ebc53c5028c1c9acf80bf.scope: no such file or directory Jan 05 20:30:47 crc kubenswrapper[4754]: W0105 20:30:47.811093 4754 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b5ccffd_7569_4152_94b6_66f64e575513.slice/crio-d25af4d3c605a715cae78346b7afd96dc3ffb94ad95ebc53c5028c1c9acf80bf.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b5ccffd_7569_4152_94b6_66f64e575513.slice/crio-d25af4d3c605a715cae78346b7afd96dc3ffb94ad95ebc53c5028c1c9acf80bf.scope: no such file or directory Jan 05 20:30:47 crc kubenswrapper[4754]: W0105 20:30:47.814682 4754 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod320790ad_397e_4602_a40f_cb5c17f4f058.slice/crio-754f78f28f4a4a67df88f313dff2616b2934c7b378d47556a6c00c81be6af32b.scope WatchSource:0}: Error finding container 754f78f28f4a4a67df88f313dff2616b2934c7b378d47556a6c00c81be6af32b: Status 404 returned error can't find the container with id 754f78f28f4a4a67df88f313dff2616b2934c7b378d47556a6c00c81be6af32b Jan 05 20:30:47 crc kubenswrapper[4754]: W0105 20:30:47.825250 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod320790ad_397e_4602_a40f_cb5c17f4f058.slice/crio-656578d7fab443cf4121cd672d934f808e8344cee7f6b69fe1cb8f2e788bb30b.scope WatchSource:0}: Error finding container 656578d7fab443cf4121cd672d934f808e8344cee7f6b69fe1cb8f2e788bb30b: Status 404 returned error can't find the container with id 656578d7fab443cf4121cd672d934f808e8344cee7f6b69fe1cb8f2e788bb30b Jan 05 20:30:47 crc kubenswrapper[4754]: I0105 20:30:47.928763 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-hd9mq" Jan 05 20:30:48 crc kubenswrapper[4754]: I0105 20:30:48.109465 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 20:30:48 crc kubenswrapper[4754]: I0105 20:30:48.109509 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 20:30:48 crc kubenswrapper[4754]: I0105 20:30:48.109572 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" Jan 05 20:30:48 crc kubenswrapper[4754]: I0105 20:30:48.125428 4754 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1ca93b3e456962380f07853ce3cbf0c6bfbaabb69bd3827ae3e68631c1bd1572"} pod="openshift-machine-config-operator/machine-config-daemon-pkzls" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 05 20:30:48 crc kubenswrapper[4754]: I0105 20:30:48.125500 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" containerID="cri-o://1ca93b3e456962380f07853ce3cbf0c6bfbaabb69bd3827ae3e68631c1bd1572" gracePeriod=600 Jan 05 20:30:48 crc kubenswrapper[4754]: I0105 20:30:48.386095 4754 generic.go:334] "Generic (PLEG): container finished" 
podID="215c4148-c55a-49d8-8b2e-301cc8912519" containerID="9ecba8b356c6693c643f4aac62eeb230dfeb15533ad555e713799e958b042f53" exitCode=137 Jan 05 20:30:48 crc kubenswrapper[4754]: I0105 20:30:48.387611 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5f447fcb5f-cgsxz" event={"ID":"215c4148-c55a-49d8-8b2e-301cc8912519","Type":"ContainerDied","Data":"9ecba8b356c6693c643f4aac62eeb230dfeb15533ad555e713799e958b042f53"} Jan 05 20:30:48 crc kubenswrapper[4754]: I0105 20:30:48.399574 4754 generic.go:334] "Generic (PLEG): container finished" podID="1ce145f2-f010-4086-963c-23e68ff9e280" containerID="1ca93b3e456962380f07853ce3cbf0c6bfbaabb69bd3827ae3e68631c1bd1572" exitCode=0 Jan 05 20:30:48 crc kubenswrapper[4754]: I0105 20:30:48.399649 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" event={"ID":"1ce145f2-f010-4086-963c-23e68ff9e280","Type":"ContainerDied","Data":"1ca93b3e456962380f07853ce3cbf0c6bfbaabb69bd3827ae3e68631c1bd1572"} Jan 05 20:30:48 crc kubenswrapper[4754]: I0105 20:30:48.399692 4754 scope.go:117] "RemoveContainer" containerID="be55096ff3dda2956a1dfef42279f31ee70ee0a455c9cf669941a07e6ba339b6" Jan 05 20:30:48 crc kubenswrapper[4754]: I0105 20:30:48.446193 4754 generic.go:334] "Generic (PLEG): container finished" podID="fb5f31f4-24b0-4840-8a8e-04ee35cea4ed" containerID="8f4595b6ed0a514c8083fb15d9319fcafd0cd0d4a6d5861933bec617da875f2c" exitCode=137 Jan 05 20:30:48 crc kubenswrapper[4754]: I0105 20:30:48.446342 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5bc975666b-pjcfg" event={"ID":"fb5f31f4-24b0-4840-8a8e-04ee35cea4ed","Type":"ContainerDied","Data":"8f4595b6ed0a514c8083fb15d9319fcafd0cd0d4a6d5861933bec617da875f2c"} Jan 05 20:30:48 crc kubenswrapper[4754]: E0105 20:30:48.486497 4754 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb5f31f4_24b0_4840_8a8e_04ee35cea4ed.slice/crio-8f4595b6ed0a514c8083fb15d9319fcafd0cd0d4a6d5861933bec617da875f2c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ce145f2_f010_4086_963c_23e68ff9e280.slice/crio-1ca93b3e456962380f07853ce3cbf0c6bfbaabb69bd3827ae3e68631c1bd1572.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb5f31f4_24b0_4840_8a8e_04ee35cea4ed.slice/crio-conmon-8f4595b6ed0a514c8083fb15d9319fcafd0cd0d4a6d5861933bec617da875f2c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod215c4148_c55a_49d8_8b2e_301cc8912519.slice/crio-conmon-9ecba8b356c6693c643f4aac62eeb230dfeb15533ad555e713799e958b042f53.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod215c4148_c55a_49d8_8b2e_301cc8912519.slice/crio-9ecba8b356c6693c643f4aac62eeb230dfeb15533ad555e713799e958b042f53.scope\": RecentStats: unable to find data in memory cache]" Jan 05 20:30:48 crc kubenswrapper[4754]: E0105 20:30:48.486581 4754 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ce145f2_f010_4086_963c_23e68ff9e280.slice/crio-1ca93b3e456962380f07853ce3cbf0c6bfbaabb69bd3827ae3e68631c1bd1572.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb5f31f4_24b0_4840_8a8e_04ee35cea4ed.slice/crio-conmon-8f4595b6ed0a514c8083fb15d9319fcafd0cd0d4a6d5861933bec617da875f2c.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb5f31f4_24b0_4840_8a8e_04ee35cea4ed.slice/crio-8f4595b6ed0a514c8083fb15d9319fcafd0cd0d4a6d5861933bec617da875f2c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod215c4148_c55a_49d8_8b2e_301cc8912519.slice/crio-conmon-9ecba8b356c6693c643f4aac62eeb230dfeb15533ad555e713799e958b042f53.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod215c4148_c55a_49d8_8b2e_301cc8912519.slice/crio-9ecba8b356c6693c643f4aac62eeb230dfeb15533ad555e713799e958b042f53.scope\": RecentStats: unable to find data in memory cache]" Jan 05 20:30:48 crc kubenswrapper[4754]: I0105 20:30:48.493396 4754 generic.go:334] "Generic (PLEG): container finished" podID="fedfab08-bf50-4d7c-8f04-583679e20d59" containerID="7fdb49ef997c7f28c9f9793307830175867e06ba93d011f513fafd64d08156a1" exitCode=0 Jan 05 20:30:48 crc kubenswrapper[4754]: E0105 20:30:48.497171 4754 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7fb48ad_b043_4119_8a98_dbbd166c20e2.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7fb48ad_b043_4119_8a98_dbbd166c20e2.slice/crio-conmon-9ed25582e678df0bfb1c163b3ab9a1b63c1648151095697fe695a42578b2f76a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7fb48ad_b043_4119_8a98_dbbd166c20e2.slice/crio-9ed25582e678df0bfb1c163b3ab9a1b63c1648151095697fe695a42578b2f76a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1192f0c3_2df8_44ea_a767_937f965b46f3.slice/crio-83517df70a8738c069275f4ad270f59b3c1b6599519f3fdf002ba8e67f85a925\": 
RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb5f31f4_24b0_4840_8a8e_04ee35cea4ed.slice/crio-8f4595b6ed0a514c8083fb15d9319fcafd0cd0d4a6d5861933bec617da875f2c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1192f0c3_2df8_44ea_a767_937f965b46f3.slice\": RecentStats: unable to find data in memory cache]" Jan 05 20:30:48 crc kubenswrapper[4754]: I0105 20:30:48.501940 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-b45w9" event={"ID":"fedfab08-bf50-4d7c-8f04-583679e20d59","Type":"ContainerDied","Data":"7fdb49ef997c7f28c9f9793307830175867e06ba93d011f513fafd64d08156a1"} Jan 05 20:30:48 crc kubenswrapper[4754]: I0105 20:30:48.501984 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-b45w9" event={"ID":"fedfab08-bf50-4d7c-8f04-583679e20d59","Type":"ContainerStarted","Data":"8a8ced6b45a3c0fc56315204ef06e10f85d3333b67d3f61edc09f443d580d537"} Jan 05 20:30:48 crc kubenswrapper[4754]: E0105 20:30:48.509666 4754 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod215c4148_c55a_49d8_8b2e_301cc8912519.slice/crio-9ecba8b356c6693c643f4aac62eeb230dfeb15533ad555e713799e958b042f53.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ce145f2_f010_4086_963c_23e68ff9e280.slice/crio-1ca93b3e456962380f07853ce3cbf0c6bfbaabb69bd3827ae3e68631c1bd1572.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb5f31f4_24b0_4840_8a8e_04ee35cea4ed.slice/crio-8f4595b6ed0a514c8083fb15d9319fcafd0cd0d4a6d5861933bec617da875f2c.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ce145f2_f010_4086_963c_23e68ff9e280.slice/crio-conmon-1ca93b3e456962380f07853ce3cbf0c6bfbaabb69bd3827ae3e68631c1bd1572.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb5f31f4_24b0_4840_8a8e_04ee35cea4ed.slice/crio-conmon-8f4595b6ed0a514c8083fb15d9319fcafd0cd0d4a6d5861933bec617da875f2c.scope\": RecentStats: unable to find data in memory cache]" Jan 05 20:30:48 crc kubenswrapper[4754]: E0105 20:30:48.512009 4754 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7fb48ad_b043_4119_8a98_dbbd166c20e2.slice/crio-9ed25582e678df0bfb1c163b3ab9a1b63c1648151095697fe695a42578b2f76a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7fb48ad_b043_4119_8a98_dbbd166c20e2.slice/crio-conmon-9ed25582e678df0bfb1c163b3ab9a1b63c1648151095697fe695a42578b2f76a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb5f31f4_24b0_4840_8a8e_04ee35cea4ed.slice/crio-8f4595b6ed0a514c8083fb15d9319fcafd0cd0d4a6d5861933bec617da875f2c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7fb48ad_b043_4119_8a98_dbbd166c20e2.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod215c4148_c55a_49d8_8b2e_301cc8912519.slice/crio-9ecba8b356c6693c643f4aac62eeb230dfeb15533ad555e713799e958b042f53.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1192f0c3_2df8_44ea_a767_937f965b46f3.slice/crio-83517df70a8738c069275f4ad270f59b3c1b6599519f3fdf002ba8e67f85a925\": 
RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1192f0c3_2df8_44ea_a767_937f965b46f3.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ce145f2_f010_4086_963c_23e68ff9e280.slice/crio-1ca93b3e456962380f07853ce3cbf0c6bfbaabb69bd3827ae3e68631c1bd1572.scope\": RecentStats: unable to find data in memory cache]" Jan 05 20:30:48 crc kubenswrapper[4754]: I0105 20:30:48.753630 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5bc975666b-pjcfg" Jan 05 20:30:48 crc kubenswrapper[4754]: E0105 20:30:48.807280 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:30:48 crc kubenswrapper[4754]: I0105 20:30:48.892770 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb5f31f4-24b0-4840-8a8e-04ee35cea4ed-config-data\") pod \"fb5f31f4-24b0-4840-8a8e-04ee35cea4ed\" (UID: \"fb5f31f4-24b0-4840-8a8e-04ee35cea4ed\") " Jan 05 20:30:48 crc kubenswrapper[4754]: I0105 20:30:48.893161 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb5f31f4-24b0-4840-8a8e-04ee35cea4ed-config-data-custom\") pod \"fb5f31f4-24b0-4840-8a8e-04ee35cea4ed\" (UID: \"fb5f31f4-24b0-4840-8a8e-04ee35cea4ed\") " Jan 05 20:30:48 crc kubenswrapper[4754]: I0105 20:30:48.893279 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb5f31f4-24b0-4840-8a8e-04ee35cea4ed-combined-ca-bundle\") pod \"fb5f31f4-24b0-4840-8a8e-04ee35cea4ed\" (UID: \"fb5f31f4-24b0-4840-8a8e-04ee35cea4ed\") " Jan 05 20:30:48 crc kubenswrapper[4754]: I0105 20:30:48.893330 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmwsc\" (UniqueName: \"kubernetes.io/projected/fb5f31f4-24b0-4840-8a8e-04ee35cea4ed-kube-api-access-jmwsc\") pod \"fb5f31f4-24b0-4840-8a8e-04ee35cea4ed\" (UID: \"fb5f31f4-24b0-4840-8a8e-04ee35cea4ed\") " Jan 05 20:30:48 crc kubenswrapper[4754]: I0105 20:30:48.905558 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb5f31f4-24b0-4840-8a8e-04ee35cea4ed-kube-api-access-jmwsc" (OuterVolumeSpecName: "kube-api-access-jmwsc") pod "fb5f31f4-24b0-4840-8a8e-04ee35cea4ed" (UID: "fb5f31f4-24b0-4840-8a8e-04ee35cea4ed"). InnerVolumeSpecName "kube-api-access-jmwsc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:30:48 crc kubenswrapper[4754]: I0105 20:30:48.905649 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb5f31f4-24b0-4840-8a8e-04ee35cea4ed-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fb5f31f4-24b0-4840-8a8e-04ee35cea4ed" (UID: "fb5f31f4-24b0-4840-8a8e-04ee35cea4ed"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:30:48 crc kubenswrapper[4754]: I0105 20:30:48.924986 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-5f447fcb5f-cgsxz" Jan 05 20:30:48 crc kubenswrapper[4754]: I0105 20:30:48.988229 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb5f31f4-24b0-4840-8a8e-04ee35cea4ed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb5f31f4-24b0-4840-8a8e-04ee35cea4ed" (UID: "fb5f31f4-24b0-4840-8a8e-04ee35cea4ed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:30:48 crc kubenswrapper[4754]: I0105 20:30:48.995357 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgn4h\" (UniqueName: \"kubernetes.io/projected/215c4148-c55a-49d8-8b2e-301cc8912519-kube-api-access-rgn4h\") pod \"215c4148-c55a-49d8-8b2e-301cc8912519\" (UID: \"215c4148-c55a-49d8-8b2e-301cc8912519\") " Jan 05 20:30:48 crc kubenswrapper[4754]: I0105 20:30:48.995477 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/215c4148-c55a-49d8-8b2e-301cc8912519-combined-ca-bundle\") pod \"215c4148-c55a-49d8-8b2e-301cc8912519\" (UID: \"215c4148-c55a-49d8-8b2e-301cc8912519\") " Jan 05 20:30:48 crc kubenswrapper[4754]: I0105 20:30:48.995518 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/215c4148-c55a-49d8-8b2e-301cc8912519-config-data-custom\") pod \"215c4148-c55a-49d8-8b2e-301cc8912519\" (UID: \"215c4148-c55a-49d8-8b2e-301cc8912519\") " Jan 05 20:30:48 crc kubenswrapper[4754]: I0105 20:30:48.995751 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/215c4148-c55a-49d8-8b2e-301cc8912519-config-data\") pod \"215c4148-c55a-49d8-8b2e-301cc8912519\" (UID: \"215c4148-c55a-49d8-8b2e-301cc8912519\") " Jan 05 20:30:48 crc kubenswrapper[4754]: I0105 
20:30:48.996244 4754 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb5f31f4-24b0-4840-8a8e-04ee35cea4ed-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 05 20:30:48 crc kubenswrapper[4754]: I0105 20:30:48.996259 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb5f31f4-24b0-4840-8a8e-04ee35cea4ed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:30:48 crc kubenswrapper[4754]: I0105 20:30:48.996268 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmwsc\" (UniqueName: \"kubernetes.io/projected/fb5f31f4-24b0-4840-8a8e-04ee35cea4ed-kube-api-access-jmwsc\") on node \"crc\" DevicePath \"\"" Jan 05 20:30:49 crc kubenswrapper[4754]: I0105 20:30:49.003808 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/215c4148-c55a-49d8-8b2e-301cc8912519-kube-api-access-rgn4h" (OuterVolumeSpecName: "kube-api-access-rgn4h") pod "215c4148-c55a-49d8-8b2e-301cc8912519" (UID: "215c4148-c55a-49d8-8b2e-301cc8912519"). InnerVolumeSpecName "kube-api-access-rgn4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:30:49 crc kubenswrapper[4754]: I0105 20:30:49.015457 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/215c4148-c55a-49d8-8b2e-301cc8912519-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "215c4148-c55a-49d8-8b2e-301cc8912519" (UID: "215c4148-c55a-49d8-8b2e-301cc8912519"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:30:49 crc kubenswrapper[4754]: I0105 20:30:49.045076 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/215c4148-c55a-49d8-8b2e-301cc8912519-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "215c4148-c55a-49d8-8b2e-301cc8912519" (UID: "215c4148-c55a-49d8-8b2e-301cc8912519"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:30:49 crc kubenswrapper[4754]: I0105 20:30:49.045483 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb5f31f4-24b0-4840-8a8e-04ee35cea4ed-config-data" (OuterVolumeSpecName: "config-data") pod "fb5f31f4-24b0-4840-8a8e-04ee35cea4ed" (UID: "fb5f31f4-24b0-4840-8a8e-04ee35cea4ed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:30:49 crc kubenswrapper[4754]: I0105 20:30:49.098840 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb5f31f4-24b0-4840-8a8e-04ee35cea4ed-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 20:30:49 crc kubenswrapper[4754]: I0105 20:30:49.098883 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgn4h\" (UniqueName: \"kubernetes.io/projected/215c4148-c55a-49d8-8b2e-301cc8912519-kube-api-access-rgn4h\") on node \"crc\" DevicePath \"\"" Jan 05 20:30:49 crc kubenswrapper[4754]: I0105 20:30:49.098896 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/215c4148-c55a-49d8-8b2e-301cc8912519-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:30:49 crc kubenswrapper[4754]: I0105 20:30:49.098908 4754 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/215c4148-c55a-49d8-8b2e-301cc8912519-config-data-custom\") on node \"crc\" 
DevicePath \"\"" Jan 05 20:30:49 crc kubenswrapper[4754]: I0105 20:30:49.111838 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/215c4148-c55a-49d8-8b2e-301cc8912519-config-data" (OuterVolumeSpecName: "config-data") pod "215c4148-c55a-49d8-8b2e-301cc8912519" (UID: "215c4148-c55a-49d8-8b2e-301cc8912519"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:30:49 crc kubenswrapper[4754]: I0105 20:30:49.141731 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-hd9mq"] Jan 05 20:30:49 crc kubenswrapper[4754]: W0105 20:30:49.149470 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4556aa44_d7b9_4f17_82e1_7c08aee8d75c.slice/crio-03b606c3b4aa9139d845563ab07b3b834f7f2f2d4363db3ff758a18e61f1413e WatchSource:0}: Error finding container 03b606c3b4aa9139d845563ab07b3b834f7f2f2d4363db3ff758a18e61f1413e: Status 404 returned error can't find the container with id 03b606c3b4aa9139d845563ab07b3b834f7f2f2d4363db3ff758a18e61f1413e Jan 05 20:30:49 crc kubenswrapper[4754]: I0105 20:30:49.201226 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/215c4148-c55a-49d8-8b2e-301cc8912519-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 20:30:49 crc kubenswrapper[4754]: I0105 20:30:49.207232 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 20:30:49 crc kubenswrapper[4754]: I0105 20:30:49.223947 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 05 20:30:49 crc kubenswrapper[4754]: I0105 20:30:49.511284 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-b45w9" 
event={"ID":"fedfab08-bf50-4d7c-8f04-583679e20d59","Type":"ContainerStarted","Data":"4467b60edb38775f310f5123fa19811bbc9bb192c189ac98946277fcabbdf9f7"} Jan 05 20:30:49 crc kubenswrapper[4754]: I0105 20:30:49.511716 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7877d89589-b45w9" Jan 05 20:30:49 crc kubenswrapper[4754]: I0105 20:30:49.528133 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5f447fcb5f-cgsxz" event={"ID":"215c4148-c55a-49d8-8b2e-301cc8912519","Type":"ContainerDied","Data":"c67f2ff1363abe39b084e4fde8c5c27f6e9fef1c643b5cbb32ce6f91f9ae0db1"} Jan 05 20:30:49 crc kubenswrapper[4754]: I0105 20:30:49.528429 4754 scope.go:117] "RemoveContainer" containerID="9ecba8b356c6693c643f4aac62eeb230dfeb15533ad555e713799e958b042f53" Jan 05 20:30:49 crc kubenswrapper[4754]: I0105 20:30:49.528580 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5f447fcb5f-cgsxz" Jan 05 20:30:49 crc kubenswrapper[4754]: I0105 20:30:49.532069 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7877d89589-b45w9" podStartSLOduration=4.532059404 podStartE2EDuration="4.532059404s" podCreationTimestamp="2026-01-05 20:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:30:49.525745399 +0000 UTC m=+1536.234929283" watchObservedRunningTime="2026-01-05 20:30:49.532059404 +0000 UTC m=+1536.241243278" Jan 05 20:30:49 crc kubenswrapper[4754]: I0105 20:30:49.561872 4754 scope.go:117] "RemoveContainer" containerID="1ca93b3e456962380f07853ce3cbf0c6bfbaabb69bd3827ae3e68631c1bd1572" Jan 05 20:30:49 crc kubenswrapper[4754]: E0105 20:30:49.562194 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:30:49 crc kubenswrapper[4754]: I0105 20:30:49.574081 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-hd9mq" event={"ID":"4556aa44-d7b9-4f17-82e1-7c08aee8d75c","Type":"ContainerStarted","Data":"216da0363fa04b0f41f940c392bc4fa5b8d76cdd41269d5c17b808738e5e0485"} Jan 05 20:30:49 crc kubenswrapper[4754]: I0105 20:30:49.574132 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-hd9mq" event={"ID":"4556aa44-d7b9-4f17-82e1-7c08aee8d75c","Type":"ContainerStarted","Data":"03b606c3b4aa9139d845563ab07b3b834f7f2f2d4363db3ff758a18e61f1413e"} Jan 05 20:30:49 crc kubenswrapper[4754]: I0105 20:30:49.583664 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5bc975666b-pjcfg" event={"ID":"fb5f31f4-24b0-4840-8a8e-04ee35cea4ed","Type":"ContainerDied","Data":"f9d15fcbf84fce28b73d175956f7c608be765810a31368b606d32506b732375d"} Jan 05 20:30:49 crc kubenswrapper[4754]: I0105 20:30:49.583774 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5bc975666b-pjcfg" Jan 05 20:30:49 crc kubenswrapper[4754]: I0105 20:30:49.691867 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-5f447fcb5f-cgsxz"] Jan 05 20:30:49 crc kubenswrapper[4754]: I0105 20:30:49.691908 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-5f447fcb5f-cgsxz"] Jan 05 20:30:49 crc kubenswrapper[4754]: I0105 20:30:49.745997 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-hd9mq" podStartSLOduration=2.745979382 podStartE2EDuration="2.745979382s" podCreationTimestamp="2026-01-05 20:30:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:30:49.702838809 +0000 UTC m=+1536.412022683" watchObservedRunningTime="2026-01-05 20:30:49.745979382 +0000 UTC m=+1536.455163256" Jan 05 20:30:49 crc kubenswrapper[4754]: I0105 20:30:49.763481 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5bc975666b-pjcfg"] Jan 05 20:30:49 crc kubenswrapper[4754]: I0105 20:30:49.778470 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-5bc975666b-pjcfg"] Jan 05 20:30:51 crc kubenswrapper[4754]: I0105 20:30:51.109083 4754 scope.go:117] "RemoveContainer" containerID="8f4595b6ed0a514c8083fb15d9319fcafd0cd0d4a6d5861933bec617da875f2c" Jan 05 20:30:51 crc kubenswrapper[4754]: I0105 20:30:51.163621 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ls52t"] Jan 05 20:30:51 crc kubenswrapper[4754]: E0105 20:30:51.164105 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb5f31f4-24b0-4840-8a8e-04ee35cea4ed" containerName="heat-api" Jan 05 20:30:51 crc kubenswrapper[4754]: I0105 20:30:51.164120 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb5f31f4-24b0-4840-8a8e-04ee35cea4ed" 
containerName="heat-api"
Jan 05 20:30:51 crc kubenswrapper[4754]: E0105 20:30:51.164154 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="215c4148-c55a-49d8-8b2e-301cc8912519" containerName="heat-cfnapi"
Jan 05 20:30:51 crc kubenswrapper[4754]: I0105 20:30:51.164160 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="215c4148-c55a-49d8-8b2e-301cc8912519" containerName="heat-cfnapi"
Jan 05 20:30:51 crc kubenswrapper[4754]: I0105 20:30:51.164424 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="215c4148-c55a-49d8-8b2e-301cc8912519" containerName="heat-cfnapi"
Jan 05 20:30:51 crc kubenswrapper[4754]: I0105 20:30:51.164446 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb5f31f4-24b0-4840-8a8e-04ee35cea4ed" containerName="heat-api"
Jan 05 20:30:51 crc kubenswrapper[4754]: I0105 20:30:51.172470 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ls52t"
Jan 05 20:30:51 crc kubenswrapper[4754]: I0105 20:30:51.195959 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ls52t"]
Jan 05 20:30:51 crc kubenswrapper[4754]: I0105 20:30:51.285330 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cce9715-2736-4de9-978c-355b04462cc6-utilities\") pod \"community-operators-ls52t\" (UID: \"4cce9715-2736-4de9-978c-355b04462cc6\") " pod="openshift-marketplace/community-operators-ls52t"
Jan 05 20:30:51 crc kubenswrapper[4754]: I0105 20:30:51.285447 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cce9715-2736-4de9-978c-355b04462cc6-catalog-content\") pod \"community-operators-ls52t\" (UID: \"4cce9715-2736-4de9-978c-355b04462cc6\") " pod="openshift-marketplace/community-operators-ls52t"
Jan 05 20:30:51 crc kubenswrapper[4754]: I0105 20:30:51.285499 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb989\" (UniqueName: \"kubernetes.io/projected/4cce9715-2736-4de9-978c-355b04462cc6-kube-api-access-tb989\") pod \"community-operators-ls52t\" (UID: \"4cce9715-2736-4de9-978c-355b04462cc6\") " pod="openshift-marketplace/community-operators-ls52t"
Jan 05 20:30:51 crc kubenswrapper[4754]: I0105 20:30:51.389327 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cce9715-2736-4de9-978c-355b04462cc6-utilities\") pod \"community-operators-ls52t\" (UID: \"4cce9715-2736-4de9-978c-355b04462cc6\") " pod="openshift-marketplace/community-operators-ls52t"
Jan 05 20:30:51 crc kubenswrapper[4754]: I0105 20:30:51.389438 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cce9715-2736-4de9-978c-355b04462cc6-catalog-content\") pod \"community-operators-ls52t\" (UID: \"4cce9715-2736-4de9-978c-355b04462cc6\") " pod="openshift-marketplace/community-operators-ls52t"
Jan 05 20:30:51 crc kubenswrapper[4754]: I0105 20:30:51.389485 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb989\" (UniqueName: \"kubernetes.io/projected/4cce9715-2736-4de9-978c-355b04462cc6-kube-api-access-tb989\") pod \"community-operators-ls52t\" (UID: \"4cce9715-2736-4de9-978c-355b04462cc6\") " pod="openshift-marketplace/community-operators-ls52t"
Jan 05 20:30:51 crc kubenswrapper[4754]: I0105 20:30:51.390637 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cce9715-2736-4de9-978c-355b04462cc6-utilities\") pod \"community-operators-ls52t\" (UID: \"4cce9715-2736-4de9-978c-355b04462cc6\") " pod="openshift-marketplace/community-operators-ls52t"
Jan 05 20:30:51 crc kubenswrapper[4754]: I0105 20:30:51.390972 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cce9715-2736-4de9-978c-355b04462cc6-catalog-content\") pod \"community-operators-ls52t\" (UID: \"4cce9715-2736-4de9-978c-355b04462cc6\") " pod="openshift-marketplace/community-operators-ls52t"
Jan 05 20:30:51 crc kubenswrapper[4754]: I0105 20:30:51.430783 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb989\" (UniqueName: \"kubernetes.io/projected/4cce9715-2736-4de9-978c-355b04462cc6-kube-api-access-tb989\") pod \"community-operators-ls52t\" (UID: \"4cce9715-2736-4de9-978c-355b04462cc6\") " pod="openshift-marketplace/community-operators-ls52t"
Jan 05 20:30:51 crc kubenswrapper[4754]: I0105 20:30:51.514789 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ls52t"
Jan 05 20:30:51 crc kubenswrapper[4754]: I0105 20:30:51.603465 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="215c4148-c55a-49d8-8b2e-301cc8912519" path="/var/lib/kubelet/pods/215c4148-c55a-49d8-8b2e-301cc8912519/volumes"
Jan 05 20:30:51 crc kubenswrapper[4754]: I0105 20:30:51.604010 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb5f31f4-24b0-4840-8a8e-04ee35cea4ed" path="/var/lib/kubelet/pods/fb5f31f4-24b0-4840-8a8e-04ee35cea4ed/volumes"
Jan 05 20:30:51 crc kubenswrapper[4754]: I0105 20:30:51.893909 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-t2948"]
Jan 05 20:30:51 crc kubenswrapper[4754]: I0105 20:30:51.895582 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-t2948"
Jan 05 20:30:51 crc kubenswrapper[4754]: I0105 20:30:51.908922 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-6fb0-account-create-update-9z9ll"]
Jan 05 20:30:51 crc kubenswrapper[4754]: I0105 20:30:51.913711 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-6fb0-account-create-update-9z9ll"
Jan 05 20:30:51 crc kubenswrapper[4754]: I0105 20:30:51.915755 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret"
Jan 05 20:30:51 crc kubenswrapper[4754]: I0105 20:30:51.920356 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-t2948"]
Jan 05 20:30:51 crc kubenswrapper[4754]: I0105 20:30:51.932833 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-6fb0-account-create-update-9z9ll"]
Jan 05 20:30:52 crc kubenswrapper[4754]: I0105 20:30:52.003989 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95650db6-2a7d-4c11-985e-4eae13a8cbaa-operator-scripts\") pod \"aodh-6fb0-account-create-update-9z9ll\" (UID: \"95650db6-2a7d-4c11-985e-4eae13a8cbaa\") " pod="openstack/aodh-6fb0-account-create-update-9z9ll"
Jan 05 20:30:52 crc kubenswrapper[4754]: I0105 20:30:52.004068 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvbtg\" (UniqueName: \"kubernetes.io/projected/95650db6-2a7d-4c11-985e-4eae13a8cbaa-kube-api-access-lvbtg\") pod \"aodh-6fb0-account-create-update-9z9ll\" (UID: \"95650db6-2a7d-4c11-985e-4eae13a8cbaa\") " pod="openstack/aodh-6fb0-account-create-update-9z9ll"
Jan 05 20:30:52 crc kubenswrapper[4754]: I0105 20:30:52.004386 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2863d98c-728e-45a8-8137-50decec5ac8b-operator-scripts\") pod \"aodh-db-create-t2948\" (UID: \"2863d98c-728e-45a8-8137-50decec5ac8b\") " pod="openstack/aodh-db-create-t2948"
Jan 05 20:30:52 crc kubenswrapper[4754]: I0105 20:30:52.004668 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ggld\" (UniqueName: \"kubernetes.io/projected/2863d98c-728e-45a8-8137-50decec5ac8b-kube-api-access-6ggld\") pod \"aodh-db-create-t2948\" (UID: \"2863d98c-728e-45a8-8137-50decec5ac8b\") " pod="openstack/aodh-db-create-t2948"
Jan 05 20:30:52 crc kubenswrapper[4754]: I0105 20:30:52.107793 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2863d98c-728e-45a8-8137-50decec5ac8b-operator-scripts\") pod \"aodh-db-create-t2948\" (UID: \"2863d98c-728e-45a8-8137-50decec5ac8b\") " pod="openstack/aodh-db-create-t2948"
Jan 05 20:30:52 crc kubenswrapper[4754]: I0105 20:30:52.107900 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ggld\" (UniqueName: \"kubernetes.io/projected/2863d98c-728e-45a8-8137-50decec5ac8b-kube-api-access-6ggld\") pod \"aodh-db-create-t2948\" (UID: \"2863d98c-728e-45a8-8137-50decec5ac8b\") " pod="openstack/aodh-db-create-t2948"
Jan 05 20:30:52 crc kubenswrapper[4754]: I0105 20:30:52.107982 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95650db6-2a7d-4c11-985e-4eae13a8cbaa-operator-scripts\") pod \"aodh-6fb0-account-create-update-9z9ll\" (UID: \"95650db6-2a7d-4c11-985e-4eae13a8cbaa\") " pod="openstack/aodh-6fb0-account-create-update-9z9ll"
Jan 05 20:30:52 crc kubenswrapper[4754]: I0105 20:30:52.108014 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvbtg\" (UniqueName: \"kubernetes.io/projected/95650db6-2a7d-4c11-985e-4eae13a8cbaa-kube-api-access-lvbtg\") pod \"aodh-6fb0-account-create-update-9z9ll\" (UID: \"95650db6-2a7d-4c11-985e-4eae13a8cbaa\") " pod="openstack/aodh-6fb0-account-create-update-9z9ll"
Jan 05 20:30:52 crc kubenswrapper[4754]: I0105 20:30:52.109069 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2863d98c-728e-45a8-8137-50decec5ac8b-operator-scripts\") pod \"aodh-db-create-t2948\" (UID: \"2863d98c-728e-45a8-8137-50decec5ac8b\") " pod="openstack/aodh-db-create-t2948"
Jan 05 20:30:52 crc kubenswrapper[4754]: I0105 20:30:52.109874 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95650db6-2a7d-4c11-985e-4eae13a8cbaa-operator-scripts\") pod \"aodh-6fb0-account-create-update-9z9ll\" (UID: \"95650db6-2a7d-4c11-985e-4eae13a8cbaa\") " pod="openstack/aodh-6fb0-account-create-update-9z9ll"
Jan 05 20:30:52 crc kubenswrapper[4754]: I0105 20:30:52.128827 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ggld\" (UniqueName: \"kubernetes.io/projected/2863d98c-728e-45a8-8137-50decec5ac8b-kube-api-access-6ggld\") pod \"aodh-db-create-t2948\" (UID: \"2863d98c-728e-45a8-8137-50decec5ac8b\") " pod="openstack/aodh-db-create-t2948"
Jan 05 20:30:52 crc kubenswrapper[4754]: I0105 20:30:52.140925 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvbtg\" (UniqueName: \"kubernetes.io/projected/95650db6-2a7d-4c11-985e-4eae13a8cbaa-kube-api-access-lvbtg\") pod \"aodh-6fb0-account-create-update-9z9ll\" (UID: \"95650db6-2a7d-4c11-985e-4eae13a8cbaa\") " pod="openstack/aodh-6fb0-account-create-update-9z9ll"
Jan 05 20:30:52 crc kubenswrapper[4754]: I0105 20:30:52.213649 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-t2948"
Jan 05 20:30:52 crc kubenswrapper[4754]: I0105 20:30:52.251285 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-6fb0-account-create-update-9z9ll"
Jan 05 20:30:52 crc kubenswrapper[4754]: I0105 20:30:52.413546 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 05 20:30:52 crc kubenswrapper[4754]: I0105 20:30:52.413808 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a883f418-09dd-46df-9bcd-ec4cae97b846" containerName="ceilometer-central-agent" containerID="cri-o://54eb419bb2f13741b897510cb5a3e434ded04e4fa83a427d4a1d6dfd6c3c9ad9" gracePeriod=30
Jan 05 20:30:52 crc kubenswrapper[4754]: I0105 20:30:52.414307 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a883f418-09dd-46df-9bcd-ec4cae97b846" containerName="proxy-httpd" containerID="cri-o://b4769049bfe422b9346c6250e2461c87f02bbdefd459f8fa528769dc40445c81" gracePeriod=30
Jan 05 20:30:52 crc kubenswrapper[4754]: I0105 20:30:52.414357 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a883f418-09dd-46df-9bcd-ec4cae97b846" containerName="sg-core" containerID="cri-o://1685407605c5cb5dfb16712412bd3f24797ed058587d314bda057731c7306a87" gracePeriod=30
Jan 05 20:30:52 crc kubenswrapper[4754]: I0105 20:30:52.414394 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a883f418-09dd-46df-9bcd-ec4cae97b846" containerName="ceilometer-notification-agent" containerID="cri-o://fe1b9f16e609a047bc992cb6663dc45ee0488256363ce255fe0bd7799ba290c1" gracePeriod=30
Jan 05 20:30:52 crc kubenswrapper[4754]: I0105 20:30:52.650049 4754 generic.go:334] "Generic (PLEG): container finished" podID="a883f418-09dd-46df-9bcd-ec4cae97b846" containerID="1685407605c5cb5dfb16712412bd3f24797ed058587d314bda057731c7306a87" exitCode=2
Jan 05 20:30:52 crc kubenswrapper[4754]: I0105 20:30:52.650112 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a883f418-09dd-46df-9bcd-ec4cae97b846","Type":"ContainerDied","Data":"1685407605c5cb5dfb16712412bd3f24797ed058587d314bda057731c7306a87"}
Jan 05 20:30:53 crc kubenswrapper[4754]: I0105 20:30:53.263912 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-t2948"]
Jan 05 20:30:53 crc kubenswrapper[4754]: I0105 20:30:53.506641 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-6fb0-account-create-update-9z9ll"]
Jan 05 20:30:53 crc kubenswrapper[4754]: I0105 20:30:53.555150 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ls52t"]
Jan 05 20:30:53 crc kubenswrapper[4754]: I0105 20:30:53.702931 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3e9d5917-d36c-4f58-8154-787b4a799e88","Type":"ContainerStarted","Data":"a8a0c400c952ce14a4a25665b13f822c63ebd8894ea00abfd8c7fe28dc9372f7"}
Jan 05 20:30:53 crc kubenswrapper[4754]: I0105 20:30:53.703066 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="3e9d5917-d36c-4f58-8154-787b4a799e88" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://a8a0c400c952ce14a4a25665b13f822c63ebd8894ea00abfd8c7fe28dc9372f7" gracePeriod=30
Jan 05 20:30:53 crc kubenswrapper[4754]: I0105 20:30:53.715556 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-t2948" event={"ID":"2863d98c-728e-45a8-8137-50decec5ac8b","Type":"ContainerStarted","Data":"fa7b9e71ab44f9ef2670c810cbb2d61ee7e3a5775dbb56a3d6dc4a81ebdd8c62"}
Jan 05 20:30:53 crc kubenswrapper[4754]: I0105 20:30:53.715602 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-t2948" event={"ID":"2863d98c-728e-45a8-8137-50decec5ac8b","Type":"ContainerStarted","Data":"32c1d28d0794bcf71f3a797f1ffdf66afee25867a1cf8454998b4a1734398b15"}
Jan 05 20:30:53 crc kubenswrapper[4754]: I0105 20:30:53.741188 4754 generic.go:334] "Generic (PLEG): container finished" podID="a883f418-09dd-46df-9bcd-ec4cae97b846" containerID="b4769049bfe422b9346c6250e2461c87f02bbdefd459f8fa528769dc40445c81" exitCode=0
Jan 05 20:30:53 crc kubenswrapper[4754]: I0105 20:30:53.741426 4754 generic.go:334] "Generic (PLEG): container finished" podID="a883f418-09dd-46df-9bcd-ec4cae97b846" containerID="54eb419bb2f13741b897510cb5a3e434ded04e4fa83a427d4a1d6dfd6c3c9ad9" exitCode=0
Jan 05 20:30:53 crc kubenswrapper[4754]: I0105 20:30:53.741425 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a883f418-09dd-46df-9bcd-ec4cae97b846","Type":"ContainerDied","Data":"b4769049bfe422b9346c6250e2461c87f02bbdefd459f8fa528769dc40445c81"}
Jan 05 20:30:53 crc kubenswrapper[4754]: I0105 20:30:53.741639 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a883f418-09dd-46df-9bcd-ec4cae97b846","Type":"ContainerDied","Data":"54eb419bb2f13741b897510cb5a3e434ded04e4fa83a427d4a1d6dfd6c3c9ad9"}
Jan 05 20:30:53 crc kubenswrapper[4754]: I0105 20:30:53.760932 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ls52t" event={"ID":"4cce9715-2736-4de9-978c-355b04462cc6","Type":"ContainerStarted","Data":"e6642792859985d8c56d377f3c920ca0c662f56bf30c45d48e24d890bd975121"}
Jan 05 20:30:53 crc kubenswrapper[4754]: I0105 20:30:53.762516 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-6fb0-account-create-update-9z9ll" event={"ID":"95650db6-2a7d-4c11-985e-4eae13a8cbaa","Type":"ContainerStarted","Data":"b341a698d25a028020e9532b50b579bfbc50a96bc0d16e9cbb05ac4add98f69b"}
Jan 05 20:30:53 crc kubenswrapper[4754]: I0105 20:30:53.777826 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"07fcdc38-02c8-43b4-908f-dc32a017584c","Type":"ContainerStarted","Data":"9a7f15d5c585b26a881f865c85b92e4ce38e5cb2459dbfd41df640fe093ee4bb"}
Jan 05 20:30:53 crc kubenswrapper[4754]: I0105 20:30:53.790610 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ae0814e6-2a69-4ee4-979b-3f2b8ffdb941","Type":"ContainerStarted","Data":"a51c0e09e7dfc88105691972572a51e6c6edc3fcd02e34382b6f9b358bc5e39a"}
Jan 05 20:30:53 crc kubenswrapper[4754]: I0105 20:30:53.797988 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2ba5c5f3-19fb-4bd8-b457-976338fd6db4","Type":"ContainerStarted","Data":"3d3a74b47c2547315d04878e141c9c9adce046f72463723dfb9d691118d6a302"}
Jan 05 20:30:53 crc kubenswrapper[4754]: I0105 20:30:53.809717 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-create-t2948" podStartSLOduration=2.809700464 podStartE2EDuration="2.809700464s" podCreationTimestamp="2026-01-05 20:30:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:30:53.790350055 +0000 UTC m=+1540.499533919" watchObservedRunningTime="2026-01-05 20:30:53.809700464 +0000 UTC m=+1540.518884338"
Jan 05 20:30:53 crc kubenswrapper[4754]: I0105 20:30:53.816892 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.353647321 podStartE2EDuration="8.816876492s" podCreationTimestamp="2026-01-05 20:30:45 +0000 UTC" firstStartedPulling="2026-01-05 20:30:47.170216438 +0000 UTC m=+1533.879400312" lastFinishedPulling="2026-01-05 20:30:52.633445609 +0000 UTC m=+1539.342629483" observedRunningTime="2026-01-05 20:30:53.810585357 +0000 UTC m=+1540.519769231" watchObservedRunningTime="2026-01-05 20:30:53.816876492 +0000 UTC m=+1540.526060366"
Jan 05 20:30:53 crc kubenswrapper[4754]: I0105 20:30:53.834992 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.782575136 podStartE2EDuration="8.834972627s" podCreationTimestamp="2026-01-05 20:30:45 +0000 UTC" firstStartedPulling="2026-01-05 20:30:46.574596068 +0000 UTC m=+1533.283779942" lastFinishedPulling="2026-01-05 20:30:52.626993569 +0000 UTC m=+1539.336177433" observedRunningTime="2026-01-05 20:30:53.832305157 +0000 UTC m=+1540.541489031" watchObservedRunningTime="2026-01-05 20:30:53.834972627 +0000 UTC m=+1540.544156501"
Jan 05 20:30:54 crc kubenswrapper[4754]: I0105 20:30:54.831110 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ae0814e6-2a69-4ee4-979b-3f2b8ffdb941","Type":"ContainerStarted","Data":"f2fb85681b48673c21bcfb9ca248d98f63f063dbb1e1924f4d900b9eb784ca8d"}
Jan 05 20:30:54 crc kubenswrapper[4754]: I0105 20:30:54.831244 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ae0814e6-2a69-4ee4-979b-3f2b8ffdb941" containerName="nova-metadata-log" containerID="cri-o://a51c0e09e7dfc88105691972572a51e6c6edc3fcd02e34382b6f9b358bc5e39a" gracePeriod=30
Jan 05 20:30:54 crc kubenswrapper[4754]: I0105 20:30:54.831399 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ae0814e6-2a69-4ee4-979b-3f2b8ffdb941" containerName="nova-metadata-metadata" containerID="cri-o://f2fb85681b48673c21bcfb9ca248d98f63f063dbb1e1924f4d900b9eb784ca8d" gracePeriod=30
Jan 05 20:30:54 crc kubenswrapper[4754]: I0105 20:30:54.840002 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2ba5c5f3-19fb-4bd8-b457-976338fd6db4","Type":"ContainerStarted","Data":"95340ea5380f918ceaf437d7bfce1b3a100424e933dd9197baa29c658a2f8e7a"}
Jan 05 20:30:54 crc kubenswrapper[4754]: I0105 20:30:54.852352 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-t2948" event={"ID":"2863d98c-728e-45a8-8137-50decec5ac8b","Type":"ContainerDied","Data":"fa7b9e71ab44f9ef2670c810cbb2d61ee7e3a5775dbb56a3d6dc4a81ebdd8c62"}
Jan 05 20:30:54 crc kubenswrapper[4754]: I0105 20:30:54.853019 4754 generic.go:334] "Generic (PLEG): container finished" podID="2863d98c-728e-45a8-8137-50decec5ac8b" containerID="fa7b9e71ab44f9ef2670c810cbb2d61ee7e3a5775dbb56a3d6dc4a81ebdd8c62" exitCode=0
Jan 05 20:30:54 crc kubenswrapper[4754]: I0105 20:30:54.858216 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.165600661 podStartE2EDuration="9.858197935s" podCreationTimestamp="2026-01-05 20:30:45 +0000 UTC" firstStartedPulling="2026-01-05 20:30:47.008598914 +0000 UTC m=+1533.717782788" lastFinishedPulling="2026-01-05 20:30:52.701196188 +0000 UTC m=+1539.410380062" observedRunningTime="2026-01-05 20:30:54.849075795 +0000 UTC m=+1541.558259679" watchObservedRunningTime="2026-01-05 20:30:54.858197935 +0000 UTC m=+1541.567381809"
Jan 05 20:30:54 crc kubenswrapper[4754]: I0105 20:30:54.870818 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.741364512 podStartE2EDuration="9.870797286s" podCreationTimestamp="2026-01-05 20:30:45 +0000 UTC" firstStartedPulling="2026-01-05 20:30:46.569566826 +0000 UTC m=+1533.278750700" lastFinishedPulling="2026-01-05 20:30:52.6989996 +0000 UTC m=+1539.408183474" observedRunningTime="2026-01-05 20:30:54.869595164 +0000 UTC m=+1541.578779048" watchObservedRunningTime="2026-01-05 20:30:54.870797286 +0000 UTC m=+1541.579981170"
Jan 05 20:30:54 crc kubenswrapper[4754]: I0105 20:30:54.880532 4754 generic.go:334] "Generic (PLEG): container finished" podID="a883f418-09dd-46df-9bcd-ec4cae97b846" containerID="fe1b9f16e609a047bc992cb6663dc45ee0488256363ce255fe0bd7799ba290c1" exitCode=0
Jan 05 20:30:54 crc kubenswrapper[4754]: I0105 20:30:54.880628 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a883f418-09dd-46df-9bcd-ec4cae97b846","Type":"ContainerDied","Data":"fe1b9f16e609a047bc992cb6663dc45ee0488256363ce255fe0bd7799ba290c1"}
Jan 05 20:30:54 crc kubenswrapper[4754]: I0105 20:30:54.885724 4754 generic.go:334] "Generic (PLEG): container finished" podID="4cce9715-2736-4de9-978c-355b04462cc6" containerID="6eef879597f8fa2d2576ecc3985c3809fbe69fb20c678a0c80a5f1ffd9763d65" exitCode=0
Jan 05 20:30:54 crc kubenswrapper[4754]: I0105 20:30:54.885808 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ls52t" event={"ID":"4cce9715-2736-4de9-978c-355b04462cc6","Type":"ContainerDied","Data":"6eef879597f8fa2d2576ecc3985c3809fbe69fb20c678a0c80a5f1ffd9763d65"}
Jan 05 20:30:54 crc kubenswrapper[4754]: I0105 20:30:54.919760 4754 generic.go:334] "Generic (PLEG): container finished" podID="95650db6-2a7d-4c11-985e-4eae13a8cbaa" containerID="e774da7d7cac749a9db5c36d80b96ed17a763a8adc64e09d25e8a5748c390674" exitCode=0
Jan 05 20:30:54 crc kubenswrapper[4754]: I0105 20:30:54.919855 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-6fb0-account-create-update-9z9ll" event={"ID":"95650db6-2a7d-4c11-985e-4eae13a8cbaa","Type":"ContainerDied","Data":"e774da7d7cac749a9db5c36d80b96ed17a763a8adc64e09d25e8a5748c390674"}
Jan 05 20:30:55 crc kubenswrapper[4754]: I0105 20:30:55.327705 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 05 20:30:55 crc kubenswrapper[4754]: I0105 20:30:55.420150 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a883f418-09dd-46df-9bcd-ec4cae97b846-scripts\") pod \"a883f418-09dd-46df-9bcd-ec4cae97b846\" (UID: \"a883f418-09dd-46df-9bcd-ec4cae97b846\") "
Jan 05 20:30:55 crc kubenswrapper[4754]: I0105 20:30:55.420376 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 05 20:30:55 crc kubenswrapper[4754]: I0105 20:30:55.420824 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a883f418-09dd-46df-9bcd-ec4cae97b846-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a883f418-09dd-46df-9bcd-ec4cae97b846" (UID: "a883f418-09dd-46df-9bcd-ec4cae97b846"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 05 20:30:55 crc kubenswrapper[4754]: I0105 20:30:55.420432 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a883f418-09dd-46df-9bcd-ec4cae97b846-run-httpd\") pod \"a883f418-09dd-46df-9bcd-ec4cae97b846\" (UID: \"a883f418-09dd-46df-9bcd-ec4cae97b846\") "
Jan 05 20:30:55 crc kubenswrapper[4754]: I0105 20:30:55.421242 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a883f418-09dd-46df-9bcd-ec4cae97b846-log-httpd\") pod \"a883f418-09dd-46df-9bcd-ec4cae97b846\" (UID: \"a883f418-09dd-46df-9bcd-ec4cae97b846\") "
Jan 05 20:30:55 crc kubenswrapper[4754]: I0105 20:30:55.422925 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjn47\" (UniqueName: \"kubernetes.io/projected/a883f418-09dd-46df-9bcd-ec4cae97b846-kube-api-access-sjn47\") pod \"a883f418-09dd-46df-9bcd-ec4cae97b846\" (UID: \"a883f418-09dd-46df-9bcd-ec4cae97b846\") "
Jan 05 20:30:55 crc kubenswrapper[4754]: I0105 20:30:55.422660 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a883f418-09dd-46df-9bcd-ec4cae97b846-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a883f418-09dd-46df-9bcd-ec4cae97b846" (UID: "a883f418-09dd-46df-9bcd-ec4cae97b846"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 05 20:30:55 crc kubenswrapper[4754]: I0105 20:30:55.422970 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a883f418-09dd-46df-9bcd-ec4cae97b846-combined-ca-bundle\") pod \"a883f418-09dd-46df-9bcd-ec4cae97b846\" (UID: \"a883f418-09dd-46df-9bcd-ec4cae97b846\") "
Jan 05 20:30:55 crc kubenswrapper[4754]: I0105 20:30:55.423976 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a883f418-09dd-46df-9bcd-ec4cae97b846-config-data\") pod \"a883f418-09dd-46df-9bcd-ec4cae97b846\" (UID: \"a883f418-09dd-46df-9bcd-ec4cae97b846\") "
Jan 05 20:30:55 crc kubenswrapper[4754]: I0105 20:30:55.424075 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a883f418-09dd-46df-9bcd-ec4cae97b846-sg-core-conf-yaml\") pod \"a883f418-09dd-46df-9bcd-ec4cae97b846\" (UID: \"a883f418-09dd-46df-9bcd-ec4cae97b846\") "
Jan 05 20:30:55 crc kubenswrapper[4754]: I0105 20:30:55.425482 4754 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a883f418-09dd-46df-9bcd-ec4cae97b846-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 05 20:30:55 crc kubenswrapper[4754]: I0105 20:30:55.425513 4754 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a883f418-09dd-46df-9bcd-ec4cae97b846-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 05 20:30:55 crc kubenswrapper[4754]: I0105 20:30:55.426123 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a883f418-09dd-46df-9bcd-ec4cae97b846-scripts" (OuterVolumeSpecName: "scripts") pod "a883f418-09dd-46df-9bcd-ec4cae97b846" (UID: "a883f418-09dd-46df-9bcd-ec4cae97b846"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 20:30:55 crc kubenswrapper[4754]: I0105 20:30:55.426269 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a883f418-09dd-46df-9bcd-ec4cae97b846-kube-api-access-sjn47" (OuterVolumeSpecName: "kube-api-access-sjn47") pod "a883f418-09dd-46df-9bcd-ec4cae97b846" (UID: "a883f418-09dd-46df-9bcd-ec4cae97b846"). InnerVolumeSpecName "kube-api-access-sjn47". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 20:30:55 crc kubenswrapper[4754]: I0105 20:30:55.514504 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a883f418-09dd-46df-9bcd-ec4cae97b846-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a883f418-09dd-46df-9bcd-ec4cae97b846" (UID: "a883f418-09dd-46df-9bcd-ec4cae97b846"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 20:30:55 crc kubenswrapper[4754]: I0105 20:30:55.526647 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae0814e6-2a69-4ee4-979b-3f2b8ffdb941-combined-ca-bundle\") pod \"ae0814e6-2a69-4ee4-979b-3f2b8ffdb941\" (UID: \"ae0814e6-2a69-4ee4-979b-3f2b8ffdb941\") "
Jan 05 20:30:55 crc kubenswrapper[4754]: I0105 20:30:55.526690 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbc64\" (UniqueName: \"kubernetes.io/projected/ae0814e6-2a69-4ee4-979b-3f2b8ffdb941-kube-api-access-hbc64\") pod \"ae0814e6-2a69-4ee4-979b-3f2b8ffdb941\" (UID: \"ae0814e6-2a69-4ee4-979b-3f2b8ffdb941\") "
Jan 05 20:30:55 crc kubenswrapper[4754]: I0105 20:30:55.526712 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae0814e6-2a69-4ee4-979b-3f2b8ffdb941-config-data\") pod \"ae0814e6-2a69-4ee4-979b-3f2b8ffdb941\" (UID: \"ae0814e6-2a69-4ee4-979b-3f2b8ffdb941\") "
Jan 05 20:30:55 crc kubenswrapper[4754]: I0105 20:30:55.526784 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae0814e6-2a69-4ee4-979b-3f2b8ffdb941-logs\") pod \"ae0814e6-2a69-4ee4-979b-3f2b8ffdb941\" (UID: \"ae0814e6-2a69-4ee4-979b-3f2b8ffdb941\") "
Jan 05 20:30:55 crc kubenswrapper[4754]: I0105 20:30:55.527346 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjn47\" (UniqueName: \"kubernetes.io/projected/a883f418-09dd-46df-9bcd-ec4cae97b846-kube-api-access-sjn47\") on node \"crc\" DevicePath \"\""
Jan 05 20:30:55 crc kubenswrapper[4754]: I0105 20:30:55.527365 4754 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a883f418-09dd-46df-9bcd-ec4cae97b846-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 05 20:30:55 crc kubenswrapper[4754]: I0105 20:30:55.527377 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a883f418-09dd-46df-9bcd-ec4cae97b846-scripts\") on node \"crc\" DevicePath \"\""
Jan 05 20:30:55 crc kubenswrapper[4754]: I0105 20:30:55.527673 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae0814e6-2a69-4ee4-979b-3f2b8ffdb941-logs" (OuterVolumeSpecName: "logs") pod "ae0814e6-2a69-4ee4-979b-3f2b8ffdb941" (UID: "ae0814e6-2a69-4ee4-979b-3f2b8ffdb941"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 05 20:30:55 crc kubenswrapper[4754]: I0105 20:30:55.530838 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae0814e6-2a69-4ee4-979b-3f2b8ffdb941-kube-api-access-hbc64" (OuterVolumeSpecName: "kube-api-access-hbc64") pod "ae0814e6-2a69-4ee4-979b-3f2b8ffdb941" (UID: "ae0814e6-2a69-4ee4-979b-3f2b8ffdb941"). InnerVolumeSpecName "kube-api-access-hbc64". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 20:30:55 crc kubenswrapper[4754]: I0105 20:30:55.532276 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a883f418-09dd-46df-9bcd-ec4cae97b846-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a883f418-09dd-46df-9bcd-ec4cae97b846" (UID: "a883f418-09dd-46df-9bcd-ec4cae97b846"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 20:30:55 crc kubenswrapper[4754]: I0105 20:30:55.558465 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae0814e6-2a69-4ee4-979b-3f2b8ffdb941-config-data" (OuterVolumeSpecName: "config-data") pod "ae0814e6-2a69-4ee4-979b-3f2b8ffdb941" (UID: "ae0814e6-2a69-4ee4-979b-3f2b8ffdb941"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 20:30:55 crc kubenswrapper[4754]: I0105 20:30:55.562064 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae0814e6-2a69-4ee4-979b-3f2b8ffdb941-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae0814e6-2a69-4ee4-979b-3f2b8ffdb941" (UID: "ae0814e6-2a69-4ee4-979b-3f2b8ffdb941"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 20:30:55 crc kubenswrapper[4754]: I0105 20:30:55.576981 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a883f418-09dd-46df-9bcd-ec4cae97b846-config-data" (OuterVolumeSpecName: "config-data") pod "a883f418-09dd-46df-9bcd-ec4cae97b846" (UID: "a883f418-09dd-46df-9bcd-ec4cae97b846"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 20:30:55 crc kubenswrapper[4754]: I0105 20:30:55.635048 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a883f418-09dd-46df-9bcd-ec4cae97b846-config-data\") on node \"crc\" DevicePath \"\""
Jan 05 20:30:55 crc kubenswrapper[4754]: I0105 20:30:55.635097 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae0814e6-2a69-4ee4-979b-3f2b8ffdb941-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 05 20:30:55 crc kubenswrapper[4754]: I0105 20:30:55.635117 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbc64\" (UniqueName: \"kubernetes.io/projected/ae0814e6-2a69-4ee4-979b-3f2b8ffdb941-kube-api-access-hbc64\") on node \"crc\" DevicePath \"\""
Jan 05 20:30:55 crc kubenswrapper[4754]: I0105 20:30:55.635139 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae0814e6-2a69-4ee4-979b-3f2b8ffdb941-config-data\") on node \"crc\" DevicePath \"\""
Jan 05 20:30:55 crc kubenswrapper[4754]: I0105 20:30:55.635151 4754 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae0814e6-2a69-4ee4-979b-3f2b8ffdb941-logs\") on node \"crc\" DevicePath \"\""
Jan 05 20:30:55 crc kubenswrapper[4754]: I0105 20:30:55.635166 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a883f418-09dd-46df-9bcd-ec4cae97b846-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 05 20:30:55 crc kubenswrapper[4754]: I0105 20:30:55.750241 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 05 20:30:55 crc kubenswrapper[4754]: I0105 20:30:55.750281 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 05 20:30:55 crc kubenswrapper[4754]: I0105 20:30:55.752180 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 05 20:30:55 crc kubenswrapper[4754]: I0105 20:30:55.752218 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 05 20:30:55 crc kubenswrapper[4754]: I0105 20:30:55.810430 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 05 20:30:55 crc kubenswrapper[4754]: I0105 20:30:55.941670 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a883f418-09dd-46df-9bcd-ec4cae97b846","Type":"ContainerDied","Data":"ff77a8b2e932245b540fd9ede7abced20ede6161d41723b4e23cd05ac2769e09"}
Jan 05 20:30:55 crc kubenswrapper[4754]: I0105 20:30:55.942327 4754 scope.go:117] "RemoveContainer" containerID="b4769049bfe422b9346c6250e2461c87f02bbdefd459f8fa528769dc40445c81"
Jan 05 20:30:55 crc kubenswrapper[4754]: I0105 20:30:55.942003 4754 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/ceilometer-0" Jan 05 20:30:55 crc kubenswrapper[4754]: I0105 20:30:55.947218 4754 generic.go:334] "Generic (PLEG): container finished" podID="ae0814e6-2a69-4ee4-979b-3f2b8ffdb941" containerID="f2fb85681b48673c21bcfb9ca248d98f63f063dbb1e1924f4d900b9eb784ca8d" exitCode=0 Jan 05 20:30:55 crc kubenswrapper[4754]: I0105 20:30:55.947247 4754 generic.go:334] "Generic (PLEG): container finished" podID="ae0814e6-2a69-4ee4-979b-3f2b8ffdb941" containerID="a51c0e09e7dfc88105691972572a51e6c6edc3fcd02e34382b6f9b358bc5e39a" exitCode=143 Jan 05 20:30:55 crc kubenswrapper[4754]: I0105 20:30:55.947760 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ae0814e6-2a69-4ee4-979b-3f2b8ffdb941","Type":"ContainerDied","Data":"f2fb85681b48673c21bcfb9ca248d98f63f063dbb1e1924f4d900b9eb784ca8d"} Jan 05 20:30:55 crc kubenswrapper[4754]: I0105 20:30:55.947818 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ae0814e6-2a69-4ee4-979b-3f2b8ffdb941","Type":"ContainerDied","Data":"a51c0e09e7dfc88105691972572a51e6c6edc3fcd02e34382b6f9b358bc5e39a"} Jan 05 20:30:55 crc kubenswrapper[4754]: I0105 20:30:55.947835 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ae0814e6-2a69-4ee4-979b-3f2b8ffdb941","Type":"ContainerDied","Data":"35fc553c93c9aad5dec75ad84d0432362c8bb1e7e289586db65997c22fcdd310"} Jan 05 20:30:55 crc kubenswrapper[4754]: I0105 20:30:55.949334 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 05 20:30:55 crc kubenswrapper[4754]: I0105 20:30:55.988443 4754 scope.go:117] "RemoveContainer" containerID="1685407605c5cb5dfb16712412bd3f24797ed058587d314bda057731c7306a87" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.000670 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.026133 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.039531 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.062969 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.072743 4754 scope.go:117] "RemoveContainer" containerID="fe1b9f16e609a047bc992cb6663dc45ee0488256363ce255fe0bd7799ba290c1" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.083353 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.098091 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.109153 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7877d89589-b45w9" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.113999 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 05 20:30:56 crc kubenswrapper[4754]: E0105 20:30:56.114970 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a883f418-09dd-46df-9bcd-ec4cae97b846" containerName="ceilometer-notification-agent" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.115007 4754 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="a883f418-09dd-46df-9bcd-ec4cae97b846" containerName="ceilometer-notification-agent" Jan 05 20:30:56 crc kubenswrapper[4754]: E0105 20:30:56.115071 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a883f418-09dd-46df-9bcd-ec4cae97b846" containerName="ceilometer-central-agent" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.115082 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="a883f418-09dd-46df-9bcd-ec4cae97b846" containerName="ceilometer-central-agent" Jan 05 20:30:56 crc kubenswrapper[4754]: E0105 20:30:56.115118 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a883f418-09dd-46df-9bcd-ec4cae97b846" containerName="sg-core" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.115128 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="a883f418-09dd-46df-9bcd-ec4cae97b846" containerName="sg-core" Jan 05 20:30:56 crc kubenswrapper[4754]: E0105 20:30:56.115146 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a883f418-09dd-46df-9bcd-ec4cae97b846" containerName="proxy-httpd" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.115154 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="a883f418-09dd-46df-9bcd-ec4cae97b846" containerName="proxy-httpd" Jan 05 20:30:56 crc kubenswrapper[4754]: E0105 20:30:56.115168 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae0814e6-2a69-4ee4-979b-3f2b8ffdb941" containerName="nova-metadata-metadata" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.115176 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae0814e6-2a69-4ee4-979b-3f2b8ffdb941" containerName="nova-metadata-metadata" Jan 05 20:30:56 crc kubenswrapper[4754]: E0105 20:30:56.115192 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae0814e6-2a69-4ee4-979b-3f2b8ffdb941" containerName="nova-metadata-log" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.115200 4754 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="ae0814e6-2a69-4ee4-979b-3f2b8ffdb941" containerName="nova-metadata-log" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.115620 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="a883f418-09dd-46df-9bcd-ec4cae97b846" containerName="sg-core" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.115646 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae0814e6-2a69-4ee4-979b-3f2b8ffdb941" containerName="nova-metadata-log" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.115665 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="a883f418-09dd-46df-9bcd-ec4cae97b846" containerName="proxy-httpd" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.115678 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae0814e6-2a69-4ee4-979b-3f2b8ffdb941" containerName="nova-metadata-metadata" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.115692 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="a883f418-09dd-46df-9bcd-ec4cae97b846" containerName="ceilometer-notification-agent" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.115716 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="a883f418-09dd-46df-9bcd-ec4cae97b846" containerName="ceilometer-central-agent" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.123095 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.132714 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.133084 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.144175 4754 scope.go:117] "RemoveContainer" containerID="54eb419bb2f13741b897510cb5a3e434ded04e4fa83a427d4a1d6dfd6c3c9ad9" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.160203 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.179347 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.182936 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.187789 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.188028 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.199456 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.209062 4754 scope.go:117] "RemoveContainer" containerID="f2fb85681b48673c21bcfb9ca248d98f63f063dbb1e1924f4d900b9eb784ca8d" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.250960 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c58db7f-dc9d-4cd0-adf9-21feff78070e-logs\") pod \"nova-metadata-0\" (UID: 
\"4c58db7f-dc9d-4cd0-adf9-21feff78070e\") " pod="openstack/nova-metadata-0" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.251034 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c58db7f-dc9d-4cd0-adf9-21feff78070e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4c58db7f-dc9d-4cd0-adf9-21feff78070e\") " pod="openstack/nova-metadata-0" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.251130 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk5m4\" (UniqueName: \"kubernetes.io/projected/4c58db7f-dc9d-4cd0-adf9-21feff78070e-kube-api-access-gk5m4\") pod \"nova-metadata-0\" (UID: \"4c58db7f-dc9d-4cd0-adf9-21feff78070e\") " pod="openstack/nova-metadata-0" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.251269 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c58db7f-dc9d-4cd0-adf9-21feff78070e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4c58db7f-dc9d-4cd0-adf9-21feff78070e\") " pod="openstack/nova-metadata-0" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.251284 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c58db7f-dc9d-4cd0-adf9-21feff78070e-config-data\") pod \"nova-metadata-0\" (UID: \"4c58db7f-dc9d-4cd0-adf9-21feff78070e\") " pod="openstack/nova-metadata-0" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.322375 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-p7xv4"] Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.322804 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d978555f9-p7xv4" 
podUID="42a612ab-0883-4f35-b15e-f937f1f2de36" containerName="dnsmasq-dns" containerID="cri-o://d3d901905e12d0c408a25d66c004157974c39268e85b0f27ab79f0a6f2fa9446" gracePeriod=10 Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.326468 4754 scope.go:117] "RemoveContainer" containerID="a51c0e09e7dfc88105691972572a51e6c6edc3fcd02e34382b6f9b358bc5e39a" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.352812 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c58db7f-dc9d-4cd0-adf9-21feff78070e-logs\") pod \"nova-metadata-0\" (UID: \"4c58db7f-dc9d-4cd0-adf9-21feff78070e\") " pod="openstack/nova-metadata-0" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.352866 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1-config-data\") pod \"ceilometer-0\" (UID: \"5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1\") " pod="openstack/ceilometer-0" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.352917 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c58db7f-dc9d-4cd0-adf9-21feff78070e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4c58db7f-dc9d-4cd0-adf9-21feff78070e\") " pod="openstack/nova-metadata-0" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.352952 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1-log-httpd\") pod \"ceilometer-0\" (UID: \"5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1\") " pod="openstack/ceilometer-0" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.352988 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1\") " pod="openstack/ceilometer-0" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.353013 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gk5m4\" (UniqueName: \"kubernetes.io/projected/4c58db7f-dc9d-4cd0-adf9-21feff78070e-kube-api-access-gk5m4\") pod \"nova-metadata-0\" (UID: \"4c58db7f-dc9d-4cd0-adf9-21feff78070e\") " pod="openstack/nova-metadata-0" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.353073 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbxsb\" (UniqueName: \"kubernetes.io/projected/5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1-kube-api-access-zbxsb\") pod \"ceilometer-0\" (UID: \"5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1\") " pod="openstack/ceilometer-0" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.353122 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1-scripts\") pod \"ceilometer-0\" (UID: \"5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1\") " pod="openstack/ceilometer-0" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.353147 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1-run-httpd\") pod \"ceilometer-0\" (UID: \"5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1\") " pod="openstack/ceilometer-0" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.353171 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c58db7f-dc9d-4cd0-adf9-21feff78070e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: 
\"4c58db7f-dc9d-4cd0-adf9-21feff78070e\") " pod="openstack/nova-metadata-0" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.353190 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c58db7f-dc9d-4cd0-adf9-21feff78070e-config-data\") pod \"nova-metadata-0\" (UID: \"4c58db7f-dc9d-4cd0-adf9-21feff78070e\") " pod="openstack/nova-metadata-0" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.353234 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1\") " pod="openstack/ceilometer-0" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.353768 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c58db7f-dc9d-4cd0-adf9-21feff78070e-logs\") pod \"nova-metadata-0\" (UID: \"4c58db7f-dc9d-4cd0-adf9-21feff78070e\") " pod="openstack/nova-metadata-0" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.374002 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c58db7f-dc9d-4cd0-adf9-21feff78070e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4c58db7f-dc9d-4cd0-adf9-21feff78070e\") " pod="openstack/nova-metadata-0" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.374700 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gk5m4\" (UniqueName: \"kubernetes.io/projected/4c58db7f-dc9d-4cd0-adf9-21feff78070e-kube-api-access-gk5m4\") pod \"nova-metadata-0\" (UID: \"4c58db7f-dc9d-4cd0-adf9-21feff78070e\") " pod="openstack/nova-metadata-0" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.376859 4754 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c58db7f-dc9d-4cd0-adf9-21feff78070e-config-data\") pod \"nova-metadata-0\" (UID: \"4c58db7f-dc9d-4cd0-adf9-21feff78070e\") " pod="openstack/nova-metadata-0" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.377943 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c58db7f-dc9d-4cd0-adf9-21feff78070e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4c58db7f-dc9d-4cd0-adf9-21feff78070e\") " pod="openstack/nova-metadata-0" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.472891 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.489237 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1\") " pod="openstack/ceilometer-0" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.489814 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbxsb\" (UniqueName: \"kubernetes.io/projected/5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1-kube-api-access-zbxsb\") pod \"ceilometer-0\" (UID: \"5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1\") " pod="openstack/ceilometer-0" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.490022 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1-scripts\") pod \"ceilometer-0\" (UID: \"5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1\") " pod="openstack/ceilometer-0" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.490139 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1-run-httpd\") pod \"ceilometer-0\" (UID: \"5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1\") " pod="openstack/ceilometer-0" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.490351 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1\") " pod="openstack/ceilometer-0" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.491038 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1-config-data\") pod \"ceilometer-0\" (UID: \"5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1\") " pod="openstack/ceilometer-0" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.505711 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1-log-httpd\") pod \"ceilometer-0\" (UID: \"5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1\") " pod="openstack/ceilometer-0" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.506408 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1-log-httpd\") pod \"ceilometer-0\" (UID: \"5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1\") " pod="openstack/ceilometer-0" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.491828 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1-run-httpd\") pod \"ceilometer-0\" (UID: \"5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1\") " pod="openstack/ceilometer-0" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.531415 4754 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1-scripts\") pod \"ceilometer-0\" (UID: \"5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1\") " pod="openstack/ceilometer-0" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.532024 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1\") " pod="openstack/ceilometer-0" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.534580 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1\") " pod="openstack/ceilometer-0" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.534722 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1-config-data\") pod \"ceilometer-0\" (UID: \"5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1\") " pod="openstack/ceilometer-0" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.573162 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbxsb\" (UniqueName: \"kubernetes.io/projected/5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1-kube-api-access-zbxsb\") pod \"ceilometer-0\" (UID: \"5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1\") " pod="openstack/ceilometer-0" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.771802 4754 scope.go:117] "RemoveContainer" containerID="f2fb85681b48673c21bcfb9ca248d98f63f063dbb1e1924f4d900b9eb784ca8d" Jan 05 20:30:56 crc kubenswrapper[4754]: E0105 20:30:56.783960 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"f2fb85681b48673c21bcfb9ca248d98f63f063dbb1e1924f4d900b9eb784ca8d\": container with ID starting with f2fb85681b48673c21bcfb9ca248d98f63f063dbb1e1924f4d900b9eb784ca8d not found: ID does not exist" containerID="f2fb85681b48673c21bcfb9ca248d98f63f063dbb1e1924f4d900b9eb784ca8d" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.784323 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2fb85681b48673c21bcfb9ca248d98f63f063dbb1e1924f4d900b9eb784ca8d"} err="failed to get container status \"f2fb85681b48673c21bcfb9ca248d98f63f063dbb1e1924f4d900b9eb784ca8d\": rpc error: code = NotFound desc = could not find container \"f2fb85681b48673c21bcfb9ca248d98f63f063dbb1e1924f4d900b9eb784ca8d\": container with ID starting with f2fb85681b48673c21bcfb9ca248d98f63f063dbb1e1924f4d900b9eb784ca8d not found: ID does not exist" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.784352 4754 scope.go:117] "RemoveContainer" containerID="a51c0e09e7dfc88105691972572a51e6c6edc3fcd02e34382b6f9b358bc5e39a" Jan 05 20:30:56 crc kubenswrapper[4754]: E0105 20:30:56.786068 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a51c0e09e7dfc88105691972572a51e6c6edc3fcd02e34382b6f9b358bc5e39a\": container with ID starting with a51c0e09e7dfc88105691972572a51e6c6edc3fcd02e34382b6f9b358bc5e39a not found: ID does not exist" containerID="a51c0e09e7dfc88105691972572a51e6c6edc3fcd02e34382b6f9b358bc5e39a" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.786107 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a51c0e09e7dfc88105691972572a51e6c6edc3fcd02e34382b6f9b358bc5e39a"} err="failed to get container status \"a51c0e09e7dfc88105691972572a51e6c6edc3fcd02e34382b6f9b358bc5e39a\": rpc error: code = NotFound desc = could not find container 
\"a51c0e09e7dfc88105691972572a51e6c6edc3fcd02e34382b6f9b358bc5e39a\": container with ID starting with a51c0e09e7dfc88105691972572a51e6c6edc3fcd02e34382b6f9b358bc5e39a not found: ID does not exist" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.786126 4754 scope.go:117] "RemoveContainer" containerID="f2fb85681b48673c21bcfb9ca248d98f63f063dbb1e1924f4d900b9eb784ca8d" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.786592 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2fb85681b48673c21bcfb9ca248d98f63f063dbb1e1924f4d900b9eb784ca8d"} err="failed to get container status \"f2fb85681b48673c21bcfb9ca248d98f63f063dbb1e1924f4d900b9eb784ca8d\": rpc error: code = NotFound desc = could not find container \"f2fb85681b48673c21bcfb9ca248d98f63f063dbb1e1924f4d900b9eb784ca8d\": container with ID starting with f2fb85681b48673c21bcfb9ca248d98f63f063dbb1e1924f4d900b9eb784ca8d not found: ID does not exist" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.786642 4754 scope.go:117] "RemoveContainer" containerID="a51c0e09e7dfc88105691972572a51e6c6edc3fcd02e34382b6f9b358bc5e39a" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.786940 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a51c0e09e7dfc88105691972572a51e6c6edc3fcd02e34382b6f9b358bc5e39a"} err="failed to get container status \"a51c0e09e7dfc88105691972572a51e6c6edc3fcd02e34382b6f9b358bc5e39a\": rpc error: code = NotFound desc = could not find container \"a51c0e09e7dfc88105691972572a51e6c6edc3fcd02e34382b6f9b358bc5e39a\": container with ID starting with a51c0e09e7dfc88105691972572a51e6c6edc3fcd02e34382b6f9b358bc5e39a not found: ID does not exist" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.814575 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.838544 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2ba5c5f3-19fb-4bd8-b457-976338fd6db4" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.237:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.839074 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2ba5c5f3-19fb-4bd8-b457-976338fd6db4" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.237:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 20:30:56 crc kubenswrapper[4754]: I0105 20:30:56.910867 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-6fb0-account-create-update-9z9ll" Jan 05 20:30:57 crc kubenswrapper[4754]: I0105 20:30:57.022302 4754 generic.go:334] "Generic (PLEG): container finished" podID="42a612ab-0883-4f35-b15e-f937f1f2de36" containerID="d3d901905e12d0c408a25d66c004157974c39268e85b0f27ab79f0a6f2fa9446" exitCode=0 Jan 05 20:30:57 crc kubenswrapper[4754]: I0105 20:30:57.022593 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-p7xv4" event={"ID":"42a612ab-0883-4f35-b15e-f937f1f2de36","Type":"ContainerDied","Data":"d3d901905e12d0c408a25d66c004157974c39268e85b0f27ab79f0a6f2fa9446"} Jan 05 20:30:57 crc kubenswrapper[4754]: I0105 20:30:57.037224 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95650db6-2a7d-4c11-985e-4eae13a8cbaa-operator-scripts\") pod \"95650db6-2a7d-4c11-985e-4eae13a8cbaa\" (UID: \"95650db6-2a7d-4c11-985e-4eae13a8cbaa\") " Jan 05 20:30:57 crc kubenswrapper[4754]: I0105 20:30:57.037475 4754 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-lvbtg\" (UniqueName: \"kubernetes.io/projected/95650db6-2a7d-4c11-985e-4eae13a8cbaa-kube-api-access-lvbtg\") pod \"95650db6-2a7d-4c11-985e-4eae13a8cbaa\" (UID: \"95650db6-2a7d-4c11-985e-4eae13a8cbaa\") " Jan 05 20:30:57 crc kubenswrapper[4754]: I0105 20:30:57.039628 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95650db6-2a7d-4c11-985e-4eae13a8cbaa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "95650db6-2a7d-4c11-985e-4eae13a8cbaa" (UID: "95650db6-2a7d-4c11-985e-4eae13a8cbaa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:30:57 crc kubenswrapper[4754]: I0105 20:30:57.058899 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95650db6-2a7d-4c11-985e-4eae13a8cbaa-kube-api-access-lvbtg" (OuterVolumeSpecName: "kube-api-access-lvbtg") pod "95650db6-2a7d-4c11-985e-4eae13a8cbaa" (UID: "95650db6-2a7d-4c11-985e-4eae13a8cbaa"). InnerVolumeSpecName "kube-api-access-lvbtg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:30:57 crc kubenswrapper[4754]: I0105 20:30:57.075833 4754 generic.go:334] "Generic (PLEG): container finished" podID="4cce9715-2736-4de9-978c-355b04462cc6" containerID="96a3f2221617ed0da38017e2d8a954edaf40c2ece75ca943fcfff287f70f9bb5" exitCode=0 Jan 05 20:30:57 crc kubenswrapper[4754]: I0105 20:30:57.075896 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ls52t" event={"ID":"4cce9715-2736-4de9-978c-355b04462cc6","Type":"ContainerDied","Data":"96a3f2221617ed0da38017e2d8a954edaf40c2ece75ca943fcfff287f70f9bb5"} Jan 05 20:30:57 crc kubenswrapper[4754]: I0105 20:30:57.120350 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-6fb0-account-create-update-9z9ll" Jan 05 20:30:57 crc kubenswrapper[4754]: I0105 20:30:57.122057 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-6fb0-account-create-update-9z9ll" event={"ID":"95650db6-2a7d-4c11-985e-4eae13a8cbaa","Type":"ContainerDied","Data":"b341a698d25a028020e9532b50b579bfbc50a96bc0d16e9cbb05ac4add98f69b"} Jan 05 20:30:57 crc kubenswrapper[4754]: I0105 20:30:57.122107 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b341a698d25a028020e9532b50b579bfbc50a96bc0d16e9cbb05ac4add98f69b" Jan 05 20:30:57 crc kubenswrapper[4754]: I0105 20:30:57.155838 4754 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95650db6-2a7d-4c11-985e-4eae13a8cbaa-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 20:30:57 crc kubenswrapper[4754]: I0105 20:30:57.155861 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvbtg\" (UniqueName: \"kubernetes.io/projected/95650db6-2a7d-4c11-985e-4eae13a8cbaa-kube-api-access-lvbtg\") on node \"crc\" DevicePath \"\"" Jan 05 20:30:57 crc kubenswrapper[4754]: I0105 20:30:57.175002 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-t2948" Jan 05 20:30:57 crc kubenswrapper[4754]: I0105 20:30:57.217144 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d978555f9-p7xv4" Jan 05 20:30:57 crc kubenswrapper[4754]: I0105 20:30:57.258792 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2863d98c-728e-45a8-8137-50decec5ac8b-operator-scripts\") pod \"2863d98c-728e-45a8-8137-50decec5ac8b\" (UID: \"2863d98c-728e-45a8-8137-50decec5ac8b\") " Jan 05 20:30:57 crc kubenswrapper[4754]: I0105 20:30:57.258922 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ggld\" (UniqueName: \"kubernetes.io/projected/2863d98c-728e-45a8-8137-50decec5ac8b-kube-api-access-6ggld\") pod \"2863d98c-728e-45a8-8137-50decec5ac8b\" (UID: \"2863d98c-728e-45a8-8137-50decec5ac8b\") " Jan 05 20:30:57 crc kubenswrapper[4754]: I0105 20:30:57.259699 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2863d98c-728e-45a8-8137-50decec5ac8b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2863d98c-728e-45a8-8137-50decec5ac8b" (UID: "2863d98c-728e-45a8-8137-50decec5ac8b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:30:57 crc kubenswrapper[4754]: I0105 20:30:57.261040 4754 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2863d98c-728e-45a8-8137-50decec5ac8b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 20:30:57 crc kubenswrapper[4754]: I0105 20:30:57.263847 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2863d98c-728e-45a8-8137-50decec5ac8b-kube-api-access-6ggld" (OuterVolumeSpecName: "kube-api-access-6ggld") pod "2863d98c-728e-45a8-8137-50decec5ac8b" (UID: "2863d98c-728e-45a8-8137-50decec5ac8b"). InnerVolumeSpecName "kube-api-access-6ggld". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:30:57 crc kubenswrapper[4754]: I0105 20:30:57.362315 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42a612ab-0883-4f35-b15e-f937f1f2de36-ovsdbserver-sb\") pod \"42a612ab-0883-4f35-b15e-f937f1f2de36\" (UID: \"42a612ab-0883-4f35-b15e-f937f1f2de36\") " Jan 05 20:30:57 crc kubenswrapper[4754]: I0105 20:30:57.362431 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqqsm\" (UniqueName: \"kubernetes.io/projected/42a612ab-0883-4f35-b15e-f937f1f2de36-kube-api-access-zqqsm\") pod \"42a612ab-0883-4f35-b15e-f937f1f2de36\" (UID: \"42a612ab-0883-4f35-b15e-f937f1f2de36\") " Jan 05 20:30:57 crc kubenswrapper[4754]: I0105 20:30:57.362508 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/42a612ab-0883-4f35-b15e-f937f1f2de36-dns-swift-storage-0\") pod \"42a612ab-0883-4f35-b15e-f937f1f2de36\" (UID: \"42a612ab-0883-4f35-b15e-f937f1f2de36\") " Jan 05 20:30:57 crc kubenswrapper[4754]: I0105 20:30:57.362562 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42a612ab-0883-4f35-b15e-f937f1f2de36-config\") pod \"42a612ab-0883-4f35-b15e-f937f1f2de36\" (UID: \"42a612ab-0883-4f35-b15e-f937f1f2de36\") " Jan 05 20:30:57 crc kubenswrapper[4754]: I0105 20:30:57.362732 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42a612ab-0883-4f35-b15e-f937f1f2de36-ovsdbserver-nb\") pod \"42a612ab-0883-4f35-b15e-f937f1f2de36\" (UID: \"42a612ab-0883-4f35-b15e-f937f1f2de36\") " Jan 05 20:30:57 crc kubenswrapper[4754]: I0105 20:30:57.362769 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/42a612ab-0883-4f35-b15e-f937f1f2de36-dns-svc\") pod \"42a612ab-0883-4f35-b15e-f937f1f2de36\" (UID: \"42a612ab-0883-4f35-b15e-f937f1f2de36\") " Jan 05 20:30:57 crc kubenswrapper[4754]: I0105 20:30:57.363279 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ggld\" (UniqueName: \"kubernetes.io/projected/2863d98c-728e-45a8-8137-50decec5ac8b-kube-api-access-6ggld\") on node \"crc\" DevicePath \"\"" Jan 05 20:30:57 crc kubenswrapper[4754]: I0105 20:30:57.379578 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a612ab-0883-4f35-b15e-f937f1f2de36-kube-api-access-zqqsm" (OuterVolumeSpecName: "kube-api-access-zqqsm") pod "42a612ab-0883-4f35-b15e-f937f1f2de36" (UID: "42a612ab-0883-4f35-b15e-f937f1f2de36"). InnerVolumeSpecName "kube-api-access-zqqsm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:30:57 crc kubenswrapper[4754]: I0105 20:30:57.449409 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42a612ab-0883-4f35-b15e-f937f1f2de36-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "42a612ab-0883-4f35-b15e-f937f1f2de36" (UID: "42a612ab-0883-4f35-b15e-f937f1f2de36"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:30:57 crc kubenswrapper[4754]: I0105 20:30:57.459751 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42a612ab-0883-4f35-b15e-f937f1f2de36-config" (OuterVolumeSpecName: "config") pod "42a612ab-0883-4f35-b15e-f937f1f2de36" (UID: "42a612ab-0883-4f35-b15e-f937f1f2de36"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:30:57 crc kubenswrapper[4754]: I0105 20:30:57.465923 4754 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42a612ab-0883-4f35-b15e-f937f1f2de36-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 20:30:57 crc kubenswrapper[4754]: I0105 20:30:57.465953 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqqsm\" (UniqueName: \"kubernetes.io/projected/42a612ab-0883-4f35-b15e-f937f1f2de36-kube-api-access-zqqsm\") on node \"crc\" DevicePath \"\"" Jan 05 20:30:57 crc kubenswrapper[4754]: I0105 20:30:57.465986 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42a612ab-0883-4f35-b15e-f937f1f2de36-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:30:57 crc kubenswrapper[4754]: I0105 20:30:57.468805 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42a612ab-0883-4f35-b15e-f937f1f2de36-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "42a612ab-0883-4f35-b15e-f937f1f2de36" (UID: "42a612ab-0883-4f35-b15e-f937f1f2de36"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:30:57 crc kubenswrapper[4754]: I0105 20:30:57.484752 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42a612ab-0883-4f35-b15e-f937f1f2de36-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "42a612ab-0883-4f35-b15e-f937f1f2de36" (UID: "42a612ab-0883-4f35-b15e-f937f1f2de36"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:30:57 crc kubenswrapper[4754]: I0105 20:30:57.499785 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 20:30:57 crc kubenswrapper[4754]: I0105 20:30:57.527099 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42a612ab-0883-4f35-b15e-f937f1f2de36-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "42a612ab-0883-4f35-b15e-f937f1f2de36" (UID: "42a612ab-0883-4f35-b15e-f937f1f2de36"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:30:57 crc kubenswrapper[4754]: I0105 20:30:57.570886 4754 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42a612ab-0883-4f35-b15e-f937f1f2de36-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 05 20:30:57 crc kubenswrapper[4754]: I0105 20:30:57.570919 4754 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/42a612ab-0883-4f35-b15e-f937f1f2de36-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 05 20:30:57 crc kubenswrapper[4754]: I0105 20:30:57.570928 4754 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42a612ab-0883-4f35-b15e-f937f1f2de36-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 05 20:30:57 crc kubenswrapper[4754]: I0105 20:30:57.609156 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a883f418-09dd-46df-9bcd-ec4cae97b846" path="/var/lib/kubelet/pods/a883f418-09dd-46df-9bcd-ec4cae97b846/volumes" Jan 05 20:30:57 crc kubenswrapper[4754]: I0105 20:30:57.610056 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae0814e6-2a69-4ee4-979b-3f2b8ffdb941" path="/var/lib/kubelet/pods/ae0814e6-2a69-4ee4-979b-3f2b8ffdb941/volumes" Jan 05 20:30:57 crc 
kubenswrapper[4754]: I0105 20:30:57.731178 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:30:58 crc kubenswrapper[4754]: I0105 20:30:58.139729 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-p7xv4" event={"ID":"42a612ab-0883-4f35-b15e-f937f1f2de36","Type":"ContainerDied","Data":"f46c8a1f4792bb0265b536f2c73b206415f3523ceb643459fe16be4ab1c91809"} Jan 05 20:30:58 crc kubenswrapper[4754]: I0105 20:30:58.139994 4754 scope.go:117] "RemoveContainer" containerID="d3d901905e12d0c408a25d66c004157974c39268e85b0f27ab79f0a6f2fa9446" Jan 05 20:30:58 crc kubenswrapper[4754]: I0105 20:30:58.139930 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d978555f9-p7xv4" Jan 05 20:30:58 crc kubenswrapper[4754]: I0105 20:30:58.143926 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-t2948" event={"ID":"2863d98c-728e-45a8-8137-50decec5ac8b","Type":"ContainerDied","Data":"32c1d28d0794bcf71f3a797f1ffdf66afee25867a1cf8454998b4a1734398b15"} Jan 05 20:30:58 crc kubenswrapper[4754]: I0105 20:30:58.143970 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32c1d28d0794bcf71f3a797f1ffdf66afee25867a1cf8454998b4a1734398b15" Jan 05 20:30:58 crc kubenswrapper[4754]: I0105 20:30:58.144050 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-t2948" Jan 05 20:30:58 crc kubenswrapper[4754]: I0105 20:30:58.154819 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4c58db7f-dc9d-4cd0-adf9-21feff78070e","Type":"ContainerStarted","Data":"c55f639767092c55da7e4647ecbde0a622e5de7504d73368d030c71d47dabc43"} Jan 05 20:30:58 crc kubenswrapper[4754]: I0105 20:30:58.154870 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4c58db7f-dc9d-4cd0-adf9-21feff78070e","Type":"ContainerStarted","Data":"1a9bac0f12b1a2328c30e3844a09891bcc7a11c8794f8a2fa85f2a80642a9765"} Jan 05 20:30:58 crc kubenswrapper[4754]: I0105 20:30:58.156817 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1","Type":"ContainerStarted","Data":"83a45036dc640aa901b2cde203ef569d2427acbd9d130d7b3e7add205b68ecc6"} Jan 05 20:30:58 crc kubenswrapper[4754]: I0105 20:30:58.169880 4754 scope.go:117] "RemoveContainer" containerID="c9dcaff57c3d42a7e9cb3935e1ef9004f8ab48613135105a010dc2c97e7610f4" Jan 05 20:30:58 crc kubenswrapper[4754]: I0105 20:30:58.181332 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-p7xv4"] Jan 05 20:30:58 crc kubenswrapper[4754]: I0105 20:30:58.204678 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-p7xv4"] Jan 05 20:30:58 crc kubenswrapper[4754]: I0105 20:30:58.214531 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ls52t" podStartSLOduration=4.349011623 podStartE2EDuration="7.214515093s" podCreationTimestamp="2026-01-05 20:30:51 +0000 UTC" firstStartedPulling="2026-01-05 20:30:54.888932862 +0000 UTC m=+1541.598116746" lastFinishedPulling="2026-01-05 20:30:57.754436342 +0000 UTC m=+1544.463620216" observedRunningTime="2026-01-05 20:30:58.207525649 +0000 
UTC m=+1544.916709513" watchObservedRunningTime="2026-01-05 20:30:58.214515093 +0000 UTC m=+1544.923698967" Jan 05 20:30:59 crc kubenswrapper[4754]: I0105 20:30:59.180056 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4c58db7f-dc9d-4cd0-adf9-21feff78070e","Type":"ContainerStarted","Data":"1d483d777a99947ac9750900effcbf7c2fa3338f06d1baacacd7f37f5fc2b8df"} Jan 05 20:30:59 crc kubenswrapper[4754]: I0105 20:30:59.183397 4754 generic.go:334] "Generic (PLEG): container finished" podID="89edcd71-f0ac-4bf6-a1cf-9aac0d041906" containerID="61f5a488850b73c362d6b88ed4ee7d3f254b08dc53bfea5806ebd9df9fdebb1b" exitCode=0 Jan 05 20:30:59 crc kubenswrapper[4754]: I0105 20:30:59.183441 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mgrzb" event={"ID":"89edcd71-f0ac-4bf6-a1cf-9aac0d041906","Type":"ContainerDied","Data":"61f5a488850b73c362d6b88ed4ee7d3f254b08dc53bfea5806ebd9df9fdebb1b"} Jan 05 20:30:59 crc kubenswrapper[4754]: I0105 20:30:59.185582 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1","Type":"ContainerStarted","Data":"f6fe68441e0fae7c0dd65f152badf1118a6348f823d12bacaed6d7f035236bda"} Jan 05 20:30:59 crc kubenswrapper[4754]: I0105 20:30:59.187838 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ls52t" event={"ID":"4cce9715-2736-4de9-978c-355b04462cc6","Type":"ContainerStarted","Data":"214afb97c9b7c5f82acdbf891bec492425bcd6b556dbe2a1b9a7cdc7b831a132"} Jan 05 20:30:59 crc kubenswrapper[4754]: I0105 20:30:59.210698 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.21067956 podStartE2EDuration="4.21067956s" podCreationTimestamp="2026-01-05 20:30:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-05 20:30:59.200452661 +0000 UTC m=+1545.909636535" watchObservedRunningTime="2026-01-05 20:30:59.21067956 +0000 UTC m=+1545.919863434" Jan 05 20:30:59 crc kubenswrapper[4754]: I0105 20:30:59.599713 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a612ab-0883-4f35-b15e-f937f1f2de36" path="/var/lib/kubelet/pods/42a612ab-0883-4f35-b15e-f937f1f2de36/volumes" Jan 05 20:31:00 crc kubenswrapper[4754]: I0105 20:31:00.200540 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1","Type":"ContainerStarted","Data":"790f194d4034ac087a859df96ac871d99cd4854c57a4f74c1d8e1f13d44e4cbd"} Jan 05 20:31:00 crc kubenswrapper[4754]: I0105 20:31:00.200802 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1","Type":"ContainerStarted","Data":"dc6b9ad0c3d3f38abebc800fa122e29b7fcd21e8648841ac36fb00d1bbd27326"} Jan 05 20:31:00 crc kubenswrapper[4754]: I0105 20:31:00.616175 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mgrzb" Jan 05 20:31:00 crc kubenswrapper[4754]: I0105 20:31:00.655282 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7njr7\" (UniqueName: \"kubernetes.io/projected/89edcd71-f0ac-4bf6-a1cf-9aac0d041906-kube-api-access-7njr7\") pod \"89edcd71-f0ac-4bf6-a1cf-9aac0d041906\" (UID: \"89edcd71-f0ac-4bf6-a1cf-9aac0d041906\") " Jan 05 20:31:00 crc kubenswrapper[4754]: I0105 20:31:00.655689 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89edcd71-f0ac-4bf6-a1cf-9aac0d041906-config-data\") pod \"89edcd71-f0ac-4bf6-a1cf-9aac0d041906\" (UID: \"89edcd71-f0ac-4bf6-a1cf-9aac0d041906\") " Jan 05 20:31:00 crc kubenswrapper[4754]: I0105 20:31:00.655778 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89edcd71-f0ac-4bf6-a1cf-9aac0d041906-combined-ca-bundle\") pod \"89edcd71-f0ac-4bf6-a1cf-9aac0d041906\" (UID: \"89edcd71-f0ac-4bf6-a1cf-9aac0d041906\") " Jan 05 20:31:00 crc kubenswrapper[4754]: I0105 20:31:00.655932 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89edcd71-f0ac-4bf6-a1cf-9aac0d041906-scripts\") pod \"89edcd71-f0ac-4bf6-a1cf-9aac0d041906\" (UID: \"89edcd71-f0ac-4bf6-a1cf-9aac0d041906\") " Jan 05 20:31:00 crc kubenswrapper[4754]: I0105 20:31:00.670155 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89edcd71-f0ac-4bf6-a1cf-9aac0d041906-kube-api-access-7njr7" (OuterVolumeSpecName: "kube-api-access-7njr7") pod "89edcd71-f0ac-4bf6-a1cf-9aac0d041906" (UID: "89edcd71-f0ac-4bf6-a1cf-9aac0d041906"). InnerVolumeSpecName "kube-api-access-7njr7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:31:00 crc kubenswrapper[4754]: I0105 20:31:00.675608 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89edcd71-f0ac-4bf6-a1cf-9aac0d041906-scripts" (OuterVolumeSpecName: "scripts") pod "89edcd71-f0ac-4bf6-a1cf-9aac0d041906" (UID: "89edcd71-f0ac-4bf6-a1cf-9aac0d041906"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:31:00 crc kubenswrapper[4754]: I0105 20:31:00.699009 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89edcd71-f0ac-4bf6-a1cf-9aac0d041906-config-data" (OuterVolumeSpecName: "config-data") pod "89edcd71-f0ac-4bf6-a1cf-9aac0d041906" (UID: "89edcd71-f0ac-4bf6-a1cf-9aac0d041906"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:31:00 crc kubenswrapper[4754]: I0105 20:31:00.702493 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89edcd71-f0ac-4bf6-a1cf-9aac0d041906-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "89edcd71-f0ac-4bf6-a1cf-9aac0d041906" (UID: "89edcd71-f0ac-4bf6-a1cf-9aac0d041906"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:31:00 crc kubenswrapper[4754]: I0105 20:31:00.765469 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89edcd71-f0ac-4bf6-a1cf-9aac0d041906-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:00 crc kubenswrapper[4754]: I0105 20:31:00.765514 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89edcd71-f0ac-4bf6-a1cf-9aac0d041906-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:00 crc kubenswrapper[4754]: I0105 20:31:00.765525 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7njr7\" (UniqueName: \"kubernetes.io/projected/89edcd71-f0ac-4bf6-a1cf-9aac0d041906-kube-api-access-7njr7\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:00 crc kubenswrapper[4754]: I0105 20:31:00.765536 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89edcd71-f0ac-4bf6-a1cf-9aac0d041906-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:01 crc kubenswrapper[4754]: I0105 20:31:01.226577 4754 generic.go:334] "Generic (PLEG): container finished" podID="4556aa44-d7b9-4f17-82e1-7c08aee8d75c" containerID="216da0363fa04b0f41f940c392bc4fa5b8d76cdd41269d5c17b808738e5e0485" exitCode=0 Jan 05 20:31:01 crc kubenswrapper[4754]: I0105 20:31:01.226653 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-hd9mq" event={"ID":"4556aa44-d7b9-4f17-82e1-7c08aee8d75c","Type":"ContainerDied","Data":"216da0363fa04b0f41f940c392bc4fa5b8d76cdd41269d5c17b808738e5e0485"} Jan 05 20:31:01 crc kubenswrapper[4754]: I0105 20:31:01.229159 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mgrzb" 
event={"ID":"89edcd71-f0ac-4bf6-a1cf-9aac0d041906","Type":"ContainerDied","Data":"58bc5c17b6fbc937c7a44f43c5f7625df1c280afd13995c1ca732e503858175b"} Jan 05 20:31:01 crc kubenswrapper[4754]: I0105 20:31:01.229214 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58bc5c17b6fbc937c7a44f43c5f7625df1c280afd13995c1ca732e503858175b" Jan 05 20:31:01 crc kubenswrapper[4754]: I0105 20:31:01.229281 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mgrzb" Jan 05 20:31:01 crc kubenswrapper[4754]: I0105 20:31:01.420989 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 05 20:31:01 crc kubenswrapper[4754]: I0105 20:31:01.421736 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2ba5c5f3-19fb-4bd8-b457-976338fd6db4" containerName="nova-api-log" containerID="cri-o://3d3a74b47c2547315d04878e141c9c9adce046f72463723dfb9d691118d6a302" gracePeriod=30 Jan 05 20:31:01 crc kubenswrapper[4754]: I0105 20:31:01.422245 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2ba5c5f3-19fb-4bd8-b457-976338fd6db4" containerName="nova-api-api" containerID="cri-o://95340ea5380f918ceaf437d7bfce1b3a100424e933dd9197baa29c658a2f8e7a" gracePeriod=30 Jan 05 20:31:01 crc kubenswrapper[4754]: I0105 20:31:01.434494 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 05 20:31:01 crc kubenswrapper[4754]: I0105 20:31:01.435461 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="07fcdc38-02c8-43b4-908f-dc32a017584c" containerName="nova-scheduler-scheduler" containerID="cri-o://9a7f15d5c585b26a881f865c85b92e4ce38e5cb2459dbfd41df640fe093ee4bb" gracePeriod=30 Jan 05 20:31:01 crc kubenswrapper[4754]: I0105 20:31:01.467811 4754 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 20:31:01 crc kubenswrapper[4754]: I0105 20:31:01.468075 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4c58db7f-dc9d-4cd0-adf9-21feff78070e" containerName="nova-metadata-log" containerID="cri-o://c55f639767092c55da7e4647ecbde0a622e5de7504d73368d030c71d47dabc43" gracePeriod=30 Jan 05 20:31:01 crc kubenswrapper[4754]: I0105 20:31:01.470446 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4c58db7f-dc9d-4cd0-adf9-21feff78070e" containerName="nova-metadata-metadata" containerID="cri-o://1d483d777a99947ac9750900effcbf7c2fa3338f06d1baacacd7f37f5fc2b8df" gracePeriod=30 Jan 05 20:31:01 crc kubenswrapper[4754]: I0105 20:31:01.474388 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 05 20:31:01 crc kubenswrapper[4754]: I0105 20:31:01.474432 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 05 20:31:01 crc kubenswrapper[4754]: I0105 20:31:01.515857 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ls52t" Jan 05 20:31:01 crc kubenswrapper[4754]: I0105 20:31:01.515911 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ls52t" Jan 05 20:31:01 crc kubenswrapper[4754]: I0105 20:31:01.632182 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ls52t" Jan 05 20:31:02 crc kubenswrapper[4754]: I0105 20:31:02.258021 4754 generic.go:334] "Generic (PLEG): container finished" podID="4c58db7f-dc9d-4cd0-adf9-21feff78070e" containerID="1d483d777a99947ac9750900effcbf7c2fa3338f06d1baacacd7f37f5fc2b8df" exitCode=0 Jan 05 20:31:02 crc kubenswrapper[4754]: I0105 20:31:02.258058 4754 
generic.go:334] "Generic (PLEG): container finished" podID="4c58db7f-dc9d-4cd0-adf9-21feff78070e" containerID="c55f639767092c55da7e4647ecbde0a622e5de7504d73368d030c71d47dabc43" exitCode=143
Jan 05 20:31:02 crc kubenswrapper[4754]: I0105 20:31:02.258111 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4c58db7f-dc9d-4cd0-adf9-21feff78070e","Type":"ContainerDied","Data":"1d483d777a99947ac9750900effcbf7c2fa3338f06d1baacacd7f37f5fc2b8df"}
Jan 05 20:31:02 crc kubenswrapper[4754]: I0105 20:31:02.258180 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4c58db7f-dc9d-4cd0-adf9-21feff78070e","Type":"ContainerDied","Data":"c55f639767092c55da7e4647ecbde0a622e5de7504d73368d030c71d47dabc43"}
Jan 05 20:31:02 crc kubenswrapper[4754]: I0105 20:31:02.270362 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1","Type":"ContainerStarted","Data":"e858f46f799d28bf897b93e8eac8e767e3054d1bcc6da26b5e5aaa33677ed3a7"}
Jan 05 20:31:02 crc kubenswrapper[4754]: I0105 20:31:02.270815 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 05 20:31:02 crc kubenswrapper[4754]: I0105 20:31:02.279690 4754 generic.go:334] "Generic (PLEG): container finished" podID="2ba5c5f3-19fb-4bd8-b457-976338fd6db4" containerID="3d3a74b47c2547315d04878e141c9c9adce046f72463723dfb9d691118d6a302" exitCode=143
Jan 05 20:31:02 crc kubenswrapper[4754]: I0105 20:31:02.279725 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2ba5c5f3-19fb-4bd8-b457-976338fd6db4","Type":"ContainerDied","Data":"3d3a74b47c2547315d04878e141c9c9adce046f72463723dfb9d691118d6a302"}
Jan 05 20:31:02 crc kubenswrapper[4754]: I0105 20:31:02.286181 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-z2gzv"]
Jan 05 20:31:02 crc kubenswrapper[4754]: E0105 20:31:02.286880 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95650db6-2a7d-4c11-985e-4eae13a8cbaa" containerName="mariadb-account-create-update"
Jan 05 20:31:02 crc kubenswrapper[4754]: I0105 20:31:02.286904 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="95650db6-2a7d-4c11-985e-4eae13a8cbaa" containerName="mariadb-account-create-update"
Jan 05 20:31:02 crc kubenswrapper[4754]: E0105 20:31:02.286962 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42a612ab-0883-4f35-b15e-f937f1f2de36" containerName="dnsmasq-dns"
Jan 05 20:31:02 crc kubenswrapper[4754]: I0105 20:31:02.286971 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="42a612ab-0883-4f35-b15e-f937f1f2de36" containerName="dnsmasq-dns"
Jan 05 20:31:02 crc kubenswrapper[4754]: E0105 20:31:02.286987 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42a612ab-0883-4f35-b15e-f937f1f2de36" containerName="init"
Jan 05 20:31:02 crc kubenswrapper[4754]: I0105 20:31:02.287005 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="42a612ab-0883-4f35-b15e-f937f1f2de36" containerName="init"
Jan 05 20:31:02 crc kubenswrapper[4754]: E0105 20:31:02.287022 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89edcd71-f0ac-4bf6-a1cf-9aac0d041906" containerName="nova-manage"
Jan 05 20:31:02 crc kubenswrapper[4754]: I0105 20:31:02.287030 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="89edcd71-f0ac-4bf6-a1cf-9aac0d041906" containerName="nova-manage"
Jan 05 20:31:02 crc kubenswrapper[4754]: E0105 20:31:02.287045 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2863d98c-728e-45a8-8137-50decec5ac8b" containerName="mariadb-database-create"
Jan 05 20:31:02 crc kubenswrapper[4754]: I0105 20:31:02.287052 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="2863d98c-728e-45a8-8137-50decec5ac8b" containerName="mariadb-database-create"
Jan 05 20:31:02 crc kubenswrapper[4754]: I0105 20:31:02.287316 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="42a612ab-0883-4f35-b15e-f937f1f2de36" containerName="dnsmasq-dns"
Jan 05 20:31:02 crc kubenswrapper[4754]: I0105 20:31:02.287348 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="95650db6-2a7d-4c11-985e-4eae13a8cbaa" containerName="mariadb-account-create-update"
Jan 05 20:31:02 crc kubenswrapper[4754]: I0105 20:31:02.287371 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="89edcd71-f0ac-4bf6-a1cf-9aac0d041906" containerName="nova-manage"
Jan 05 20:31:02 crc kubenswrapper[4754]: I0105 20:31:02.287382 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="2863d98c-728e-45a8-8137-50decec5ac8b" containerName="mariadb-database-create"
Jan 05 20:31:02 crc kubenswrapper[4754]: I0105 20:31:02.288356 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-z2gzv"
Jan 05 20:31:02 crc kubenswrapper[4754]: I0105 20:31:02.290177 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-kq9pz"
Jan 05 20:31:02 crc kubenswrapper[4754]: I0105 20:31:02.290579 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data"
Jan 05 20:31:02 crc kubenswrapper[4754]: I0105 20:31:02.292925 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Jan 05 20:31:02 crc kubenswrapper[4754]: I0105 20:31:02.292981 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts"
Jan 05 20:31:02 crc kubenswrapper[4754]: I0105 20:31:02.305707 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-z2gzv"]
Jan 05 20:31:02 crc kubenswrapper[4754]: I0105 20:31:02.381023 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.920014557 podStartE2EDuration="6.379797272s" podCreationTimestamp="2026-01-05 20:30:56 +0000 UTC" firstStartedPulling="2026-01-05 20:30:57.732806964 +0000 UTC m=+1544.441990838" lastFinishedPulling="2026-01-05 20:31:01.192589679 +0000 UTC m=+1547.901773553" observedRunningTime="2026-01-05 20:31:02.35457849 +0000 UTC m=+1549.063762364" watchObservedRunningTime="2026-01-05 20:31:02.379797272 +0000 UTC m=+1549.088981146"
Jan 05 20:31:02 crc kubenswrapper[4754]: I0105 20:31:02.414004 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25d741bd-ab72-4eff-9d37-8ecf50a50698-scripts\") pod \"aodh-db-sync-z2gzv\" (UID: \"25d741bd-ab72-4eff-9d37-8ecf50a50698\") " pod="openstack/aodh-db-sync-z2gzv"
Jan 05 20:31:02 crc kubenswrapper[4754]: I0105 20:31:02.414148 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25d741bd-ab72-4eff-9d37-8ecf50a50698-config-data\") pod \"aodh-db-sync-z2gzv\" (UID: \"25d741bd-ab72-4eff-9d37-8ecf50a50698\") " pod="openstack/aodh-db-sync-z2gzv"
Jan 05 20:31:02 crc kubenswrapper[4754]: I0105 20:31:02.414170 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25d741bd-ab72-4eff-9d37-8ecf50a50698-combined-ca-bundle\") pod \"aodh-db-sync-z2gzv\" (UID: \"25d741bd-ab72-4eff-9d37-8ecf50a50698\") " pod="openstack/aodh-db-sync-z2gzv"
Jan 05 20:31:02 crc kubenswrapper[4754]: I0105 20:31:02.414215 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqffd\" (UniqueName: \"kubernetes.io/projected/25d741bd-ab72-4eff-9d37-8ecf50a50698-kube-api-access-lqffd\") pod \"aodh-db-sync-z2gzv\" (UID: \"25d741bd-ab72-4eff-9d37-8ecf50a50698\") " pod="openstack/aodh-db-sync-z2gzv"
Jan 05 20:31:02 crc kubenswrapper[4754]: I0105 20:31:02.516712 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25d741bd-ab72-4eff-9d37-8ecf50a50698-scripts\") pod \"aodh-db-sync-z2gzv\" (UID: \"25d741bd-ab72-4eff-9d37-8ecf50a50698\") " pod="openstack/aodh-db-sync-z2gzv"
Jan 05 20:31:02 crc kubenswrapper[4754]: I0105 20:31:02.516841 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25d741bd-ab72-4eff-9d37-8ecf50a50698-config-data\") pod \"aodh-db-sync-z2gzv\" (UID: \"25d741bd-ab72-4eff-9d37-8ecf50a50698\") " pod="openstack/aodh-db-sync-z2gzv"
Jan 05 20:31:02 crc kubenswrapper[4754]: I0105 20:31:02.516863 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25d741bd-ab72-4eff-9d37-8ecf50a50698-combined-ca-bundle\") pod \"aodh-db-sync-z2gzv\" (UID: \"25d741bd-ab72-4eff-9d37-8ecf50a50698\") " pod="openstack/aodh-db-sync-z2gzv"
Jan 05 20:31:02 crc kubenswrapper[4754]: I0105 20:31:02.516911 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqffd\" (UniqueName: \"kubernetes.io/projected/25d741bd-ab72-4eff-9d37-8ecf50a50698-kube-api-access-lqffd\") pod \"aodh-db-sync-z2gzv\" (UID: \"25d741bd-ab72-4eff-9d37-8ecf50a50698\") " pod="openstack/aodh-db-sync-z2gzv"
Jan 05 20:31:02 crc kubenswrapper[4754]: I0105 20:31:02.534330 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25d741bd-ab72-4eff-9d37-8ecf50a50698-config-data\") pod \"aodh-db-sync-z2gzv\" (UID: \"25d741bd-ab72-4eff-9d37-8ecf50a50698\") " pod="openstack/aodh-db-sync-z2gzv"
Jan 05 20:31:02 crc kubenswrapper[4754]: I0105 20:31:02.534768 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25d741bd-ab72-4eff-9d37-8ecf50a50698-scripts\") pod \"aodh-db-sync-z2gzv\" (UID: \"25d741bd-ab72-4eff-9d37-8ecf50a50698\") " pod="openstack/aodh-db-sync-z2gzv"
Jan 05 20:31:02 crc kubenswrapper[4754]: I0105 20:31:02.535940 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25d741bd-ab72-4eff-9d37-8ecf50a50698-combined-ca-bundle\") pod \"aodh-db-sync-z2gzv\" (UID: \"25d741bd-ab72-4eff-9d37-8ecf50a50698\") " pod="openstack/aodh-db-sync-z2gzv"
Jan 05 20:31:02 crc kubenswrapper[4754]: I0105 20:31:02.550872 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqffd\" (UniqueName: \"kubernetes.io/projected/25d741bd-ab72-4eff-9d37-8ecf50a50698-kube-api-access-lqffd\") pod \"aodh-db-sync-z2gzv\" (UID: \"25d741bd-ab72-4eff-9d37-8ecf50a50698\") " pod="openstack/aodh-db-sync-z2gzv"
Jan 05 20:31:02 crc kubenswrapper[4754]: I0105 20:31:02.708941 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-z2gzv"
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.016879 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.031065 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c58db7f-dc9d-4cd0-adf9-21feff78070e-logs\") pod \"4c58db7f-dc9d-4cd0-adf9-21feff78070e\" (UID: \"4c58db7f-dc9d-4cd0-adf9-21feff78070e\") "
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.031283 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gk5m4\" (UniqueName: \"kubernetes.io/projected/4c58db7f-dc9d-4cd0-adf9-21feff78070e-kube-api-access-gk5m4\") pod \"4c58db7f-dc9d-4cd0-adf9-21feff78070e\" (UID: \"4c58db7f-dc9d-4cd0-adf9-21feff78070e\") "
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.031394 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c58db7f-dc9d-4cd0-adf9-21feff78070e-config-data\") pod \"4c58db7f-dc9d-4cd0-adf9-21feff78070e\" (UID: \"4c58db7f-dc9d-4cd0-adf9-21feff78070e\") "
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.031458 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c58db7f-dc9d-4cd0-adf9-21feff78070e-combined-ca-bundle\") pod \"4c58db7f-dc9d-4cd0-adf9-21feff78070e\" (UID: \"4c58db7f-dc9d-4cd0-adf9-21feff78070e\") "
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.031513 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c58db7f-dc9d-4cd0-adf9-21feff78070e-nova-metadata-tls-certs\") pod \"4c58db7f-dc9d-4cd0-adf9-21feff78070e\" (UID: \"4c58db7f-dc9d-4cd0-adf9-21feff78070e\") "
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.031607 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c58db7f-dc9d-4cd0-adf9-21feff78070e-logs" (OuterVolumeSpecName: "logs") pod "4c58db7f-dc9d-4cd0-adf9-21feff78070e" (UID: "4c58db7f-dc9d-4cd0-adf9-21feff78070e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.032093 4754 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c58db7f-dc9d-4cd0-adf9-21feff78070e-logs\") on node \"crc\" DevicePath \"\""
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.047328 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-hd9mq"
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.070387 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c58db7f-dc9d-4cd0-adf9-21feff78070e-kube-api-access-gk5m4" (OuterVolumeSpecName: "kube-api-access-gk5m4") pod "4c58db7f-dc9d-4cd0-adf9-21feff78070e" (UID: "4c58db7f-dc9d-4cd0-adf9-21feff78070e"). InnerVolumeSpecName "kube-api-access-gk5m4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.135467 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c58db7f-dc9d-4cd0-adf9-21feff78070e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c58db7f-dc9d-4cd0-adf9-21feff78070e" (UID: "4c58db7f-dc9d-4cd0-adf9-21feff78070e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.136107 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4556aa44-d7b9-4f17-82e1-7c08aee8d75c-combined-ca-bundle\") pod \"4556aa44-d7b9-4f17-82e1-7c08aee8d75c\" (UID: \"4556aa44-d7b9-4f17-82e1-7c08aee8d75c\") "
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.136208 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4556aa44-d7b9-4f17-82e1-7c08aee8d75c-scripts\") pod \"4556aa44-d7b9-4f17-82e1-7c08aee8d75c\" (UID: \"4556aa44-d7b9-4f17-82e1-7c08aee8d75c\") "
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.136879 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4556aa44-d7b9-4f17-82e1-7c08aee8d75c-config-data\") pod \"4556aa44-d7b9-4f17-82e1-7c08aee8d75c\" (UID: \"4556aa44-d7b9-4f17-82e1-7c08aee8d75c\") "
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.137026 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c58db7f-dc9d-4cd0-adf9-21feff78070e-combined-ca-bundle\") pod \"4c58db7f-dc9d-4cd0-adf9-21feff78070e\" (UID: \"4c58db7f-dc9d-4cd0-adf9-21feff78070e\") "
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.137106 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q47nz\" (UniqueName: \"kubernetes.io/projected/4556aa44-d7b9-4f17-82e1-7c08aee8d75c-kube-api-access-q47nz\") pod \"4556aa44-d7b9-4f17-82e1-7c08aee8d75c\" (UID: \"4556aa44-d7b9-4f17-82e1-7c08aee8d75c\") "
Jan 05 20:31:03 crc kubenswrapper[4754]: W0105 20:31:03.137687 4754 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/4c58db7f-dc9d-4cd0-adf9-21feff78070e/volumes/kubernetes.io~secret/combined-ca-bundle
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.137717 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c58db7f-dc9d-4cd0-adf9-21feff78070e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c58db7f-dc9d-4cd0-adf9-21feff78070e" (UID: "4c58db7f-dc9d-4cd0-adf9-21feff78070e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.138187 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gk5m4\" (UniqueName: \"kubernetes.io/projected/4c58db7f-dc9d-4cd0-adf9-21feff78070e-kube-api-access-gk5m4\") on node \"crc\" DevicePath \"\""
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.138206 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c58db7f-dc9d-4cd0-adf9-21feff78070e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.145565 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4556aa44-d7b9-4f17-82e1-7c08aee8d75c-scripts" (OuterVolumeSpecName: "scripts") pod "4556aa44-d7b9-4f17-82e1-7c08aee8d75c" (UID: "4556aa44-d7b9-4f17-82e1-7c08aee8d75c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.149201 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c58db7f-dc9d-4cd0-adf9-21feff78070e-config-data" (OuterVolumeSpecName: "config-data") pod "4c58db7f-dc9d-4cd0-adf9-21feff78070e" (UID: "4c58db7f-dc9d-4cd0-adf9-21feff78070e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.150999 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c58db7f-dc9d-4cd0-adf9-21feff78070e-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "4c58db7f-dc9d-4cd0-adf9-21feff78070e" (UID: "4c58db7f-dc9d-4cd0-adf9-21feff78070e"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.152706 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4556aa44-d7b9-4f17-82e1-7c08aee8d75c-kube-api-access-q47nz" (OuterVolumeSpecName: "kube-api-access-q47nz") pod "4556aa44-d7b9-4f17-82e1-7c08aee8d75c" (UID: "4556aa44-d7b9-4f17-82e1-7c08aee8d75c"). InnerVolumeSpecName "kube-api-access-q47nz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.197465 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4556aa44-d7b9-4f17-82e1-7c08aee8d75c-config-data" (OuterVolumeSpecName: "config-data") pod "4556aa44-d7b9-4f17-82e1-7c08aee8d75c" (UID: "4556aa44-d7b9-4f17-82e1-7c08aee8d75c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.208092 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4556aa44-d7b9-4f17-82e1-7c08aee8d75c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4556aa44-d7b9-4f17-82e1-7c08aee8d75c" (UID: "4556aa44-d7b9-4f17-82e1-7c08aee8d75c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.245759 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q47nz\" (UniqueName: \"kubernetes.io/projected/4556aa44-d7b9-4f17-82e1-7c08aee8d75c-kube-api-access-q47nz\") on node \"crc\" DevicePath \"\""
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.245790 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4556aa44-d7b9-4f17-82e1-7c08aee8d75c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.245801 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4556aa44-d7b9-4f17-82e1-7c08aee8d75c-scripts\") on node \"crc\" DevicePath \"\""
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.249511 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4556aa44-d7b9-4f17-82e1-7c08aee8d75c-config-data\") on node \"crc\" DevicePath \"\""
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.249528 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c58db7f-dc9d-4cd0-adf9-21feff78070e-config-data\") on node \"crc\" DevicePath \"\""
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.249571 4754 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c58db7f-dc9d-4cd0-adf9-21feff78070e-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.314073 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4c58db7f-dc9d-4cd0-adf9-21feff78070e","Type":"ContainerDied","Data":"1a9bac0f12b1a2328c30e3844a09891bcc7a11c8794f8a2fa85f2a80642a9765"}
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.314128 4754 scope.go:117] "RemoveContainer" containerID="1d483d777a99947ac9750900effcbf7c2fa3338f06d1baacacd7f37f5fc2b8df"
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.314280 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.321967 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-hd9mq"
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.322571 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-hd9mq" event={"ID":"4556aa44-d7b9-4f17-82e1-7c08aee8d75c","Type":"ContainerDied","Data":"03b606c3b4aa9139d845563ab07b3b834f7f2f2d4363db3ff758a18e61f1413e"}
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.322620 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03b606c3b4aa9139d845563ab07b3b834f7f2f2d4363db3ff758a18e61f1413e"
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.339918 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 05 20:31:03 crc kubenswrapper[4754]: E0105 20:31:03.340686 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c58db7f-dc9d-4cd0-adf9-21feff78070e" containerName="nova-metadata-metadata"
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.340705 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c58db7f-dc9d-4cd0-adf9-21feff78070e" containerName="nova-metadata-metadata"
Jan 05 20:31:03 crc kubenswrapper[4754]: E0105 20:31:03.340728 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c58db7f-dc9d-4cd0-adf9-21feff78070e" containerName="nova-metadata-log"
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.340734 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c58db7f-dc9d-4cd0-adf9-21feff78070e" containerName="nova-metadata-log"
Jan 05 20:31:03 crc kubenswrapper[4754]: E0105 20:31:03.340772 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4556aa44-d7b9-4f17-82e1-7c08aee8d75c" containerName="nova-cell1-conductor-db-sync"
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.340779 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="4556aa44-d7b9-4f17-82e1-7c08aee8d75c" containerName="nova-cell1-conductor-db-sync"
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.341030 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="4556aa44-d7b9-4f17-82e1-7c08aee8d75c" containerName="nova-cell1-conductor-db-sync"
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.341042 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c58db7f-dc9d-4cd0-adf9-21feff78070e" containerName="nova-metadata-metadata"
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.341068 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c58db7f-dc9d-4cd0-adf9-21feff78070e" containerName="nova-metadata-log"
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.341892 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.341964 4754 generic.go:334] "Generic (PLEG): container finished" podID="07fcdc38-02c8-43b4-908f-dc32a017584c" containerID="9a7f15d5c585b26a881f865c85b92e4ce38e5cb2459dbfd41df640fe093ee4bb" exitCode=0
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.342500 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"07fcdc38-02c8-43b4-908f-dc32a017584c","Type":"ContainerDied","Data":"9a7f15d5c585b26a881f865c85b92e4ce38e5cb2459dbfd41df640fe093ee4bb"}
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.343726 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.391816 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.401468 4754 scope.go:117] "RemoveContainer" containerID="c55f639767092c55da7e4647ecbde0a622e5de7504d73368d030c71d47dabc43"
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.414789 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-z2gzv"]
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.450434 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.456010 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f5143f1-98d6-499b-88e3-4d256176784f-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"6f5143f1-98d6-499b-88e3-4d256176784f\") " pod="openstack/nova-cell1-conductor-0"
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.456098 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f5143f1-98d6-499b-88e3-4d256176784f-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"6f5143f1-98d6-499b-88e3-4d256176784f\") " pod="openstack/nova-cell1-conductor-0"
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.456263 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5znf\" (UniqueName: \"kubernetes.io/projected/6f5143f1-98d6-499b-88e3-4d256176784f-kube-api-access-g5znf\") pod \"nova-cell1-conductor-0\" (UID: \"6f5143f1-98d6-499b-88e3-4d256176784f\") " pod="openstack/nova-cell1-conductor-0"
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.489066 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.513468 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.529318 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.531834 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.532107 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.538240 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.558511 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5znf\" (UniqueName: \"kubernetes.io/projected/6f5143f1-98d6-499b-88e3-4d256176784f-kube-api-access-g5znf\") pod \"nova-cell1-conductor-0\" (UID: \"6f5143f1-98d6-499b-88e3-4d256176784f\") " pod="openstack/nova-cell1-conductor-0"
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.558632 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f\") " pod="openstack/nova-metadata-0"
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.575964 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f-logs\") pod \"nova-metadata-0\" (UID: \"261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f\") " pod="openstack/nova-metadata-0"
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.576388 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f-config-data\") pod \"nova-metadata-0\" (UID: \"261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f\") " pod="openstack/nova-metadata-0"
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.576531 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wdmw\" (UniqueName: \"kubernetes.io/projected/261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f-kube-api-access-7wdmw\") pod \"nova-metadata-0\" (UID: \"261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f\") " pod="openstack/nova-metadata-0"
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.576578 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f\") " pod="openstack/nova-metadata-0"
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.576623 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f5143f1-98d6-499b-88e3-4d256176784f-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"6f5143f1-98d6-499b-88e3-4d256176784f\") " pod="openstack/nova-cell1-conductor-0"
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.576681 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f5143f1-98d6-499b-88e3-4d256176784f-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"6f5143f1-98d6-499b-88e3-4d256176784f\") " pod="openstack/nova-cell1-conductor-0"
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.580787 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5znf\" (UniqueName: \"kubernetes.io/projected/6f5143f1-98d6-499b-88e3-4d256176784f-kube-api-access-g5znf\") pod \"nova-cell1-conductor-0\" (UID: \"6f5143f1-98d6-499b-88e3-4d256176784f\") " pod="openstack/nova-cell1-conductor-0"
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.580944 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f5143f1-98d6-499b-88e3-4d256176784f-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"6f5143f1-98d6-499b-88e3-4d256176784f\") " pod="openstack/nova-cell1-conductor-0"
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.581388 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f5143f1-98d6-499b-88e3-4d256176784f-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"6f5143f1-98d6-499b-88e3-4d256176784f\") " pod="openstack/nova-cell1-conductor-0"
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.600810 4754 scope.go:117] "RemoveContainer" containerID="1ca93b3e456962380f07853ce3cbf0c6bfbaabb69bd3827ae3e68631c1bd1572"
Jan 05 20:31:03 crc kubenswrapper[4754]: E0105 20:31:03.601506 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280"
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.625937 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c58db7f-dc9d-4cd0-adf9-21feff78070e" path="/var/lib/kubelet/pods/4c58db7f-dc9d-4cd0-adf9-21feff78070e/volumes"
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.679672 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f\") " pod="openstack/nova-metadata-0"
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.679743 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f-logs\") pod \"nova-metadata-0\" (UID: \"261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f\") " pod="openstack/nova-metadata-0"
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.679947 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f-config-data\") pod \"nova-metadata-0\" (UID: \"261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f\") " pod="openstack/nova-metadata-0"
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.680034 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wdmw\" (UniqueName: \"kubernetes.io/projected/261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f-kube-api-access-7wdmw\") pod \"nova-metadata-0\" (UID: \"261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f\") " pod="openstack/nova-metadata-0"
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.680062 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f\") " pod="openstack/nova-metadata-0"
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.685632 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f-logs\") pod \"nova-metadata-0\" (UID: \"261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f\") " pod="openstack/nova-metadata-0"
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.688137 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f\") " pod="openstack/nova-metadata-0"
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.689884 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f\") " pod="openstack/nova-metadata-0"
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.695739 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f-config-data\") pod \"nova-metadata-0\" (UID: \"261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f\") " pod="openstack/nova-metadata-0"
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.700334 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wdmw\" (UniqueName: \"kubernetes.io/projected/261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f-kube-api-access-7wdmw\") pod \"nova-metadata-0\" (UID: \"261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f\") " pod="openstack/nova-metadata-0"
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.708908 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.711428 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.783072 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2n5k\" (UniqueName: \"kubernetes.io/projected/07fcdc38-02c8-43b4-908f-dc32a017584c-kube-api-access-f2n5k\") pod \"07fcdc38-02c8-43b4-908f-dc32a017584c\" (UID: \"07fcdc38-02c8-43b4-908f-dc32a017584c\") "
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.783299 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07fcdc38-02c8-43b4-908f-dc32a017584c-combined-ca-bundle\") pod \"07fcdc38-02c8-43b4-908f-dc32a017584c\" (UID: \"07fcdc38-02c8-43b4-908f-dc32a017584c\") "
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.783461 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07fcdc38-02c8-43b4-908f-dc32a017584c-config-data\") pod \"07fcdc38-02c8-43b4-908f-dc32a017584c\" (UID: \"07fcdc38-02c8-43b4-908f-dc32a017584c\") "
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.802063 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07fcdc38-02c8-43b4-908f-dc32a017584c-kube-api-access-f2n5k" (OuterVolumeSpecName: "kube-api-access-f2n5k") pod "07fcdc38-02c8-43b4-908f-dc32a017584c" (UID: "07fcdc38-02c8-43b4-908f-dc32a017584c"). InnerVolumeSpecName "kube-api-access-f2n5k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.816859 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07fcdc38-02c8-43b4-908f-dc32a017584c-config-data" (OuterVolumeSpecName: "config-data") pod "07fcdc38-02c8-43b4-908f-dc32a017584c" (UID: "07fcdc38-02c8-43b4-908f-dc32a017584c"). InnerVolumeSpecName "config-data".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.829243 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07fcdc38-02c8-43b4-908f-dc32a017584c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07fcdc38-02c8-43b4-908f-dc32a017584c" (UID: "07fcdc38-02c8-43b4-908f-dc32a017584c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.877897 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.889714 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07fcdc38-02c8-43b4-908f-dc32a017584c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.889899 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07fcdc38-02c8-43b4-908f-dc32a017584c-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:03 crc kubenswrapper[4754]: I0105 20:31:03.889978 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2n5k\" (UniqueName: \"kubernetes.io/projected/07fcdc38-02c8-43b4-908f-dc32a017584c-kube-api-access-f2n5k\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:04 crc kubenswrapper[4754]: I0105 20:31:04.290965 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 05 20:31:04 crc kubenswrapper[4754]: I0105 20:31:04.388981 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"6f5143f1-98d6-499b-88e3-4d256176784f","Type":"ContainerStarted","Data":"9d319915474261fd5409471246bf499dff5ccc8c6fe5fdaf1e1122bd5c1baa9a"} Jan 05 20:31:04 crc kubenswrapper[4754]: I0105 
20:31:04.390732 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"07fcdc38-02c8-43b4-908f-dc32a017584c","Type":"ContainerDied","Data":"31bed884007d761ce526074879bf436f852e9b66a0b80cfbb40e918429a57c0a"} Jan 05 20:31:04 crc kubenswrapper[4754]: I0105 20:31:04.390763 4754 scope.go:117] "RemoveContainer" containerID="9a7f15d5c585b26a881f865c85b92e4ce38e5cb2459dbfd41df640fe093ee4bb" Jan 05 20:31:04 crc kubenswrapper[4754]: I0105 20:31:04.390895 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 05 20:31:04 crc kubenswrapper[4754]: I0105 20:31:04.400410 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-z2gzv" event={"ID":"25d741bd-ab72-4eff-9d37-8ecf50a50698","Type":"ContainerStarted","Data":"872eecc1bff5a57a02cc23cea750e3407e0dbb68236fb9ae015be22ccc3f7f3b"} Jan 05 20:31:04 crc kubenswrapper[4754]: I0105 20:31:04.430592 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 05 20:31:04 crc kubenswrapper[4754]: I0105 20:31:04.445054 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 05 20:31:04 crc kubenswrapper[4754]: I0105 20:31:04.473618 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 05 20:31:04 crc kubenswrapper[4754]: E0105 20:31:04.474249 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07fcdc38-02c8-43b4-908f-dc32a017584c" containerName="nova-scheduler-scheduler" Jan 05 20:31:04 crc kubenswrapper[4754]: I0105 20:31:04.474275 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="07fcdc38-02c8-43b4-908f-dc32a017584c" containerName="nova-scheduler-scheduler" Jan 05 20:31:04 crc kubenswrapper[4754]: I0105 20:31:04.474615 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="07fcdc38-02c8-43b4-908f-dc32a017584c" containerName="nova-scheduler-scheduler" Jan 
05 20:31:04 crc kubenswrapper[4754]: I0105 20:31:04.475568 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 05 20:31:04 crc kubenswrapper[4754]: I0105 20:31:04.481782 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 05 20:31:04 crc kubenswrapper[4754]: I0105 20:31:04.497678 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 05 20:31:04 crc kubenswrapper[4754]: I0105 20:31:04.510179 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8j7w\" (UniqueName: \"kubernetes.io/projected/7e603b43-937b-4fb6-8b7b-e4f93e03b069-kube-api-access-q8j7w\") pod \"nova-scheduler-0\" (UID: \"7e603b43-937b-4fb6-8b7b-e4f93e03b069\") " pod="openstack/nova-scheduler-0" Jan 05 20:31:04 crc kubenswrapper[4754]: I0105 20:31:04.510475 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e603b43-937b-4fb6-8b7b-e4f93e03b069-config-data\") pod \"nova-scheduler-0\" (UID: \"7e603b43-937b-4fb6-8b7b-e4f93e03b069\") " pod="openstack/nova-scheduler-0" Jan 05 20:31:04 crc kubenswrapper[4754]: I0105 20:31:04.510563 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e603b43-937b-4fb6-8b7b-e4f93e03b069-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7e603b43-937b-4fb6-8b7b-e4f93e03b069\") " pod="openstack/nova-scheduler-0" Jan 05 20:31:04 crc kubenswrapper[4754]: I0105 20:31:04.522390 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 20:31:04 crc kubenswrapper[4754]: I0105 20:31:04.613642 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8j7w\" (UniqueName: 
\"kubernetes.io/projected/7e603b43-937b-4fb6-8b7b-e4f93e03b069-kube-api-access-q8j7w\") pod \"nova-scheduler-0\" (UID: \"7e603b43-937b-4fb6-8b7b-e4f93e03b069\") " pod="openstack/nova-scheduler-0" Jan 05 20:31:04 crc kubenswrapper[4754]: I0105 20:31:04.613812 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e603b43-937b-4fb6-8b7b-e4f93e03b069-config-data\") pod \"nova-scheduler-0\" (UID: \"7e603b43-937b-4fb6-8b7b-e4f93e03b069\") " pod="openstack/nova-scheduler-0" Jan 05 20:31:04 crc kubenswrapper[4754]: I0105 20:31:04.613845 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e603b43-937b-4fb6-8b7b-e4f93e03b069-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7e603b43-937b-4fb6-8b7b-e4f93e03b069\") " pod="openstack/nova-scheduler-0" Jan 05 20:31:04 crc kubenswrapper[4754]: I0105 20:31:04.618087 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e603b43-937b-4fb6-8b7b-e4f93e03b069-config-data\") pod \"nova-scheduler-0\" (UID: \"7e603b43-937b-4fb6-8b7b-e4f93e03b069\") " pod="openstack/nova-scheduler-0" Jan 05 20:31:04 crc kubenswrapper[4754]: I0105 20:31:04.624265 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e603b43-937b-4fb6-8b7b-e4f93e03b069-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7e603b43-937b-4fb6-8b7b-e4f93e03b069\") " pod="openstack/nova-scheduler-0" Jan 05 20:31:04 crc kubenswrapper[4754]: I0105 20:31:04.636366 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8j7w\" (UniqueName: \"kubernetes.io/projected/7e603b43-937b-4fb6-8b7b-e4f93e03b069-kube-api-access-q8j7w\") pod \"nova-scheduler-0\" (UID: \"7e603b43-937b-4fb6-8b7b-e4f93e03b069\") " 
pod="openstack/nova-scheduler-0" Jan 05 20:31:04 crc kubenswrapper[4754]: I0105 20:31:04.812865 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 05 20:31:05 crc kubenswrapper[4754]: I0105 20:31:05.432625 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f","Type":"ContainerStarted","Data":"8b8bd501fac7896c908f66daa74bb70d6c9415d9d095164c7ed8a11a0a69e856"} Jan 05 20:31:05 crc kubenswrapper[4754]: I0105 20:31:05.433417 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f","Type":"ContainerStarted","Data":"bfc0ef6271cf89851d8d9071fc98d40f1c710ede1325a98892b81ada0b220cf1"} Jan 05 20:31:05 crc kubenswrapper[4754]: I0105 20:31:05.439577 4754 generic.go:334] "Generic (PLEG): container finished" podID="2ba5c5f3-19fb-4bd8-b457-976338fd6db4" containerID="95340ea5380f918ceaf437d7bfce1b3a100424e933dd9197baa29c658a2f8e7a" exitCode=0 Jan 05 20:31:05 crc kubenswrapper[4754]: I0105 20:31:05.439714 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2ba5c5f3-19fb-4bd8-b457-976338fd6db4","Type":"ContainerDied","Data":"95340ea5380f918ceaf437d7bfce1b3a100424e933dd9197baa29c658a2f8e7a"} Jan 05 20:31:05 crc kubenswrapper[4754]: I0105 20:31:05.465672 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"6f5143f1-98d6-499b-88e3-4d256176784f","Type":"ContainerStarted","Data":"8e7e124bbda37ffc4109ae673fcf0e9f70a70174559234aeccd422551b4518c7"} Jan 05 20:31:05 crc kubenswrapper[4754]: I0105 20:31:05.466440 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 05 20:31:05 crc kubenswrapper[4754]: I0105 20:31:05.490455 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.49043681 podStartE2EDuration="2.49043681s" podCreationTimestamp="2026-01-05 20:31:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:31:05.488583671 +0000 UTC m=+1552.197767545" watchObservedRunningTime="2026-01-05 20:31:05.49043681 +0000 UTC m=+1552.199620684" Jan 05 20:31:05 crc kubenswrapper[4754]: I0105 20:31:05.512481 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 05 20:31:05 crc kubenswrapper[4754]: I0105 20:31:05.646043 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ba5c5f3-19fb-4bd8-b457-976338fd6db4-config-data\") pod \"2ba5c5f3-19fb-4bd8-b457-976338fd6db4\" (UID: \"2ba5c5f3-19fb-4bd8-b457-976338fd6db4\") " Jan 05 20:31:05 crc kubenswrapper[4754]: I0105 20:31:05.646075 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ba5c5f3-19fb-4bd8-b457-976338fd6db4-combined-ca-bundle\") pod \"2ba5c5f3-19fb-4bd8-b457-976338fd6db4\" (UID: \"2ba5c5f3-19fb-4bd8-b457-976338fd6db4\") " Jan 05 20:31:05 crc kubenswrapper[4754]: I0105 20:31:05.646106 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ba5c5f3-19fb-4bd8-b457-976338fd6db4-logs\") pod \"2ba5c5f3-19fb-4bd8-b457-976338fd6db4\" (UID: \"2ba5c5f3-19fb-4bd8-b457-976338fd6db4\") " Jan 05 20:31:05 crc kubenswrapper[4754]: I0105 20:31:05.646176 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stqww\" (UniqueName: \"kubernetes.io/projected/2ba5c5f3-19fb-4bd8-b457-976338fd6db4-kube-api-access-stqww\") pod \"2ba5c5f3-19fb-4bd8-b457-976338fd6db4\" (UID: 
\"2ba5c5f3-19fb-4bd8-b457-976338fd6db4\") " Jan 05 20:31:05 crc kubenswrapper[4754]: I0105 20:31:05.648048 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ba5c5f3-19fb-4bd8-b457-976338fd6db4-logs" (OuterVolumeSpecName: "logs") pod "2ba5c5f3-19fb-4bd8-b457-976338fd6db4" (UID: "2ba5c5f3-19fb-4bd8-b457-976338fd6db4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:31:05 crc kubenswrapper[4754]: I0105 20:31:05.652175 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ba5c5f3-19fb-4bd8-b457-976338fd6db4-kube-api-access-stqww" (OuterVolumeSpecName: "kube-api-access-stqww") pod "2ba5c5f3-19fb-4bd8-b457-976338fd6db4" (UID: "2ba5c5f3-19fb-4bd8-b457-976338fd6db4"). InnerVolumeSpecName "kube-api-access-stqww". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:31:05 crc kubenswrapper[4754]: I0105 20:31:05.657457 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07fcdc38-02c8-43b4-908f-dc32a017584c" path="/var/lib/kubelet/pods/07fcdc38-02c8-43b4-908f-dc32a017584c/volumes" Jan 05 20:31:05 crc kubenswrapper[4754]: I0105 20:31:05.689172 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ba5c5f3-19fb-4bd8-b457-976338fd6db4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ba5c5f3-19fb-4bd8-b457-976338fd6db4" (UID: "2ba5c5f3-19fb-4bd8-b457-976338fd6db4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:31:05 crc kubenswrapper[4754]: I0105 20:31:05.694375 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ba5c5f3-19fb-4bd8-b457-976338fd6db4-config-data" (OuterVolumeSpecName: "config-data") pod "2ba5c5f3-19fb-4bd8-b457-976338fd6db4" (UID: "2ba5c5f3-19fb-4bd8-b457-976338fd6db4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:31:05 crc kubenswrapper[4754]: I0105 20:31:05.748425 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ba5c5f3-19fb-4bd8-b457-976338fd6db4-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:05 crc kubenswrapper[4754]: I0105 20:31:05.748458 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ba5c5f3-19fb-4bd8-b457-976338fd6db4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:05 crc kubenswrapper[4754]: I0105 20:31:05.748469 4754 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ba5c5f3-19fb-4bd8-b457-976338fd6db4-logs\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:05 crc kubenswrapper[4754]: I0105 20:31:05.748479 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stqww\" (UniqueName: \"kubernetes.io/projected/2ba5c5f3-19fb-4bd8-b457-976338fd6db4-kube-api-access-stqww\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:05 crc kubenswrapper[4754]: I0105 20:31:05.797582 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 05 20:31:06 crc kubenswrapper[4754]: I0105 20:31:06.482263 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2ba5c5f3-19fb-4bd8-b457-976338fd6db4","Type":"ContainerDied","Data":"c03b358dba8062837508523d69ee3b6dc046427bf277b1c726a0db05eec75c6b"} Jan 05 20:31:06 crc kubenswrapper[4754]: I0105 20:31:06.482971 4754 scope.go:117] "RemoveContainer" containerID="95340ea5380f918ceaf437d7bfce1b3a100424e933dd9197baa29c658a2f8e7a" Jan 05 20:31:06 crc kubenswrapper[4754]: I0105 20:31:06.483157 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 05 20:31:06 crc kubenswrapper[4754]: I0105 20:31:06.493258 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f","Type":"ContainerStarted","Data":"a97ed84d51f82d64512ed7ceb1f4ba52e6804bd89f4f73f7e98edfe95d07939b"} Jan 05 20:31:06 crc kubenswrapper[4754]: I0105 20:31:06.496662 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7e603b43-937b-4fb6-8b7b-e4f93e03b069","Type":"ContainerStarted","Data":"fb57b23d3e04615db9718a37d743e238b46483f8db03070ba10b7c9deba339f7"} Jan 05 20:31:06 crc kubenswrapper[4754]: I0105 20:31:06.496683 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7e603b43-937b-4fb6-8b7b-e4f93e03b069","Type":"ContainerStarted","Data":"f61b57a8c0f6c410272fed6b7ecc0cea8eb1355da16202ad4d7a215ac660ea63"} Jan 05 20:31:06 crc kubenswrapper[4754]: I0105 20:31:06.520697 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.520682172 podStartE2EDuration="3.520682172s" podCreationTimestamp="2026-01-05 20:31:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:31:06.512714872 +0000 UTC m=+1553.221898746" watchObservedRunningTime="2026-01-05 20:31:06.520682172 +0000 UTC m=+1553.229866046" Jan 05 20:31:06 crc kubenswrapper[4754]: I0105 20:31:06.537389 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.537326089 podStartE2EDuration="2.537326089s" podCreationTimestamp="2026-01-05 20:31:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:31:06.533069157 +0000 UTC m=+1553.242253051" 
watchObservedRunningTime="2026-01-05 20:31:06.537326089 +0000 UTC m=+1553.246509963" Jan 05 20:31:06 crc kubenswrapper[4754]: I0105 20:31:06.551318 4754 scope.go:117] "RemoveContainer" containerID="3d3a74b47c2547315d04878e141c9c9adce046f72463723dfb9d691118d6a302" Jan 05 20:31:06 crc kubenswrapper[4754]: I0105 20:31:06.578226 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 05 20:31:06 crc kubenswrapper[4754]: I0105 20:31:06.600802 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 05 20:31:06 crc kubenswrapper[4754]: I0105 20:31:06.618810 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 05 20:31:06 crc kubenswrapper[4754]: E0105 20:31:06.620954 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ba5c5f3-19fb-4bd8-b457-976338fd6db4" containerName="nova-api-log" Jan 05 20:31:06 crc kubenswrapper[4754]: I0105 20:31:06.620974 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ba5c5f3-19fb-4bd8-b457-976338fd6db4" containerName="nova-api-log" Jan 05 20:31:06 crc kubenswrapper[4754]: E0105 20:31:06.620992 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ba5c5f3-19fb-4bd8-b457-976338fd6db4" containerName="nova-api-api" Jan 05 20:31:06 crc kubenswrapper[4754]: I0105 20:31:06.620999 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ba5c5f3-19fb-4bd8-b457-976338fd6db4" containerName="nova-api-api" Jan 05 20:31:06 crc kubenswrapper[4754]: I0105 20:31:06.621228 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ba5c5f3-19fb-4bd8-b457-976338fd6db4" containerName="nova-api-api" Jan 05 20:31:06 crc kubenswrapper[4754]: I0105 20:31:06.621243 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ba5c5f3-19fb-4bd8-b457-976338fd6db4" containerName="nova-api-log" Jan 05 20:31:06 crc kubenswrapper[4754]: I0105 20:31:06.622733 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 05 20:31:06 crc kubenswrapper[4754]: I0105 20:31:06.624711 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 05 20:31:06 crc kubenswrapper[4754]: I0105 20:31:06.635754 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 05 20:31:06 crc kubenswrapper[4754]: I0105 20:31:06.771714 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6lfb\" (UniqueName: \"kubernetes.io/projected/1753ceb9-64e3-4fb9-b33c-d39212c708a2-kube-api-access-g6lfb\") pod \"nova-api-0\" (UID: \"1753ceb9-64e3-4fb9-b33c-d39212c708a2\") " pod="openstack/nova-api-0" Jan 05 20:31:06 crc kubenswrapper[4754]: I0105 20:31:06.771800 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1753ceb9-64e3-4fb9-b33c-d39212c708a2-logs\") pod \"nova-api-0\" (UID: \"1753ceb9-64e3-4fb9-b33c-d39212c708a2\") " pod="openstack/nova-api-0" Jan 05 20:31:06 crc kubenswrapper[4754]: I0105 20:31:06.771820 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1753ceb9-64e3-4fb9-b33c-d39212c708a2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1753ceb9-64e3-4fb9-b33c-d39212c708a2\") " pod="openstack/nova-api-0" Jan 05 20:31:06 crc kubenswrapper[4754]: I0105 20:31:06.772085 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1753ceb9-64e3-4fb9-b33c-d39212c708a2-config-data\") pod \"nova-api-0\" (UID: \"1753ceb9-64e3-4fb9-b33c-d39212c708a2\") " pod="openstack/nova-api-0" Jan 05 20:31:06 crc kubenswrapper[4754]: I0105 20:31:06.874172 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-g6lfb\" (UniqueName: \"kubernetes.io/projected/1753ceb9-64e3-4fb9-b33c-d39212c708a2-kube-api-access-g6lfb\") pod \"nova-api-0\" (UID: \"1753ceb9-64e3-4fb9-b33c-d39212c708a2\") " pod="openstack/nova-api-0" Jan 05 20:31:06 crc kubenswrapper[4754]: I0105 20:31:06.874228 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1753ceb9-64e3-4fb9-b33c-d39212c708a2-logs\") pod \"nova-api-0\" (UID: \"1753ceb9-64e3-4fb9-b33c-d39212c708a2\") " pod="openstack/nova-api-0" Jan 05 20:31:06 crc kubenswrapper[4754]: I0105 20:31:06.874253 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1753ceb9-64e3-4fb9-b33c-d39212c708a2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1753ceb9-64e3-4fb9-b33c-d39212c708a2\") " pod="openstack/nova-api-0" Jan 05 20:31:06 crc kubenswrapper[4754]: I0105 20:31:06.874436 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1753ceb9-64e3-4fb9-b33c-d39212c708a2-config-data\") pod \"nova-api-0\" (UID: \"1753ceb9-64e3-4fb9-b33c-d39212c708a2\") " pod="openstack/nova-api-0" Jan 05 20:31:06 crc kubenswrapper[4754]: I0105 20:31:06.874708 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1753ceb9-64e3-4fb9-b33c-d39212c708a2-logs\") pod \"nova-api-0\" (UID: \"1753ceb9-64e3-4fb9-b33c-d39212c708a2\") " pod="openstack/nova-api-0" Jan 05 20:31:06 crc kubenswrapper[4754]: I0105 20:31:06.879091 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1753ceb9-64e3-4fb9-b33c-d39212c708a2-config-data\") pod \"nova-api-0\" (UID: \"1753ceb9-64e3-4fb9-b33c-d39212c708a2\") " pod="openstack/nova-api-0" Jan 05 20:31:06 crc kubenswrapper[4754]: I0105 20:31:06.879278 4754 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1753ceb9-64e3-4fb9-b33c-d39212c708a2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1753ceb9-64e3-4fb9-b33c-d39212c708a2\") " pod="openstack/nova-api-0" Jan 05 20:31:06 crc kubenswrapper[4754]: I0105 20:31:06.897120 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6lfb\" (UniqueName: \"kubernetes.io/projected/1753ceb9-64e3-4fb9-b33c-d39212c708a2-kube-api-access-g6lfb\") pod \"nova-api-0\" (UID: \"1753ceb9-64e3-4fb9-b33c-d39212c708a2\") " pod="openstack/nova-api-0" Jan 05 20:31:06 crc kubenswrapper[4754]: I0105 20:31:06.960169 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 05 20:31:07 crc kubenswrapper[4754]: I0105 20:31:07.603851 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ba5c5f3-19fb-4bd8-b457-976338fd6db4" path="/var/lib/kubelet/pods/2ba5c5f3-19fb-4bd8-b457-976338fd6db4/volumes" Jan 05 20:31:08 crc kubenswrapper[4754]: I0105 20:31:08.878956 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 05 20:31:08 crc kubenswrapper[4754]: I0105 20:31:08.880524 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 05 20:31:09 crc kubenswrapper[4754]: I0105 20:31:09.813727 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 05 20:31:10 crc kubenswrapper[4754]: I0105 20:31:10.553156 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-z2gzv" event={"ID":"25d741bd-ab72-4eff-9d37-8ecf50a50698","Type":"ContainerStarted","Data":"45874373e82966fda92dbba58cc57ecb378f59a660fae9d37eeda6f3aff61505"} Jan 05 20:31:10 crc kubenswrapper[4754]: I0105 20:31:10.580624 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/aodh-db-sync-z2gzv" podStartSLOduration=1.874956973 podStartE2EDuration="8.580591536s" podCreationTimestamp="2026-01-05 20:31:02 +0000 UTC" firstStartedPulling="2026-01-05 20:31:03.404603022 +0000 UTC m=+1550.113786896" lastFinishedPulling="2026-01-05 20:31:10.110237585 +0000 UTC m=+1556.819421459" observedRunningTime="2026-01-05 20:31:10.569592387 +0000 UTC m=+1557.278776261" watchObservedRunningTime="2026-01-05 20:31:10.580591536 +0000 UTC m=+1557.289775420" Jan 05 20:31:10 crc kubenswrapper[4754]: I0105 20:31:10.631272 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 05 20:31:11 crc kubenswrapper[4754]: I0105 20:31:11.568165 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1753ceb9-64e3-4fb9-b33c-d39212c708a2","Type":"ContainerStarted","Data":"502b9f46682cd2e8a6730318a3cbbc1486c82b3e69a37d9af1234a31be2e8082"} Jan 05 20:31:11 crc kubenswrapper[4754]: I0105 20:31:11.568610 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1753ceb9-64e3-4fb9-b33c-d39212c708a2","Type":"ContainerStarted","Data":"584892036e205a2c24afbeda4306ae9014bbfd5746f4d3d71d4150b595be2dbd"} Jan 05 20:31:11 crc kubenswrapper[4754]: I0105 20:31:11.568622 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1753ceb9-64e3-4fb9-b33c-d39212c708a2","Type":"ContainerStarted","Data":"7f66e905c9ea508c797bb0043e0a5a708aca804b86d2b5608970f329c3f8db80"} Jan 05 20:31:11 crc kubenswrapper[4754]: I0105 20:31:11.579488 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ls52t" Jan 05 20:31:11 crc kubenswrapper[4754]: I0105 20:31:11.610413 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=5.610352245 podStartE2EDuration="5.610352245s" podCreationTimestamp="2026-01-05 20:31:06 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:31:11.598059892 +0000 UTC m=+1558.307243766" watchObservedRunningTime="2026-01-05 20:31:11.610352245 +0000 UTC m=+1558.319536119" Jan 05 20:31:11 crc kubenswrapper[4754]: I0105 20:31:11.668277 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ls52t"] Jan 05 20:31:12 crc kubenswrapper[4754]: I0105 20:31:12.583675 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ls52t" podUID="4cce9715-2736-4de9-978c-355b04462cc6" containerName="registry-server" containerID="cri-o://214afb97c9b7c5f82acdbf891bec492425bcd6b556dbe2a1b9a7cdc7b831a132" gracePeriod=2 Jan 05 20:31:13 crc kubenswrapper[4754]: I0105 20:31:13.248693 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ls52t" Jan 05 20:31:13 crc kubenswrapper[4754]: I0105 20:31:13.437362 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cce9715-2736-4de9-978c-355b04462cc6-utilities\") pod \"4cce9715-2736-4de9-978c-355b04462cc6\" (UID: \"4cce9715-2736-4de9-978c-355b04462cc6\") " Jan 05 20:31:13 crc kubenswrapper[4754]: I0105 20:31:13.437538 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cce9715-2736-4de9-978c-355b04462cc6-catalog-content\") pod \"4cce9715-2736-4de9-978c-355b04462cc6\" (UID: \"4cce9715-2736-4de9-978c-355b04462cc6\") " Jan 05 20:31:13 crc kubenswrapper[4754]: I0105 20:31:13.437581 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tb989\" (UniqueName: \"kubernetes.io/projected/4cce9715-2736-4de9-978c-355b04462cc6-kube-api-access-tb989\") pod 
\"4cce9715-2736-4de9-978c-355b04462cc6\" (UID: \"4cce9715-2736-4de9-978c-355b04462cc6\") " Jan 05 20:31:13 crc kubenswrapper[4754]: I0105 20:31:13.439133 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cce9715-2736-4de9-978c-355b04462cc6-utilities" (OuterVolumeSpecName: "utilities") pod "4cce9715-2736-4de9-978c-355b04462cc6" (UID: "4cce9715-2736-4de9-978c-355b04462cc6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:31:13 crc kubenswrapper[4754]: I0105 20:31:13.452051 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cce9715-2736-4de9-978c-355b04462cc6-kube-api-access-tb989" (OuterVolumeSpecName: "kube-api-access-tb989") pod "4cce9715-2736-4de9-978c-355b04462cc6" (UID: "4cce9715-2736-4de9-978c-355b04462cc6"). InnerVolumeSpecName "kube-api-access-tb989". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:31:13 crc kubenswrapper[4754]: I0105 20:31:13.483072 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cce9715-2736-4de9-978c-355b04462cc6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4cce9715-2736-4de9-978c-355b04462cc6" (UID: "4cce9715-2736-4de9-978c-355b04462cc6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:31:13 crc kubenswrapper[4754]: I0105 20:31:13.539781 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cce9715-2736-4de9-978c-355b04462cc6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:13 crc kubenswrapper[4754]: I0105 20:31:13.539819 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tb989\" (UniqueName: \"kubernetes.io/projected/4cce9715-2736-4de9-978c-355b04462cc6-kube-api-access-tb989\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:13 crc kubenswrapper[4754]: I0105 20:31:13.539830 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cce9715-2736-4de9-978c-355b04462cc6-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:13 crc kubenswrapper[4754]: I0105 20:31:13.598612 4754 generic.go:334] "Generic (PLEG): container finished" podID="25d741bd-ab72-4eff-9d37-8ecf50a50698" containerID="45874373e82966fda92dbba58cc57ecb378f59a660fae9d37eeda6f3aff61505" exitCode=0 Jan 05 20:31:13 crc kubenswrapper[4754]: I0105 20:31:13.601768 4754 generic.go:334] "Generic (PLEG): container finished" podID="4cce9715-2736-4de9-978c-355b04462cc6" containerID="214afb97c9b7c5f82acdbf891bec492425bcd6b556dbe2a1b9a7cdc7b831a132" exitCode=0 Jan 05 20:31:13 crc kubenswrapper[4754]: I0105 20:31:13.601901 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ls52t" Jan 05 20:31:13 crc kubenswrapper[4754]: I0105 20:31:13.607022 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-z2gzv" event={"ID":"25d741bd-ab72-4eff-9d37-8ecf50a50698","Type":"ContainerDied","Data":"45874373e82966fda92dbba58cc57ecb378f59a660fae9d37eeda6f3aff61505"} Jan 05 20:31:13 crc kubenswrapper[4754]: I0105 20:31:13.607095 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ls52t" event={"ID":"4cce9715-2736-4de9-978c-355b04462cc6","Type":"ContainerDied","Data":"214afb97c9b7c5f82acdbf891bec492425bcd6b556dbe2a1b9a7cdc7b831a132"} Jan 05 20:31:13 crc kubenswrapper[4754]: I0105 20:31:13.607126 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ls52t" event={"ID":"4cce9715-2736-4de9-978c-355b04462cc6","Type":"ContainerDied","Data":"e6642792859985d8c56d377f3c920ca0c662f56bf30c45d48e24d890bd975121"} Jan 05 20:31:13 crc kubenswrapper[4754]: I0105 20:31:13.607155 4754 scope.go:117] "RemoveContainer" containerID="214afb97c9b7c5f82acdbf891bec492425bcd6b556dbe2a1b9a7cdc7b831a132" Jan 05 20:31:13 crc kubenswrapper[4754]: I0105 20:31:13.645209 4754 scope.go:117] "RemoveContainer" containerID="96a3f2221617ed0da38017e2d8a954edaf40c2ece75ca943fcfff287f70f9bb5" Jan 05 20:31:13 crc kubenswrapper[4754]: I0105 20:31:13.679974 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ls52t"] Jan 05 20:31:13 crc kubenswrapper[4754]: I0105 20:31:13.687315 4754 scope.go:117] "RemoveContainer" containerID="6eef879597f8fa2d2576ecc3985c3809fbe69fb20c678a0c80a5f1ffd9763d65" Jan 05 20:31:13 crc kubenswrapper[4754]: I0105 20:31:13.692392 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ls52t"] Jan 05 20:31:13 crc kubenswrapper[4754]: I0105 20:31:13.741786 4754 scope.go:117] 
"RemoveContainer" containerID="214afb97c9b7c5f82acdbf891bec492425bcd6b556dbe2a1b9a7cdc7b831a132" Jan 05 20:31:13 crc kubenswrapper[4754]: E0105 20:31:13.742593 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"214afb97c9b7c5f82acdbf891bec492425bcd6b556dbe2a1b9a7cdc7b831a132\": container with ID starting with 214afb97c9b7c5f82acdbf891bec492425bcd6b556dbe2a1b9a7cdc7b831a132 not found: ID does not exist" containerID="214afb97c9b7c5f82acdbf891bec492425bcd6b556dbe2a1b9a7cdc7b831a132" Jan 05 20:31:13 crc kubenswrapper[4754]: I0105 20:31:13.742637 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"214afb97c9b7c5f82acdbf891bec492425bcd6b556dbe2a1b9a7cdc7b831a132"} err="failed to get container status \"214afb97c9b7c5f82acdbf891bec492425bcd6b556dbe2a1b9a7cdc7b831a132\": rpc error: code = NotFound desc = could not find container \"214afb97c9b7c5f82acdbf891bec492425bcd6b556dbe2a1b9a7cdc7b831a132\": container with ID starting with 214afb97c9b7c5f82acdbf891bec492425bcd6b556dbe2a1b9a7cdc7b831a132 not found: ID does not exist" Jan 05 20:31:13 crc kubenswrapper[4754]: I0105 20:31:13.742684 4754 scope.go:117] "RemoveContainer" containerID="96a3f2221617ed0da38017e2d8a954edaf40c2ece75ca943fcfff287f70f9bb5" Jan 05 20:31:13 crc kubenswrapper[4754]: E0105 20:31:13.743114 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96a3f2221617ed0da38017e2d8a954edaf40c2ece75ca943fcfff287f70f9bb5\": container with ID starting with 96a3f2221617ed0da38017e2d8a954edaf40c2ece75ca943fcfff287f70f9bb5 not found: ID does not exist" containerID="96a3f2221617ed0da38017e2d8a954edaf40c2ece75ca943fcfff287f70f9bb5" Jan 05 20:31:13 crc kubenswrapper[4754]: I0105 20:31:13.743158 4754 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"96a3f2221617ed0da38017e2d8a954edaf40c2ece75ca943fcfff287f70f9bb5"} err="failed to get container status \"96a3f2221617ed0da38017e2d8a954edaf40c2ece75ca943fcfff287f70f9bb5\": rpc error: code = NotFound desc = could not find container \"96a3f2221617ed0da38017e2d8a954edaf40c2ece75ca943fcfff287f70f9bb5\": container with ID starting with 96a3f2221617ed0da38017e2d8a954edaf40c2ece75ca943fcfff287f70f9bb5 not found: ID does not exist" Jan 05 20:31:13 crc kubenswrapper[4754]: I0105 20:31:13.743180 4754 scope.go:117] "RemoveContainer" containerID="6eef879597f8fa2d2576ecc3985c3809fbe69fb20c678a0c80a5f1ffd9763d65" Jan 05 20:31:13 crc kubenswrapper[4754]: E0105 20:31:13.745409 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6eef879597f8fa2d2576ecc3985c3809fbe69fb20c678a0c80a5f1ffd9763d65\": container with ID starting with 6eef879597f8fa2d2576ecc3985c3809fbe69fb20c678a0c80a5f1ffd9763d65 not found: ID does not exist" containerID="6eef879597f8fa2d2576ecc3985c3809fbe69fb20c678a0c80a5f1ffd9763d65" Jan 05 20:31:13 crc kubenswrapper[4754]: I0105 20:31:13.745442 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6eef879597f8fa2d2576ecc3985c3809fbe69fb20c678a0c80a5f1ffd9763d65"} err="failed to get container status \"6eef879597f8fa2d2576ecc3985c3809fbe69fb20c678a0c80a5f1ffd9763d65\": rpc error: code = NotFound desc = could not find container \"6eef879597f8fa2d2576ecc3985c3809fbe69fb20c678a0c80a5f1ffd9763d65\": container with ID starting with 6eef879597f8fa2d2576ecc3985c3809fbe69fb20c678a0c80a5f1ffd9763d65 not found: ID does not exist" Jan 05 20:31:13 crc kubenswrapper[4754]: I0105 20:31:13.748215 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 05 20:31:13 crc kubenswrapper[4754]: I0105 20:31:13.878824 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/nova-metadata-0" Jan 05 20:31:13 crc kubenswrapper[4754]: I0105 20:31:13.879404 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 05 20:31:14 crc kubenswrapper[4754]: I0105 20:31:14.589647 4754 scope.go:117] "RemoveContainer" containerID="1ca93b3e456962380f07853ce3cbf0c6bfbaabb69bd3827ae3e68631c1bd1572" Jan 05 20:31:14 crc kubenswrapper[4754]: E0105 20:31:14.591242 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:31:14 crc kubenswrapper[4754]: I0105 20:31:14.813333 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 05 20:31:14 crc kubenswrapper[4754]: I0105 20:31:14.865593 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 05 20:31:14 crc kubenswrapper[4754]: I0105 20:31:14.892434 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.250:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 20:31:14 crc kubenswrapper[4754]: I0105 20:31:14.892442 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.250:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 05 
20:31:15 crc kubenswrapper[4754]: I0105 20:31:15.149093 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-z2gzv" Jan 05 20:31:15 crc kubenswrapper[4754]: I0105 20:31:15.286686 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25d741bd-ab72-4eff-9d37-8ecf50a50698-config-data\") pod \"25d741bd-ab72-4eff-9d37-8ecf50a50698\" (UID: \"25d741bd-ab72-4eff-9d37-8ecf50a50698\") " Jan 05 20:31:15 crc kubenswrapper[4754]: I0105 20:31:15.286803 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25d741bd-ab72-4eff-9d37-8ecf50a50698-scripts\") pod \"25d741bd-ab72-4eff-9d37-8ecf50a50698\" (UID: \"25d741bd-ab72-4eff-9d37-8ecf50a50698\") " Jan 05 20:31:15 crc kubenswrapper[4754]: I0105 20:31:15.286978 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25d741bd-ab72-4eff-9d37-8ecf50a50698-combined-ca-bundle\") pod \"25d741bd-ab72-4eff-9d37-8ecf50a50698\" (UID: \"25d741bd-ab72-4eff-9d37-8ecf50a50698\") " Jan 05 20:31:15 crc kubenswrapper[4754]: I0105 20:31:15.287108 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqffd\" (UniqueName: \"kubernetes.io/projected/25d741bd-ab72-4eff-9d37-8ecf50a50698-kube-api-access-lqffd\") pod \"25d741bd-ab72-4eff-9d37-8ecf50a50698\" (UID: \"25d741bd-ab72-4eff-9d37-8ecf50a50698\") " Jan 05 20:31:15 crc kubenswrapper[4754]: I0105 20:31:15.293591 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25d741bd-ab72-4eff-9d37-8ecf50a50698-scripts" (OuterVolumeSpecName: "scripts") pod "25d741bd-ab72-4eff-9d37-8ecf50a50698" (UID: "25d741bd-ab72-4eff-9d37-8ecf50a50698"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:31:15 crc kubenswrapper[4754]: I0105 20:31:15.295461 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25d741bd-ab72-4eff-9d37-8ecf50a50698-kube-api-access-lqffd" (OuterVolumeSpecName: "kube-api-access-lqffd") pod "25d741bd-ab72-4eff-9d37-8ecf50a50698" (UID: "25d741bd-ab72-4eff-9d37-8ecf50a50698"). InnerVolumeSpecName "kube-api-access-lqffd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:31:15 crc kubenswrapper[4754]: I0105 20:31:15.339150 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25d741bd-ab72-4eff-9d37-8ecf50a50698-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25d741bd-ab72-4eff-9d37-8ecf50a50698" (UID: "25d741bd-ab72-4eff-9d37-8ecf50a50698"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:31:15 crc kubenswrapper[4754]: I0105 20:31:15.373549 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25d741bd-ab72-4eff-9d37-8ecf50a50698-config-data" (OuterVolumeSpecName: "config-data") pod "25d741bd-ab72-4eff-9d37-8ecf50a50698" (UID: "25d741bd-ab72-4eff-9d37-8ecf50a50698"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:31:15 crc kubenswrapper[4754]: I0105 20:31:15.390702 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25d741bd-ab72-4eff-9d37-8ecf50a50698-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:15 crc kubenswrapper[4754]: I0105 20:31:15.390735 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25d741bd-ab72-4eff-9d37-8ecf50a50698-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:15 crc kubenswrapper[4754]: I0105 20:31:15.390747 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25d741bd-ab72-4eff-9d37-8ecf50a50698-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:15 crc kubenswrapper[4754]: I0105 20:31:15.390757 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqffd\" (UniqueName: \"kubernetes.io/projected/25d741bd-ab72-4eff-9d37-8ecf50a50698-kube-api-access-lqffd\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:15 crc kubenswrapper[4754]: I0105 20:31:15.602945 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cce9715-2736-4de9-978c-355b04462cc6" path="/var/lib/kubelet/pods/4cce9715-2736-4de9-978c-355b04462cc6/volumes" Jan 05 20:31:15 crc kubenswrapper[4754]: I0105 20:31:15.684424 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-z2gzv" event={"ID":"25d741bd-ab72-4eff-9d37-8ecf50a50698","Type":"ContainerDied","Data":"872eecc1bff5a57a02cc23cea750e3407e0dbb68236fb9ae015be22ccc3f7f3b"} Jan 05 20:31:15 crc kubenswrapper[4754]: I0105 20:31:15.684496 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="872eecc1bff5a57a02cc23cea750e3407e0dbb68236fb9ae015be22ccc3f7f3b" Jan 05 20:31:15 crc kubenswrapper[4754]: I0105 20:31:15.684615 4754 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/aodh-db-sync-z2gzv" Jan 05 20:31:15 crc kubenswrapper[4754]: I0105 20:31:15.717284 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 05 20:31:16 crc kubenswrapper[4754]: I0105 20:31:16.961827 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 05 20:31:16 crc kubenswrapper[4754]: I0105 20:31:16.964247 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 05 20:31:17 crc kubenswrapper[4754]: I0105 20:31:17.055278 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 05 20:31:17 crc kubenswrapper[4754]: E0105 20:31:17.056038 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cce9715-2736-4de9-978c-355b04462cc6" containerName="extract-utilities" Jan 05 20:31:17 crc kubenswrapper[4754]: I0105 20:31:17.056056 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cce9715-2736-4de9-978c-355b04462cc6" containerName="extract-utilities" Jan 05 20:31:17 crc kubenswrapper[4754]: E0105 20:31:17.056100 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cce9715-2736-4de9-978c-355b04462cc6" containerName="registry-server" Jan 05 20:31:17 crc kubenswrapper[4754]: I0105 20:31:17.056108 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cce9715-2736-4de9-978c-355b04462cc6" containerName="registry-server" Jan 05 20:31:17 crc kubenswrapper[4754]: E0105 20:31:17.056131 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25d741bd-ab72-4eff-9d37-8ecf50a50698" containerName="aodh-db-sync" Jan 05 20:31:17 crc kubenswrapper[4754]: I0105 20:31:17.056139 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="25d741bd-ab72-4eff-9d37-8ecf50a50698" containerName="aodh-db-sync" Jan 05 20:31:17 crc kubenswrapper[4754]: E0105 20:31:17.056158 4754 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="4cce9715-2736-4de9-978c-355b04462cc6" containerName="extract-content" Jan 05 20:31:17 crc kubenswrapper[4754]: I0105 20:31:17.056165 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cce9715-2736-4de9-978c-355b04462cc6" containerName="extract-content" Jan 05 20:31:17 crc kubenswrapper[4754]: I0105 20:31:17.056499 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cce9715-2736-4de9-978c-355b04462cc6" containerName="registry-server" Jan 05 20:31:17 crc kubenswrapper[4754]: I0105 20:31:17.056524 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="25d741bd-ab72-4eff-9d37-8ecf50a50698" containerName="aodh-db-sync" Jan 05 20:31:17 crc kubenswrapper[4754]: I0105 20:31:17.063438 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 05 20:31:17 crc kubenswrapper[4754]: I0105 20:31:17.070786 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-kq9pz" Jan 05 20:31:17 crc kubenswrapper[4754]: I0105 20:31:17.071058 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 05 20:31:17 crc kubenswrapper[4754]: I0105 20:31:17.071110 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 05 20:31:17 crc kubenswrapper[4754]: I0105 20:31:17.085835 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 05 20:31:17 crc kubenswrapper[4754]: I0105 20:31:17.239926 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/312545aa-f35e-4c53-89e7-38ecc1d4827f-config-data\") pod \"aodh-0\" (UID: \"312545aa-f35e-4c53-89e7-38ecc1d4827f\") " pod="openstack/aodh-0" Jan 05 20:31:17 crc kubenswrapper[4754]: I0105 20:31:17.240002 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grzbc\" (UniqueName: \"kubernetes.io/projected/312545aa-f35e-4c53-89e7-38ecc1d4827f-kube-api-access-grzbc\") pod \"aodh-0\" (UID: \"312545aa-f35e-4c53-89e7-38ecc1d4827f\") " pod="openstack/aodh-0" Jan 05 20:31:17 crc kubenswrapper[4754]: I0105 20:31:17.240066 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/312545aa-f35e-4c53-89e7-38ecc1d4827f-combined-ca-bundle\") pod \"aodh-0\" (UID: \"312545aa-f35e-4c53-89e7-38ecc1d4827f\") " pod="openstack/aodh-0" Jan 05 20:31:17 crc kubenswrapper[4754]: I0105 20:31:17.240115 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/312545aa-f35e-4c53-89e7-38ecc1d4827f-scripts\") pod \"aodh-0\" (UID: \"312545aa-f35e-4c53-89e7-38ecc1d4827f\") " pod="openstack/aodh-0" Jan 05 20:31:17 crc kubenswrapper[4754]: I0105 20:31:17.344252 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/312545aa-f35e-4c53-89e7-38ecc1d4827f-config-data\") pod \"aodh-0\" (UID: \"312545aa-f35e-4c53-89e7-38ecc1d4827f\") " pod="openstack/aodh-0" Jan 05 20:31:17 crc kubenswrapper[4754]: I0105 20:31:17.344356 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grzbc\" (UniqueName: \"kubernetes.io/projected/312545aa-f35e-4c53-89e7-38ecc1d4827f-kube-api-access-grzbc\") pod \"aodh-0\" (UID: \"312545aa-f35e-4c53-89e7-38ecc1d4827f\") " pod="openstack/aodh-0" Jan 05 20:31:17 crc kubenswrapper[4754]: I0105 20:31:17.344404 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/312545aa-f35e-4c53-89e7-38ecc1d4827f-combined-ca-bundle\") pod \"aodh-0\" (UID: 
\"312545aa-f35e-4c53-89e7-38ecc1d4827f\") " pod="openstack/aodh-0" Jan 05 20:31:17 crc kubenswrapper[4754]: I0105 20:31:17.344446 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/312545aa-f35e-4c53-89e7-38ecc1d4827f-scripts\") pod \"aodh-0\" (UID: \"312545aa-f35e-4c53-89e7-38ecc1d4827f\") " pod="openstack/aodh-0" Jan 05 20:31:17 crc kubenswrapper[4754]: I0105 20:31:17.353925 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/312545aa-f35e-4c53-89e7-38ecc1d4827f-scripts\") pod \"aodh-0\" (UID: \"312545aa-f35e-4c53-89e7-38ecc1d4827f\") " pod="openstack/aodh-0" Jan 05 20:31:17 crc kubenswrapper[4754]: I0105 20:31:17.354271 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/312545aa-f35e-4c53-89e7-38ecc1d4827f-config-data\") pod \"aodh-0\" (UID: \"312545aa-f35e-4c53-89e7-38ecc1d4827f\") " pod="openstack/aodh-0" Jan 05 20:31:17 crc kubenswrapper[4754]: I0105 20:31:17.370166 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/312545aa-f35e-4c53-89e7-38ecc1d4827f-combined-ca-bundle\") pod \"aodh-0\" (UID: \"312545aa-f35e-4c53-89e7-38ecc1d4827f\") " pod="openstack/aodh-0" Jan 05 20:31:17 crc kubenswrapper[4754]: I0105 20:31:17.371249 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grzbc\" (UniqueName: \"kubernetes.io/projected/312545aa-f35e-4c53-89e7-38ecc1d4827f-kube-api-access-grzbc\") pod \"aodh-0\" (UID: \"312545aa-f35e-4c53-89e7-38ecc1d4827f\") " pod="openstack/aodh-0" Jan 05 20:31:17 crc kubenswrapper[4754]: I0105 20:31:17.390781 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 05 20:31:17 crc kubenswrapper[4754]: W0105 20:31:17.929461 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod312545aa_f35e_4c53_89e7_38ecc1d4827f.slice/crio-13b154bfffbdcc998d21905ecc9f94338bf28aa9477736ca26e96fb960dfab19 WatchSource:0}: Error finding container 13b154bfffbdcc998d21905ecc9f94338bf28aa9477736ca26e96fb960dfab19: Status 404 returned error can't find the container with id 13b154bfffbdcc998d21905ecc9f94338bf28aa9477736ca26e96fb960dfab19 Jan 05 20:31:17 crc kubenswrapper[4754]: I0105 20:31:17.943165 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 05 20:31:18 crc kubenswrapper[4754]: I0105 20:31:18.044668 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1753ceb9-64e3-4fb9-b33c-d39212c708a2" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.252:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 20:31:18 crc kubenswrapper[4754]: I0105 20:31:18.044927 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1753ceb9-64e3-4fb9-b33c-d39212c708a2" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.252:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 20:31:18 crc kubenswrapper[4754]: I0105 20:31:18.744977 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"312545aa-f35e-4c53-89e7-38ecc1d4827f","Type":"ContainerStarted","Data":"13b154bfffbdcc998d21905ecc9f94338bf28aa9477736ca26e96fb960dfab19"} Jan 05 20:31:18 crc kubenswrapper[4754]: E0105 20:31:18.935204 4754 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cce9715_2736_4de9_978c_355b04462cc6.slice/crio-e6642792859985d8c56d377f3c920ca0c662f56bf30c45d48e24d890bd975121\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cce9715_2736_4de9_978c_355b04462cc6.slice\": RecentStats: unable to find data in memory cache]" Jan 05 20:31:19 crc kubenswrapper[4754]: I0105 20:31:19.195864 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:31:19 crc kubenswrapper[4754]: I0105 20:31:19.196361 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1" containerName="sg-core" containerID="cri-o://790f194d4034ac087a859df96ac871d99cd4854c57a4f74c1d8e1f13d44e4cbd" gracePeriod=30 Jan 05 20:31:19 crc kubenswrapper[4754]: I0105 20:31:19.196390 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1" containerName="proxy-httpd" containerID="cri-o://e858f46f799d28bf897b93e8eac8e767e3054d1bcc6da26b5e5aaa33677ed3a7" gracePeriod=30 Jan 05 20:31:19 crc kubenswrapper[4754]: I0105 20:31:19.196531 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1" containerName="ceilometer-notification-agent" containerID="cri-o://dc6b9ad0c3d3f38abebc800fa122e29b7fcd21e8648841ac36fb00d1bbd27326" gracePeriod=30 Jan 05 20:31:19 crc kubenswrapper[4754]: I0105 20:31:19.196711 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1" containerName="ceilometer-central-agent" containerID="cri-o://f6fe68441e0fae7c0dd65f152badf1118a6348f823d12bacaed6d7f035236bda" gracePeriod=30 Jan 05 20:31:19 crc kubenswrapper[4754]: I0105 
20:31:19.204949 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.247:3000/\": EOF" Jan 05 20:31:19 crc kubenswrapper[4754]: E0105 20:31:19.422726 4754 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cce9715_2736_4de9_978c_355b04462cc6.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cce9715_2736_4de9_978c_355b04462cc6.slice/crio-e6642792859985d8c56d377f3c920ca0c662f56bf30c45d48e24d890bd975121\": RecentStats: unable to find data in memory cache]" Jan 05 20:31:19 crc kubenswrapper[4754]: I0105 20:31:19.778252 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"312545aa-f35e-4c53-89e7-38ecc1d4827f","Type":"ContainerStarted","Data":"8fa3d8567631906e1958b040d2b158b3408cd04c47864372f36649e48dd49e3e"} Jan 05 20:31:19 crc kubenswrapper[4754]: I0105 20:31:19.783721 4754 generic.go:334] "Generic (PLEG): container finished" podID="5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1" containerID="e858f46f799d28bf897b93e8eac8e767e3054d1bcc6da26b5e5aaa33677ed3a7" exitCode=0 Jan 05 20:31:19 crc kubenswrapper[4754]: I0105 20:31:19.783753 4754 generic.go:334] "Generic (PLEG): container finished" podID="5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1" containerID="790f194d4034ac087a859df96ac871d99cd4854c57a4f74c1d8e1f13d44e4cbd" exitCode=2 Jan 05 20:31:19 crc kubenswrapper[4754]: I0105 20:31:19.783771 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1","Type":"ContainerDied","Data":"e858f46f799d28bf897b93e8eac8e767e3054d1bcc6da26b5e5aaa33677ed3a7"} Jan 05 20:31:19 crc kubenswrapper[4754]: I0105 20:31:19.783795 4754 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1","Type":"ContainerDied","Data":"790f194d4034ac087a859df96ac871d99cd4854c57a4f74c1d8e1f13d44e4cbd"} Jan 05 20:31:20 crc kubenswrapper[4754]: I0105 20:31:20.537905 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 05 20:31:20 crc kubenswrapper[4754]: I0105 20:31:20.827056 4754 generic.go:334] "Generic (PLEG): container finished" podID="5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1" containerID="f6fe68441e0fae7c0dd65f152badf1118a6348f823d12bacaed6d7f035236bda" exitCode=0 Jan 05 20:31:20 crc kubenswrapper[4754]: I0105 20:31:20.827101 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1","Type":"ContainerDied","Data":"f6fe68441e0fae7c0dd65f152badf1118a6348f823d12bacaed6d7f035236bda"} Jan 05 20:31:21 crc kubenswrapper[4754]: I0105 20:31:21.839407 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"312545aa-f35e-4c53-89e7-38ecc1d4827f","Type":"ContainerStarted","Data":"51ff30be97a60df4f3bc888a61938ef26c7a5c98a3b1c62a880d7a71634afd3e"} Jan 05 20:31:23 crc kubenswrapper[4754]: I0105 20:31:23.699421 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 05 20:31:23 crc kubenswrapper[4754]: I0105 20:31:23.806372 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1-sg-core-conf-yaml\") pod \"5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1\" (UID: \"5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1\") " Jan 05 20:31:23 crc kubenswrapper[4754]: I0105 20:31:23.806852 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1-run-httpd\") pod \"5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1\" (UID: \"5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1\") " Jan 05 20:31:23 crc kubenswrapper[4754]: I0105 20:31:23.806931 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbxsb\" (UniqueName: \"kubernetes.io/projected/5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1-kube-api-access-zbxsb\") pod \"5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1\" (UID: \"5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1\") " Jan 05 20:31:23 crc kubenswrapper[4754]: I0105 20:31:23.807255 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1-log-httpd\") pod \"5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1\" (UID: \"5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1\") " Jan 05 20:31:23 crc kubenswrapper[4754]: I0105 20:31:23.807459 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1-combined-ca-bundle\") pod \"5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1\" (UID: \"5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1\") " Jan 05 20:31:23 crc kubenswrapper[4754]: I0105 20:31:23.807499 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1-config-data\") pod \"5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1\" (UID: \"5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1\") " Jan 05 20:31:23 crc kubenswrapper[4754]: I0105 20:31:23.807532 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1-scripts\") pod \"5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1\" (UID: \"5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1\") " Jan 05 20:31:23 crc kubenswrapper[4754]: I0105 20:31:23.808841 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1" (UID: "5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:31:23 crc kubenswrapper[4754]: I0105 20:31:23.809102 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1" (UID: "5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:31:23 crc kubenswrapper[4754]: I0105 20:31:23.809620 4754 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:23 crc kubenswrapper[4754]: I0105 20:31:23.809661 4754 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:23 crc kubenswrapper[4754]: I0105 20:31:23.817470 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1-scripts" (OuterVolumeSpecName: "scripts") pod "5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1" (UID: "5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:31:23 crc kubenswrapper[4754]: I0105 20:31:23.817832 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1-kube-api-access-zbxsb" (OuterVolumeSpecName: "kube-api-access-zbxsb") pod "5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1" (UID: "5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1"). InnerVolumeSpecName "kube-api-access-zbxsb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:31:23 crc kubenswrapper[4754]: I0105 20:31:23.849638 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1" (UID: "5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:31:23 crc kubenswrapper[4754]: I0105 20:31:23.869737 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"312545aa-f35e-4c53-89e7-38ecc1d4827f","Type":"ContainerStarted","Data":"09d9d571041847d3276c3bc0fcc994ac00235350edef6c0aa418786fe6ef047b"} Jan 05 20:31:23 crc kubenswrapper[4754]: I0105 20:31:23.877245 4754 generic.go:334] "Generic (PLEG): container finished" podID="3e9d5917-d36c-4f58-8154-787b4a799e88" containerID="a8a0c400c952ce14a4a25665b13f822c63ebd8894ea00abfd8c7fe28dc9372f7" exitCode=137 Jan 05 20:31:23 crc kubenswrapper[4754]: I0105 20:31:23.877324 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3e9d5917-d36c-4f58-8154-787b4a799e88","Type":"ContainerDied","Data":"a8a0c400c952ce14a4a25665b13f822c63ebd8894ea00abfd8c7fe28dc9372f7"} Jan 05 20:31:23 crc kubenswrapper[4754]: I0105 20:31:23.882633 4754 generic.go:334] "Generic (PLEG): container finished" podID="5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1" containerID="dc6b9ad0c3d3f38abebc800fa122e29b7fcd21e8648841ac36fb00d1bbd27326" exitCode=0 Jan 05 20:31:23 crc kubenswrapper[4754]: I0105 20:31:23.882685 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1","Type":"ContainerDied","Data":"dc6b9ad0c3d3f38abebc800fa122e29b7fcd21e8648841ac36fb00d1bbd27326"} Jan 05 20:31:23 crc kubenswrapper[4754]: I0105 20:31:23.882722 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1","Type":"ContainerDied","Data":"83a45036dc640aa901b2cde203ef569d2427acbd9d130d7b3e7add205b68ecc6"} Jan 05 20:31:23 crc kubenswrapper[4754]: I0105 20:31:23.882740 4754 scope.go:117] "RemoveContainer" containerID="e858f46f799d28bf897b93e8eac8e767e3054d1bcc6da26b5e5aaa33677ed3a7" Jan 05 20:31:23 crc kubenswrapper[4754]: I0105 
20:31:23.882723 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 05 20:31:23 crc kubenswrapper[4754]: I0105 20:31:23.889155 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 05 20:31:23 crc kubenswrapper[4754]: I0105 20:31:23.910364 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 05 20:31:23 crc kubenswrapper[4754]: I0105 20:31:23.912030 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:23 crc kubenswrapper[4754]: I0105 20:31:23.912061 4754 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:23 crc kubenswrapper[4754]: I0105 20:31:23.912071 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbxsb\" (UniqueName: \"kubernetes.io/projected/5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1-kube-api-access-zbxsb\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:23 crc kubenswrapper[4754]: I0105 20:31:23.926805 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 05 20:31:23 crc kubenswrapper[4754]: I0105 20:31:23.953806 4754 scope.go:117] "RemoveContainer" containerID="790f194d4034ac087a859df96ac871d99cd4854c57a4f74c1d8e1f13d44e4cbd" Jan 05 20:31:23 crc kubenswrapper[4754]: I0105 20:31:23.958621 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1" (UID: "5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:31:23 crc kubenswrapper[4754]: I0105 20:31:23.987868 4754 scope.go:117] "RemoveContainer" containerID="dc6b9ad0c3d3f38abebc800fa122e29b7fcd21e8648841ac36fb00d1bbd27326" Jan 05 20:31:23 crc kubenswrapper[4754]: I0105 20:31:23.989239 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1-config-data" (OuterVolumeSpecName: "config-data") pod "5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1" (UID: "5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.013830 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.013856 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.018876 4754 scope.go:117] "RemoveContainer" containerID="f6fe68441e0fae7c0dd65f152badf1118a6348f823d12bacaed6d7f035236bda" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.044763 4754 scope.go:117] "RemoveContainer" containerID="e858f46f799d28bf897b93e8eac8e767e3054d1bcc6da26b5e5aaa33677ed3a7" Jan 05 20:31:24 crc kubenswrapper[4754]: E0105 20:31:24.046077 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e858f46f799d28bf897b93e8eac8e767e3054d1bcc6da26b5e5aaa33677ed3a7\": container with ID starting with e858f46f799d28bf897b93e8eac8e767e3054d1bcc6da26b5e5aaa33677ed3a7 not found: ID does not exist" 
containerID="e858f46f799d28bf897b93e8eac8e767e3054d1bcc6da26b5e5aaa33677ed3a7" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.046166 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e858f46f799d28bf897b93e8eac8e767e3054d1bcc6da26b5e5aaa33677ed3a7"} err="failed to get container status \"e858f46f799d28bf897b93e8eac8e767e3054d1bcc6da26b5e5aaa33677ed3a7\": rpc error: code = NotFound desc = could not find container \"e858f46f799d28bf897b93e8eac8e767e3054d1bcc6da26b5e5aaa33677ed3a7\": container with ID starting with e858f46f799d28bf897b93e8eac8e767e3054d1bcc6da26b5e5aaa33677ed3a7 not found: ID does not exist" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.046201 4754 scope.go:117] "RemoveContainer" containerID="790f194d4034ac087a859df96ac871d99cd4854c57a4f74c1d8e1f13d44e4cbd" Jan 05 20:31:24 crc kubenswrapper[4754]: E0105 20:31:24.046942 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"790f194d4034ac087a859df96ac871d99cd4854c57a4f74c1d8e1f13d44e4cbd\": container with ID starting with 790f194d4034ac087a859df96ac871d99cd4854c57a4f74c1d8e1f13d44e4cbd not found: ID does not exist" containerID="790f194d4034ac087a859df96ac871d99cd4854c57a4f74c1d8e1f13d44e4cbd" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.046965 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"790f194d4034ac087a859df96ac871d99cd4854c57a4f74c1d8e1f13d44e4cbd"} err="failed to get container status \"790f194d4034ac087a859df96ac871d99cd4854c57a4f74c1d8e1f13d44e4cbd\": rpc error: code = NotFound desc = could not find container \"790f194d4034ac087a859df96ac871d99cd4854c57a4f74c1d8e1f13d44e4cbd\": container with ID starting with 790f194d4034ac087a859df96ac871d99cd4854c57a4f74c1d8e1f13d44e4cbd not found: ID does not exist" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.046983 4754 scope.go:117] 
"RemoveContainer" containerID="dc6b9ad0c3d3f38abebc800fa122e29b7fcd21e8648841ac36fb00d1bbd27326" Jan 05 20:31:24 crc kubenswrapper[4754]: E0105 20:31:24.047175 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc6b9ad0c3d3f38abebc800fa122e29b7fcd21e8648841ac36fb00d1bbd27326\": container with ID starting with dc6b9ad0c3d3f38abebc800fa122e29b7fcd21e8648841ac36fb00d1bbd27326 not found: ID does not exist" containerID="dc6b9ad0c3d3f38abebc800fa122e29b7fcd21e8648841ac36fb00d1bbd27326" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.047199 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc6b9ad0c3d3f38abebc800fa122e29b7fcd21e8648841ac36fb00d1bbd27326"} err="failed to get container status \"dc6b9ad0c3d3f38abebc800fa122e29b7fcd21e8648841ac36fb00d1bbd27326\": rpc error: code = NotFound desc = could not find container \"dc6b9ad0c3d3f38abebc800fa122e29b7fcd21e8648841ac36fb00d1bbd27326\": container with ID starting with dc6b9ad0c3d3f38abebc800fa122e29b7fcd21e8648841ac36fb00d1bbd27326 not found: ID does not exist" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.047216 4754 scope.go:117] "RemoveContainer" containerID="f6fe68441e0fae7c0dd65f152badf1118a6348f823d12bacaed6d7f035236bda" Jan 05 20:31:24 crc kubenswrapper[4754]: E0105 20:31:24.047438 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6fe68441e0fae7c0dd65f152badf1118a6348f823d12bacaed6d7f035236bda\": container with ID starting with f6fe68441e0fae7c0dd65f152badf1118a6348f823d12bacaed6d7f035236bda not found: ID does not exist" containerID="f6fe68441e0fae7c0dd65f152badf1118a6348f823d12bacaed6d7f035236bda" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.047455 4754 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f6fe68441e0fae7c0dd65f152badf1118a6348f823d12bacaed6d7f035236bda"} err="failed to get container status \"f6fe68441e0fae7c0dd65f152badf1118a6348f823d12bacaed6d7f035236bda\": rpc error: code = NotFound desc = could not find container \"f6fe68441e0fae7c0dd65f152badf1118a6348f823d12bacaed6d7f035236bda\": container with ID starting with f6fe68441e0fae7c0dd65f152badf1118a6348f823d12bacaed6d7f035236bda not found: ID does not exist" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.361260 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.386236 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.398059 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.435527 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e9d5917-d36c-4f58-8154-787b4a799e88-config-data\") pod \"3e9d5917-d36c-4f58-8154-787b4a799e88\" (UID: \"3e9d5917-d36c-4f58-8154-787b4a799e88\") " Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.435627 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v57kg\" (UniqueName: \"kubernetes.io/projected/3e9d5917-d36c-4f58-8154-787b4a799e88-kube-api-access-v57kg\") pod \"3e9d5917-d36c-4f58-8154-787b4a799e88\" (UID: \"3e9d5917-d36c-4f58-8154-787b4a799e88\") " Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.435826 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e9d5917-d36c-4f58-8154-787b4a799e88-combined-ca-bundle\") pod \"3e9d5917-d36c-4f58-8154-787b4a799e88\" (UID: 
\"3e9d5917-d36c-4f58-8154-787b4a799e88\") " Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.444494 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e9d5917-d36c-4f58-8154-787b4a799e88-kube-api-access-v57kg" (OuterVolumeSpecName: "kube-api-access-v57kg") pod "3e9d5917-d36c-4f58-8154-787b4a799e88" (UID: "3e9d5917-d36c-4f58-8154-787b4a799e88"). InnerVolumeSpecName "kube-api-access-v57kg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.480680 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:31:24 crc kubenswrapper[4754]: E0105 20:31:24.481685 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e9d5917-d36c-4f58-8154-787b4a799e88" containerName="nova-cell1-novncproxy-novncproxy" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.481708 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e9d5917-d36c-4f58-8154-787b4a799e88" containerName="nova-cell1-novncproxy-novncproxy" Jan 05 20:31:24 crc kubenswrapper[4754]: E0105 20:31:24.481753 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1" containerName="ceilometer-central-agent" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.481760 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1" containerName="ceilometer-central-agent" Jan 05 20:31:24 crc kubenswrapper[4754]: E0105 20:31:24.481794 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1" containerName="proxy-httpd" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.481801 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1" containerName="proxy-httpd" Jan 05 20:31:24 crc kubenswrapper[4754]: E0105 20:31:24.481814 4754 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1" containerName="sg-core" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.481820 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1" containerName="sg-core" Jan 05 20:31:24 crc kubenswrapper[4754]: E0105 20:31:24.481908 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1" containerName="ceilometer-notification-agent" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.481918 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1" containerName="ceilometer-notification-agent" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.482341 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1" containerName="ceilometer-notification-agent" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.482368 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e9d5917-d36c-4f58-8154-787b4a799e88" containerName="nova-cell1-novncproxy-novncproxy" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.482397 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1" containerName="proxy-httpd" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.482417 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1" containerName="sg-core" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.482431 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1" containerName="ceilometer-central-agent" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.497884 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.500705 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.501263 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.511435 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.518029 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e9d5917-d36c-4f58-8154-787b4a799e88-config-data" (OuterVolumeSpecName: "config-data") pod "3e9d5917-d36c-4f58-8154-787b4a799e88" (UID: "3e9d5917-d36c-4f58-8154-787b4a799e88"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.519713 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e9d5917-d36c-4f58-8154-787b4a799e88-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3e9d5917-d36c-4f58-8154-787b4a799e88" (UID: "3e9d5917-d36c-4f58-8154-787b4a799e88"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.539789 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e9d5917-d36c-4f58-8154-787b4a799e88-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.539835 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e9d5917-d36c-4f58-8154-787b4a799e88-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.539846 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v57kg\" (UniqueName: \"kubernetes.io/projected/3e9d5917-d36c-4f58-8154-787b4a799e88-kube-api-access-v57kg\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.641923 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e1c51ca-6534-4284-be98-c77785fa8a23-run-httpd\") pod \"ceilometer-0\" (UID: \"4e1c51ca-6534-4284-be98-c77785fa8a23\") " pod="openstack/ceilometer-0" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.641996 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4e1c51ca-6534-4284-be98-c77785fa8a23-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4e1c51ca-6534-4284-be98-c77785fa8a23\") " pod="openstack/ceilometer-0" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.642022 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e1c51ca-6534-4284-be98-c77785fa8a23-log-httpd\") pod \"ceilometer-0\" (UID: \"4e1c51ca-6534-4284-be98-c77785fa8a23\") " pod="openstack/ceilometer-0" Jan 05 20:31:24 crc 
kubenswrapper[4754]: I0105 20:31:24.642098 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e1c51ca-6534-4284-be98-c77785fa8a23-config-data\") pod \"ceilometer-0\" (UID: \"4e1c51ca-6534-4284-be98-c77785fa8a23\") " pod="openstack/ceilometer-0" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.642139 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlpsq\" (UniqueName: \"kubernetes.io/projected/4e1c51ca-6534-4284-be98-c77785fa8a23-kube-api-access-qlpsq\") pod \"ceilometer-0\" (UID: \"4e1c51ca-6534-4284-be98-c77785fa8a23\") " pod="openstack/ceilometer-0" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.642167 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e1c51ca-6534-4284-be98-c77785fa8a23-scripts\") pod \"ceilometer-0\" (UID: \"4e1c51ca-6534-4284-be98-c77785fa8a23\") " pod="openstack/ceilometer-0" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.642190 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e1c51ca-6534-4284-be98-c77785fa8a23-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4e1c51ca-6534-4284-be98-c77785fa8a23\") " pod="openstack/ceilometer-0" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.743775 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e1c51ca-6534-4284-be98-c77785fa8a23-run-httpd\") pod \"ceilometer-0\" (UID: \"4e1c51ca-6534-4284-be98-c77785fa8a23\") " pod="openstack/ceilometer-0" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.743869 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" 
(UniqueName: \"kubernetes.io/secret/4e1c51ca-6534-4284-be98-c77785fa8a23-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4e1c51ca-6534-4284-be98-c77785fa8a23\") " pod="openstack/ceilometer-0" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.743940 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e1c51ca-6534-4284-be98-c77785fa8a23-log-httpd\") pod \"ceilometer-0\" (UID: \"4e1c51ca-6534-4284-be98-c77785fa8a23\") " pod="openstack/ceilometer-0" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.744084 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e1c51ca-6534-4284-be98-c77785fa8a23-config-data\") pod \"ceilometer-0\" (UID: \"4e1c51ca-6534-4284-be98-c77785fa8a23\") " pod="openstack/ceilometer-0" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.744135 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlpsq\" (UniqueName: \"kubernetes.io/projected/4e1c51ca-6534-4284-be98-c77785fa8a23-kube-api-access-qlpsq\") pod \"ceilometer-0\" (UID: \"4e1c51ca-6534-4284-be98-c77785fa8a23\") " pod="openstack/ceilometer-0" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.744182 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e1c51ca-6534-4284-be98-c77785fa8a23-scripts\") pod \"ceilometer-0\" (UID: \"4e1c51ca-6534-4284-be98-c77785fa8a23\") " pod="openstack/ceilometer-0" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.744208 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e1c51ca-6534-4284-be98-c77785fa8a23-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4e1c51ca-6534-4284-be98-c77785fa8a23\") " pod="openstack/ceilometer-0" Jan 05 20:31:24 crc 
kubenswrapper[4754]: I0105 20:31:24.745694 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e1c51ca-6534-4284-be98-c77785fa8a23-log-httpd\") pod \"ceilometer-0\" (UID: \"4e1c51ca-6534-4284-be98-c77785fa8a23\") " pod="openstack/ceilometer-0" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.745918 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e1c51ca-6534-4284-be98-c77785fa8a23-run-httpd\") pod \"ceilometer-0\" (UID: \"4e1c51ca-6534-4284-be98-c77785fa8a23\") " pod="openstack/ceilometer-0" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.749810 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e1c51ca-6534-4284-be98-c77785fa8a23-scripts\") pod \"ceilometer-0\" (UID: \"4e1c51ca-6534-4284-be98-c77785fa8a23\") " pod="openstack/ceilometer-0" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.750453 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e1c51ca-6534-4284-be98-c77785fa8a23-config-data\") pod \"ceilometer-0\" (UID: \"4e1c51ca-6534-4284-be98-c77785fa8a23\") " pod="openstack/ceilometer-0" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.752271 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4e1c51ca-6534-4284-be98-c77785fa8a23-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4e1c51ca-6534-4284-be98-c77785fa8a23\") " pod="openstack/ceilometer-0" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.766934 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e1c51ca-6534-4284-be98-c77785fa8a23-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"4e1c51ca-6534-4284-be98-c77785fa8a23\") " pod="openstack/ceilometer-0" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.769234 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlpsq\" (UniqueName: \"kubernetes.io/projected/4e1c51ca-6534-4284-be98-c77785fa8a23-kube-api-access-qlpsq\") pod \"ceilometer-0\" (UID: \"4e1c51ca-6534-4284-be98-c77785fa8a23\") " pod="openstack/ceilometer-0" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.908746 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.909442 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3e9d5917-d36c-4f58-8154-787b4a799e88","Type":"ContainerDied","Data":"52769c65a5951c0697401b21bd4eb09df02729ebb51a777fe75ed7ab95b0fd6a"} Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.909515 4754 scope.go:117] "RemoveContainer" containerID="a8a0c400c952ce14a4a25665b13f822c63ebd8894ea00abfd8c7fe28dc9372f7" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.920578 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.945815 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 05 20:31:24 crc kubenswrapper[4754]: I0105 20:31:24.982524 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 05 20:31:25 crc kubenswrapper[4754]: I0105 20:31:25.003349 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 05 20:31:25 crc kubenswrapper[4754]: I0105 20:31:25.018937 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 05 20:31:25 crc kubenswrapper[4754]: I0105 20:31:25.025847 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 05 20:31:25 crc kubenswrapper[4754]: I0105 20:31:25.033037 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 05 20:31:25 crc kubenswrapper[4754]: I0105 20:31:25.033556 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 05 20:31:25 crc kubenswrapper[4754]: I0105 20:31:25.033686 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 05 20:31:25 crc kubenswrapper[4754]: I0105 20:31:25.036582 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 05 20:31:25 crc kubenswrapper[4754]: I0105 20:31:25.156154 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8a5dad7-9efb-44e3-b042-c0ba996ee955-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a8a5dad7-9efb-44e3-b042-c0ba996ee955\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 20:31:25 crc kubenswrapper[4754]: I0105 20:31:25.156253 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/a8a5dad7-9efb-44e3-b042-c0ba996ee955-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a8a5dad7-9efb-44e3-b042-c0ba996ee955\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 20:31:25 crc kubenswrapper[4754]: I0105 20:31:25.156338 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8a5dad7-9efb-44e3-b042-c0ba996ee955-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a8a5dad7-9efb-44e3-b042-c0ba996ee955\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 20:31:25 crc kubenswrapper[4754]: I0105 20:31:25.156452 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w94n\" (UniqueName: \"kubernetes.io/projected/a8a5dad7-9efb-44e3-b042-c0ba996ee955-kube-api-access-9w94n\") pod \"nova-cell1-novncproxy-0\" (UID: \"a8a5dad7-9efb-44e3-b042-c0ba996ee955\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 20:31:25 crc kubenswrapper[4754]: I0105 20:31:25.156511 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8a5dad7-9efb-44e3-b042-c0ba996ee955-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a8a5dad7-9efb-44e3-b042-c0ba996ee955\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 20:31:25 crc kubenswrapper[4754]: I0105 20:31:25.258156 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8a5dad7-9efb-44e3-b042-c0ba996ee955-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a8a5dad7-9efb-44e3-b042-c0ba996ee955\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 20:31:25 crc kubenswrapper[4754]: I0105 20:31:25.258246 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a8a5dad7-9efb-44e3-b042-c0ba996ee955-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a8a5dad7-9efb-44e3-b042-c0ba996ee955\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 20:31:25 crc kubenswrapper[4754]: I0105 20:31:25.258496 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9w94n\" (UniqueName: \"kubernetes.io/projected/a8a5dad7-9efb-44e3-b042-c0ba996ee955-kube-api-access-9w94n\") pod \"nova-cell1-novncproxy-0\" (UID: \"a8a5dad7-9efb-44e3-b042-c0ba996ee955\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 20:31:25 crc kubenswrapper[4754]: I0105 20:31:25.258561 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8a5dad7-9efb-44e3-b042-c0ba996ee955-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a8a5dad7-9efb-44e3-b042-c0ba996ee955\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 20:31:25 crc kubenswrapper[4754]: I0105 20:31:25.258718 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8a5dad7-9efb-44e3-b042-c0ba996ee955-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a8a5dad7-9efb-44e3-b042-c0ba996ee955\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 20:31:25 crc kubenswrapper[4754]: I0105 20:31:25.263442 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8a5dad7-9efb-44e3-b042-c0ba996ee955-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a8a5dad7-9efb-44e3-b042-c0ba996ee955\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 20:31:25 crc kubenswrapper[4754]: I0105 20:31:25.263630 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8a5dad7-9efb-44e3-b042-c0ba996ee955-combined-ca-bundle\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"a8a5dad7-9efb-44e3-b042-c0ba996ee955\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 20:31:25 crc kubenswrapper[4754]: I0105 20:31:25.264835 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8a5dad7-9efb-44e3-b042-c0ba996ee955-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a8a5dad7-9efb-44e3-b042-c0ba996ee955\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 20:31:25 crc kubenswrapper[4754]: I0105 20:31:25.272625 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8a5dad7-9efb-44e3-b042-c0ba996ee955-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a8a5dad7-9efb-44e3-b042-c0ba996ee955\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 20:31:25 crc kubenswrapper[4754]: I0105 20:31:25.279733 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w94n\" (UniqueName: \"kubernetes.io/projected/a8a5dad7-9efb-44e3-b042-c0ba996ee955-kube-api-access-9w94n\") pod \"nova-cell1-novncproxy-0\" (UID: \"a8a5dad7-9efb-44e3-b042-c0ba996ee955\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 20:31:25 crc kubenswrapper[4754]: I0105 20:31:25.350724 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 05 20:31:25 crc kubenswrapper[4754]: I0105 20:31:25.612550 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e9d5917-d36c-4f58-8154-787b4a799e88" path="/var/lib/kubelet/pods/3e9d5917-d36c-4f58-8154-787b4a799e88/volumes" Jan 05 20:31:25 crc kubenswrapper[4754]: I0105 20:31:25.619569 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1" path="/var/lib/kubelet/pods/5d73f0db-d5a5-4bca-94a6-9d610ad2ecf1/volumes" Jan 05 20:31:25 crc kubenswrapper[4754]: I0105 20:31:25.862721 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:31:25 crc kubenswrapper[4754]: W0105 20:31:25.872568 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e1c51ca_6534_4284_be98_c77785fa8a23.slice/crio-31e362cc21840927a8c5a144e6ce52a85ca0a69f046d61effe6541558b9943d9 WatchSource:0}: Error finding container 31e362cc21840927a8c5a144e6ce52a85ca0a69f046d61effe6541558b9943d9: Status 404 returned error can't find the container with id 31e362cc21840927a8c5a144e6ce52a85ca0a69f046d61effe6541558b9943d9 Jan 05 20:31:25 crc kubenswrapper[4754]: I0105 20:31:25.921946 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e1c51ca-6534-4284-be98-c77785fa8a23","Type":"ContainerStarted","Data":"31e362cc21840927a8c5a144e6ce52a85ca0a69f046d61effe6541558b9943d9"} Jan 05 20:31:26 crc kubenswrapper[4754]: W0105 20:31:26.119511 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8a5dad7_9efb_44e3_b042_c0ba996ee955.slice/crio-5678a68a578a13fd7c9255b1447a51371458a97bb45a4b1fd3244315a7fc3296 WatchSource:0}: Error finding container 5678a68a578a13fd7c9255b1447a51371458a97bb45a4b1fd3244315a7fc3296: Status 404 returned error 
can't find the container with id 5678a68a578a13fd7c9255b1447a51371458a97bb45a4b1fd3244315a7fc3296 Jan 05 20:31:26 crc kubenswrapper[4754]: I0105 20:31:26.123039 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 05 20:31:26 crc kubenswrapper[4754]: I0105 20:31:26.940877 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a8a5dad7-9efb-44e3-b042-c0ba996ee955","Type":"ContainerStarted","Data":"1b190d24485e26e6d58b2078890c5c7a49cd1761ad07100d32c987415ced95c7"} Jan 05 20:31:26 crc kubenswrapper[4754]: I0105 20:31:26.941216 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a8a5dad7-9efb-44e3-b042-c0ba996ee955","Type":"ContainerStarted","Data":"5678a68a578a13fd7c9255b1447a51371458a97bb45a4b1fd3244315a7fc3296"} Jan 05 20:31:26 crc kubenswrapper[4754]: I0105 20:31:26.965679 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="312545aa-f35e-4c53-89e7-38ecc1d4827f" containerName="aodh-api" containerID="cri-o://8fa3d8567631906e1958b040d2b158b3408cd04c47864372f36649e48dd49e3e" gracePeriod=30 Jan 05 20:31:26 crc kubenswrapper[4754]: I0105 20:31:26.967118 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"312545aa-f35e-4c53-89e7-38ecc1d4827f","Type":"ContainerStarted","Data":"36ebddc927110f121c1bfe084736ed7990d55bc8423b5493ddc688a03333ea56"} Jan 05 20:31:26 crc kubenswrapper[4754]: I0105 20:31:26.967722 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="312545aa-f35e-4c53-89e7-38ecc1d4827f" containerName="aodh-listener" containerID="cri-o://36ebddc927110f121c1bfe084736ed7990d55bc8423b5493ddc688a03333ea56" gracePeriod=30 Jan 05 20:31:26 crc kubenswrapper[4754]: I0105 20:31:26.967836 4754 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/aodh-0" podUID="312545aa-f35e-4c53-89e7-38ecc1d4827f" containerName="aodh-notifier" containerID="cri-o://09d9d571041847d3276c3bc0fcc994ac00235350edef6c0aa418786fe6ef047b" gracePeriod=30 Jan 05 20:31:26 crc kubenswrapper[4754]: I0105 20:31:26.967924 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="312545aa-f35e-4c53-89e7-38ecc1d4827f" containerName="aodh-evaluator" containerID="cri-o://51ff30be97a60df4f3bc888a61938ef26c7a5c98a3b1c62a880d7a71634afd3e" gracePeriod=30 Jan 05 20:31:26 crc kubenswrapper[4754]: I0105 20:31:26.969441 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.969428536 podStartE2EDuration="2.969428536s" podCreationTimestamp="2026-01-05 20:31:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:31:26.962116424 +0000 UTC m=+1573.671300298" watchObservedRunningTime="2026-01-05 20:31:26.969428536 +0000 UTC m=+1573.678612400" Jan 05 20:31:26 crc kubenswrapper[4754]: I0105 20:31:26.985052 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 05 20:31:26 crc kubenswrapper[4754]: I0105 20:31:26.985595 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 05 20:31:27 crc kubenswrapper[4754]: I0105 20:31:26.990470 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 05 20:31:27 crc kubenswrapper[4754]: I0105 20:31:27.003563 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 05 20:31:27 crc kubenswrapper[4754]: I0105 20:31:27.049211 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=3.299193966 podStartE2EDuration="11.049191561s" 
podCreationTimestamp="2026-01-05 20:31:16 +0000 UTC" firstStartedPulling="2026-01-05 20:31:17.937354058 +0000 UTC m=+1564.646537932" lastFinishedPulling="2026-01-05 20:31:25.687351653 +0000 UTC m=+1572.396535527" observedRunningTime="2026-01-05 20:31:26.990773417 +0000 UTC m=+1573.699957301" watchObservedRunningTime="2026-01-05 20:31:27.049191561 +0000 UTC m=+1573.758375435" Jan 05 20:31:27 crc kubenswrapper[4754]: I0105 20:31:27.976671 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e1c51ca-6534-4284-be98-c77785fa8a23","Type":"ContainerStarted","Data":"97fc31e8e754942929e94796c1795c10b759f1427e9014298d78e9d6dad897bb"} Jan 05 20:31:27 crc kubenswrapper[4754]: I0105 20:31:27.977456 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e1c51ca-6534-4284-be98-c77785fa8a23","Type":"ContainerStarted","Data":"4cbbdb29b69cfdc13897a57c594f001e1b00571c38b916360745f5874199d8f6"} Jan 05 20:31:27 crc kubenswrapper[4754]: I0105 20:31:27.980433 4754 generic.go:334] "Generic (PLEG): container finished" podID="312545aa-f35e-4c53-89e7-38ecc1d4827f" containerID="51ff30be97a60df4f3bc888a61938ef26c7a5c98a3b1c62a880d7a71634afd3e" exitCode=0 Jan 05 20:31:27 crc kubenswrapper[4754]: I0105 20:31:27.980464 4754 generic.go:334] "Generic (PLEG): container finished" podID="312545aa-f35e-4c53-89e7-38ecc1d4827f" containerID="8fa3d8567631906e1958b040d2b158b3408cd04c47864372f36649e48dd49e3e" exitCode=0 Jan 05 20:31:27 crc kubenswrapper[4754]: I0105 20:31:27.981543 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"312545aa-f35e-4c53-89e7-38ecc1d4827f","Type":"ContainerDied","Data":"51ff30be97a60df4f3bc888a61938ef26c7a5c98a3b1c62a880d7a71634afd3e"} Jan 05 20:31:27 crc kubenswrapper[4754]: I0105 20:31:27.981574 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"312545aa-f35e-4c53-89e7-38ecc1d4827f","Type":"ContainerDied","Data":"8fa3d8567631906e1958b040d2b158b3408cd04c47864372f36649e48dd49e3e"} Jan 05 20:31:27 crc kubenswrapper[4754]: I0105 20:31:27.982609 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 05 20:31:27 crc kubenswrapper[4754]: I0105 20:31:27.989431 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 05 20:31:28 crc kubenswrapper[4754]: I0105 20:31:28.251055 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-ddt4h"] Jan 05 20:31:28 crc kubenswrapper[4754]: I0105 20:31:28.252888 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d99f6bc7f-ddt4h" Jan 05 20:31:28 crc kubenswrapper[4754]: I0105 20:31:28.293057 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-ddt4h"] Jan 05 20:31:28 crc kubenswrapper[4754]: I0105 20:31:28.354711 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bflln\" (UniqueName: \"kubernetes.io/projected/31e86022-3de4-4405-9bf6-47646f8368e5-kube-api-access-bflln\") pod \"dnsmasq-dns-6d99f6bc7f-ddt4h\" (UID: \"31e86022-3de4-4405-9bf6-47646f8368e5\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-ddt4h" Jan 05 20:31:28 crc kubenswrapper[4754]: I0105 20:31:28.354764 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/31e86022-3de4-4405-9bf6-47646f8368e5-ovsdbserver-sb\") pod \"dnsmasq-dns-6d99f6bc7f-ddt4h\" (UID: \"31e86022-3de4-4405-9bf6-47646f8368e5\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-ddt4h" Jan 05 20:31:28 crc kubenswrapper[4754]: I0105 20:31:28.354997 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/31e86022-3de4-4405-9bf6-47646f8368e5-dns-swift-storage-0\") pod \"dnsmasq-dns-6d99f6bc7f-ddt4h\" (UID: \"31e86022-3de4-4405-9bf6-47646f8368e5\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-ddt4h" Jan 05 20:31:28 crc kubenswrapper[4754]: I0105 20:31:28.355110 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31e86022-3de4-4405-9bf6-47646f8368e5-config\") pod \"dnsmasq-dns-6d99f6bc7f-ddt4h\" (UID: \"31e86022-3de4-4405-9bf6-47646f8368e5\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-ddt4h" Jan 05 20:31:28 crc kubenswrapper[4754]: I0105 20:31:28.355234 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/31e86022-3de4-4405-9bf6-47646f8368e5-ovsdbserver-nb\") pod \"dnsmasq-dns-6d99f6bc7f-ddt4h\" (UID: \"31e86022-3de4-4405-9bf6-47646f8368e5\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-ddt4h" Jan 05 20:31:28 crc kubenswrapper[4754]: I0105 20:31:28.355348 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31e86022-3de4-4405-9bf6-47646f8368e5-dns-svc\") pod \"dnsmasq-dns-6d99f6bc7f-ddt4h\" (UID: \"31e86022-3de4-4405-9bf6-47646f8368e5\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-ddt4h" Jan 05 20:31:28 crc kubenswrapper[4754]: I0105 20:31:28.456976 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31e86022-3de4-4405-9bf6-47646f8368e5-dns-svc\") pod \"dnsmasq-dns-6d99f6bc7f-ddt4h\" (UID: \"31e86022-3de4-4405-9bf6-47646f8368e5\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-ddt4h" Jan 05 20:31:28 crc kubenswrapper[4754]: I0105 20:31:28.457063 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bflln\" (UniqueName: 
\"kubernetes.io/projected/31e86022-3de4-4405-9bf6-47646f8368e5-kube-api-access-bflln\") pod \"dnsmasq-dns-6d99f6bc7f-ddt4h\" (UID: \"31e86022-3de4-4405-9bf6-47646f8368e5\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-ddt4h" Jan 05 20:31:28 crc kubenswrapper[4754]: I0105 20:31:28.457091 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/31e86022-3de4-4405-9bf6-47646f8368e5-ovsdbserver-sb\") pod \"dnsmasq-dns-6d99f6bc7f-ddt4h\" (UID: \"31e86022-3de4-4405-9bf6-47646f8368e5\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-ddt4h" Jan 05 20:31:28 crc kubenswrapper[4754]: I0105 20:31:28.457165 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/31e86022-3de4-4405-9bf6-47646f8368e5-dns-swift-storage-0\") pod \"dnsmasq-dns-6d99f6bc7f-ddt4h\" (UID: \"31e86022-3de4-4405-9bf6-47646f8368e5\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-ddt4h" Jan 05 20:31:28 crc kubenswrapper[4754]: I0105 20:31:28.457202 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31e86022-3de4-4405-9bf6-47646f8368e5-config\") pod \"dnsmasq-dns-6d99f6bc7f-ddt4h\" (UID: \"31e86022-3de4-4405-9bf6-47646f8368e5\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-ddt4h" Jan 05 20:31:28 crc kubenswrapper[4754]: I0105 20:31:28.457254 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/31e86022-3de4-4405-9bf6-47646f8368e5-ovsdbserver-nb\") pod \"dnsmasq-dns-6d99f6bc7f-ddt4h\" (UID: \"31e86022-3de4-4405-9bf6-47646f8368e5\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-ddt4h" Jan 05 20:31:28 crc kubenswrapper[4754]: I0105 20:31:28.458074 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/31e86022-3de4-4405-9bf6-47646f8368e5-ovsdbserver-nb\") pod \"dnsmasq-dns-6d99f6bc7f-ddt4h\" (UID: \"31e86022-3de4-4405-9bf6-47646f8368e5\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-ddt4h" Jan 05 20:31:28 crc kubenswrapper[4754]: I0105 20:31:28.458694 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/31e86022-3de4-4405-9bf6-47646f8368e5-dns-swift-storage-0\") pod \"dnsmasq-dns-6d99f6bc7f-ddt4h\" (UID: \"31e86022-3de4-4405-9bf6-47646f8368e5\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-ddt4h" Jan 05 20:31:28 crc kubenswrapper[4754]: I0105 20:31:28.458756 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/31e86022-3de4-4405-9bf6-47646f8368e5-ovsdbserver-sb\") pod \"dnsmasq-dns-6d99f6bc7f-ddt4h\" (UID: \"31e86022-3de4-4405-9bf6-47646f8368e5\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-ddt4h" Jan 05 20:31:28 crc kubenswrapper[4754]: I0105 20:31:28.459203 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31e86022-3de4-4405-9bf6-47646f8368e5-config\") pod \"dnsmasq-dns-6d99f6bc7f-ddt4h\" (UID: \"31e86022-3de4-4405-9bf6-47646f8368e5\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-ddt4h" Jan 05 20:31:28 crc kubenswrapper[4754]: I0105 20:31:28.459326 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31e86022-3de4-4405-9bf6-47646f8368e5-dns-svc\") pod \"dnsmasq-dns-6d99f6bc7f-ddt4h\" (UID: \"31e86022-3de4-4405-9bf6-47646f8368e5\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-ddt4h" Jan 05 20:31:28 crc kubenswrapper[4754]: I0105 20:31:28.499050 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bflln\" (UniqueName: \"kubernetes.io/projected/31e86022-3de4-4405-9bf6-47646f8368e5-kube-api-access-bflln\") pod 
\"dnsmasq-dns-6d99f6bc7f-ddt4h\" (UID: \"31e86022-3de4-4405-9bf6-47646f8368e5\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-ddt4h" Jan 05 20:31:28 crc kubenswrapper[4754]: I0105 20:31:28.576981 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d99f6bc7f-ddt4h" Jan 05 20:31:28 crc kubenswrapper[4754]: I0105 20:31:28.594229 4754 scope.go:117] "RemoveContainer" containerID="1ca93b3e456962380f07853ce3cbf0c6bfbaabb69bd3827ae3e68631c1bd1572" Jan 05 20:31:28 crc kubenswrapper[4754]: E0105 20:31:28.594536 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:31:29 crc kubenswrapper[4754]: W0105 20:31:29.234232 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31e86022_3de4_4405_9bf6_47646f8368e5.slice/crio-afaf7edb386f95deb1953fe8b6de62e1042e1dee669b0c2b0f9c35ff873bd71c WatchSource:0}: Error finding container afaf7edb386f95deb1953fe8b6de62e1042e1dee669b0c2b0f9c35ff873bd71c: Status 404 returned error can't find the container with id afaf7edb386f95deb1953fe8b6de62e1042e1dee669b0c2b0f9c35ff873bd71c Jan 05 20:31:29 crc kubenswrapper[4754]: I0105 20:31:29.238195 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-ddt4h"] Jan 05 20:31:29 crc kubenswrapper[4754]: E0105 20:31:29.762432 4754 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cce9715_2736_4de9_978c_355b04462cc6.slice\": RecentStats: unable to 
find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cce9715_2736_4de9_978c_355b04462cc6.slice/crio-e6642792859985d8c56d377f3c920ca0c662f56bf30c45d48e24d890bd975121\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31e86022_3de4_4405_9bf6_47646f8368e5.slice/crio-conmon-c68dd6bb4403a7dd5c45eee54b98a2261e90ff4bb0c15f5beb8025d32683df6e.scope\": RecentStats: unable to find data in memory cache]" Jan 05 20:31:30 crc kubenswrapper[4754]: I0105 20:31:30.054748 4754 generic.go:334] "Generic (PLEG): container finished" podID="31e86022-3de4-4405-9bf6-47646f8368e5" containerID="c68dd6bb4403a7dd5c45eee54b98a2261e90ff4bb0c15f5beb8025d32683df6e" exitCode=0 Jan 05 20:31:30 crc kubenswrapper[4754]: I0105 20:31:30.054809 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-ddt4h" event={"ID":"31e86022-3de4-4405-9bf6-47646f8368e5","Type":"ContainerDied","Data":"c68dd6bb4403a7dd5c45eee54b98a2261e90ff4bb0c15f5beb8025d32683df6e"} Jan 05 20:31:30 crc kubenswrapper[4754]: I0105 20:31:30.054834 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-ddt4h" event={"ID":"31e86022-3de4-4405-9bf6-47646f8368e5","Type":"ContainerStarted","Data":"afaf7edb386f95deb1953fe8b6de62e1042e1dee669b0c2b0f9c35ff873bd71c"} Jan 05 20:31:30 crc kubenswrapper[4754]: I0105 20:31:30.061197 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e1c51ca-6534-4284-be98-c77785fa8a23","Type":"ContainerStarted","Data":"c8bbd22eeda1455bdc17b6f02ace05b5438b349ac596f0237f2be030861a6515"} Jan 05 20:31:30 crc kubenswrapper[4754]: I0105 20:31:30.350900 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 05 20:31:30 crc kubenswrapper[4754]: I0105 20:31:30.797804 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-api-0"] Jan 05 20:31:31 crc kubenswrapper[4754]: I0105 20:31:31.075890 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-ddt4h" event={"ID":"31e86022-3de4-4405-9bf6-47646f8368e5","Type":"ContainerStarted","Data":"b41223680ee5445b5d6098152dd02cdc34889c9390673a3074029a838475e1a6"} Jan 05 20:31:31 crc kubenswrapper[4754]: I0105 20:31:31.077139 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d99f6bc7f-ddt4h" Jan 05 20:31:31 crc kubenswrapper[4754]: I0105 20:31:31.082793 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e1c51ca-6534-4284-be98-c77785fa8a23","Type":"ContainerStarted","Data":"a9ad0da74ce2a2c7b070390c4ee8f3d2377c49dfa6d21804890fef446a5f283e"} Jan 05 20:31:31 crc kubenswrapper[4754]: I0105 20:31:31.082967 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1753ceb9-64e3-4fb9-b33c-d39212c708a2" containerName="nova-api-api" containerID="cri-o://502b9f46682cd2e8a6730318a3cbbc1486c82b3e69a37d9af1234a31be2e8082" gracePeriod=30 Jan 05 20:31:31 crc kubenswrapper[4754]: I0105 20:31:31.082960 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1753ceb9-64e3-4fb9-b33c-d39212c708a2" containerName="nova-api-log" containerID="cri-o://584892036e205a2c24afbeda4306ae9014bbfd5746f4d3d71d4150b595be2dbd" gracePeriod=30 Jan 05 20:31:31 crc kubenswrapper[4754]: I0105 20:31:31.084021 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 05 20:31:31 crc kubenswrapper[4754]: I0105 20:31:31.112207 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d99f6bc7f-ddt4h" podStartSLOduration=3.112184374 podStartE2EDuration="3.112184374s" podCreationTimestamp="2026-01-05 20:31:28 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:31:31.101763421 +0000 UTC m=+1577.810947295" watchObservedRunningTime="2026-01-05 20:31:31.112184374 +0000 UTC m=+1577.821368248" Jan 05 20:31:31 crc kubenswrapper[4754]: I0105 20:31:31.142168 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.631575286 podStartE2EDuration="7.142145681s" podCreationTimestamp="2026-01-05 20:31:24 +0000 UTC" firstStartedPulling="2026-01-05 20:31:25.874845196 +0000 UTC m=+1572.584029070" lastFinishedPulling="2026-01-05 20:31:30.385415591 +0000 UTC m=+1577.094599465" observedRunningTime="2026-01-05 20:31:31.131531052 +0000 UTC m=+1577.840714916" watchObservedRunningTime="2026-01-05 20:31:31.142145681 +0000 UTC m=+1577.851329555" Jan 05 20:31:31 crc kubenswrapper[4754]: I0105 20:31:31.677607 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:31:32 crc kubenswrapper[4754]: I0105 20:31:32.093355 4754 generic.go:334] "Generic (PLEG): container finished" podID="1753ceb9-64e3-4fb9-b33c-d39212c708a2" containerID="584892036e205a2c24afbeda4306ae9014bbfd5746f4d3d71d4150b595be2dbd" exitCode=143 Jan 05 20:31:32 crc kubenswrapper[4754]: I0105 20:31:32.093417 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1753ceb9-64e3-4fb9-b33c-d39212c708a2","Type":"ContainerDied","Data":"584892036e205a2c24afbeda4306ae9014bbfd5746f4d3d71d4150b595be2dbd"} Jan 05 20:31:33 crc kubenswrapper[4754]: I0105 20:31:33.105316 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4e1c51ca-6534-4284-be98-c77785fa8a23" containerName="ceilometer-central-agent" containerID="cri-o://4cbbdb29b69cfdc13897a57c594f001e1b00571c38b916360745f5874199d8f6" gracePeriod=30 Jan 05 20:31:33 crc kubenswrapper[4754]: I0105 20:31:33.105524 4754 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4e1c51ca-6534-4284-be98-c77785fa8a23" containerName="sg-core" containerID="cri-o://c8bbd22eeda1455bdc17b6f02ace05b5438b349ac596f0237f2be030861a6515" gracePeriod=30 Jan 05 20:31:33 crc kubenswrapper[4754]: I0105 20:31:33.105577 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4e1c51ca-6534-4284-be98-c77785fa8a23" containerName="proxy-httpd" containerID="cri-o://a9ad0da74ce2a2c7b070390c4ee8f3d2377c49dfa6d21804890fef446a5f283e" gracePeriod=30 Jan 05 20:31:33 crc kubenswrapper[4754]: I0105 20:31:33.105617 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4e1c51ca-6534-4284-be98-c77785fa8a23" containerName="ceilometer-notification-agent" containerID="cri-o://97fc31e8e754942929e94796c1795c10b759f1427e9014298d78e9d6dad897bb" gracePeriod=30 Jan 05 20:31:33 crc kubenswrapper[4754]: E0105 20:31:33.638919 4754 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cce9715_2736_4de9_978c_355b04462cc6.slice/crio-e6642792859985d8c56d377f3c920ca0c662f56bf30c45d48e24d890bd975121\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cce9715_2736_4de9_978c_355b04462cc6.slice\": RecentStats: unable to find data in memory cache]" Jan 05 20:31:34 crc kubenswrapper[4754]: I0105 20:31:34.124389 4754 generic.go:334] "Generic (PLEG): container finished" podID="4e1c51ca-6534-4284-be98-c77785fa8a23" containerID="a9ad0da74ce2a2c7b070390c4ee8f3d2377c49dfa6d21804890fef446a5f283e" exitCode=0 Jan 05 20:31:34 crc kubenswrapper[4754]: I0105 20:31:34.124446 4754 generic.go:334] "Generic (PLEG): container finished" podID="4e1c51ca-6534-4284-be98-c77785fa8a23" 
containerID="c8bbd22eeda1455bdc17b6f02ace05b5438b349ac596f0237f2be030861a6515" exitCode=2 Jan 05 20:31:34 crc kubenswrapper[4754]: I0105 20:31:34.124462 4754 generic.go:334] "Generic (PLEG): container finished" podID="4e1c51ca-6534-4284-be98-c77785fa8a23" containerID="97fc31e8e754942929e94796c1795c10b759f1427e9014298d78e9d6dad897bb" exitCode=0 Jan 05 20:31:34 crc kubenswrapper[4754]: I0105 20:31:34.124499 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e1c51ca-6534-4284-be98-c77785fa8a23","Type":"ContainerDied","Data":"a9ad0da74ce2a2c7b070390c4ee8f3d2377c49dfa6d21804890fef446a5f283e"} Jan 05 20:31:34 crc kubenswrapper[4754]: I0105 20:31:34.124731 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e1c51ca-6534-4284-be98-c77785fa8a23","Type":"ContainerDied","Data":"c8bbd22eeda1455bdc17b6f02ace05b5438b349ac596f0237f2be030861a6515"} Jan 05 20:31:34 crc kubenswrapper[4754]: I0105 20:31:34.124755 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e1c51ca-6534-4284-be98-c77785fa8a23","Type":"ContainerDied","Data":"97fc31e8e754942929e94796c1795c10b759f1427e9014298d78e9d6dad897bb"} Jan 05 20:31:34 crc kubenswrapper[4754]: I0105 20:31:34.901131 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 05 20:31:34 crc kubenswrapper[4754]: I0105 20:31:34.980052 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e1c51ca-6534-4284-be98-c77785fa8a23-config-data\") pod \"4e1c51ca-6534-4284-be98-c77785fa8a23\" (UID: \"4e1c51ca-6534-4284-be98-c77785fa8a23\") " Jan 05 20:31:34 crc kubenswrapper[4754]: I0105 20:31:34.980467 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e1c51ca-6534-4284-be98-c77785fa8a23-log-httpd\") pod \"4e1c51ca-6534-4284-be98-c77785fa8a23\" (UID: \"4e1c51ca-6534-4284-be98-c77785fa8a23\") " Jan 05 20:31:34 crc kubenswrapper[4754]: I0105 20:31:34.980519 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e1c51ca-6534-4284-be98-c77785fa8a23-scripts\") pod \"4e1c51ca-6534-4284-be98-c77785fa8a23\" (UID: \"4e1c51ca-6534-4284-be98-c77785fa8a23\") " Jan 05 20:31:34 crc kubenswrapper[4754]: I0105 20:31:34.980649 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlpsq\" (UniqueName: \"kubernetes.io/projected/4e1c51ca-6534-4284-be98-c77785fa8a23-kube-api-access-qlpsq\") pod \"4e1c51ca-6534-4284-be98-c77785fa8a23\" (UID: \"4e1c51ca-6534-4284-be98-c77785fa8a23\") " Jan 05 20:31:34 crc kubenswrapper[4754]: I0105 20:31:34.980710 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e1c51ca-6534-4284-be98-c77785fa8a23-run-httpd\") pod \"4e1c51ca-6534-4284-be98-c77785fa8a23\" (UID: \"4e1c51ca-6534-4284-be98-c77785fa8a23\") " Jan 05 20:31:34 crc kubenswrapper[4754]: I0105 20:31:34.980811 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4e1c51ca-6534-4284-be98-c77785fa8a23-combined-ca-bundle\") pod \"4e1c51ca-6534-4284-be98-c77785fa8a23\" (UID: \"4e1c51ca-6534-4284-be98-c77785fa8a23\") " Jan 05 20:31:34 crc kubenswrapper[4754]: I0105 20:31:34.980832 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4e1c51ca-6534-4284-be98-c77785fa8a23-sg-core-conf-yaml\") pod \"4e1c51ca-6534-4284-be98-c77785fa8a23\" (UID: \"4e1c51ca-6534-4284-be98-c77785fa8a23\") " Jan 05 20:31:34 crc kubenswrapper[4754]: I0105 20:31:34.983100 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e1c51ca-6534-4284-be98-c77785fa8a23-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4e1c51ca-6534-4284-be98-c77785fa8a23" (UID: "4e1c51ca-6534-4284-be98-c77785fa8a23"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:31:34 crc kubenswrapper[4754]: I0105 20:31:34.983597 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e1c51ca-6534-4284-be98-c77785fa8a23-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4e1c51ca-6534-4284-be98-c77785fa8a23" (UID: "4e1c51ca-6534-4284-be98-c77785fa8a23"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:31:34 crc kubenswrapper[4754]: I0105 20:31:34.989180 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e1c51ca-6534-4284-be98-c77785fa8a23-scripts" (OuterVolumeSpecName: "scripts") pod "4e1c51ca-6534-4284-be98-c77785fa8a23" (UID: "4e1c51ca-6534-4284-be98-c77785fa8a23"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:31:34 crc kubenswrapper[4754]: I0105 20:31:34.989470 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e1c51ca-6534-4284-be98-c77785fa8a23-kube-api-access-qlpsq" (OuterVolumeSpecName: "kube-api-access-qlpsq") pod "4e1c51ca-6534-4284-be98-c77785fa8a23" (UID: "4e1c51ca-6534-4284-be98-c77785fa8a23"). InnerVolumeSpecName "kube-api-access-qlpsq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.018265 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e1c51ca-6534-4284-be98-c77785fa8a23-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4e1c51ca-6534-4284-be98-c77785fa8a23" (UID: "4e1c51ca-6534-4284-be98-c77785fa8a23"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.067154 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.083098 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e1c51ca-6534-4284-be98-c77785fa8a23-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e1c51ca-6534-4284-be98-c77785fa8a23" (UID: "4e1c51ca-6534-4284-be98-c77785fa8a23"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.089187 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1753ceb9-64e3-4fb9-b33c-d39212c708a2-combined-ca-bundle\") pod \"1753ceb9-64e3-4fb9-b33c-d39212c708a2\" (UID: \"1753ceb9-64e3-4fb9-b33c-d39212c708a2\") " Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.089352 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1753ceb9-64e3-4fb9-b33c-d39212c708a2-config-data\") pod \"1753ceb9-64e3-4fb9-b33c-d39212c708a2\" (UID: \"1753ceb9-64e3-4fb9-b33c-d39212c708a2\") " Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.089611 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6lfb\" (UniqueName: \"kubernetes.io/projected/1753ceb9-64e3-4fb9-b33c-d39212c708a2-kube-api-access-g6lfb\") pod \"1753ceb9-64e3-4fb9-b33c-d39212c708a2\" (UID: \"1753ceb9-64e3-4fb9-b33c-d39212c708a2\") " Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.089806 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1753ceb9-64e3-4fb9-b33c-d39212c708a2-logs\") pod \"1753ceb9-64e3-4fb9-b33c-d39212c708a2\" (UID: \"1753ceb9-64e3-4fb9-b33c-d39212c708a2\") " Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.090498 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1753ceb9-64e3-4fb9-b33c-d39212c708a2-logs" (OuterVolumeSpecName: "logs") pod "1753ceb9-64e3-4fb9-b33c-d39212c708a2" (UID: "1753ceb9-64e3-4fb9-b33c-d39212c708a2"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.095318 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1753ceb9-64e3-4fb9-b33c-d39212c708a2-kube-api-access-g6lfb" (OuterVolumeSpecName: "kube-api-access-g6lfb") pod "1753ceb9-64e3-4fb9-b33c-d39212c708a2" (UID: "1753ceb9-64e3-4fb9-b33c-d39212c708a2"). InnerVolumeSpecName "kube-api-access-g6lfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.102585 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlpsq\" (UniqueName: \"kubernetes.io/projected/4e1c51ca-6534-4284-be98-c77785fa8a23-kube-api-access-qlpsq\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.102661 4754 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e1c51ca-6534-4284-be98-c77785fa8a23-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.102682 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e1c51ca-6534-4284-be98-c77785fa8a23-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.102777 4754 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4e1c51ca-6534-4284-be98-c77785fa8a23-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.102797 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6lfb\" (UniqueName: \"kubernetes.io/projected/1753ceb9-64e3-4fb9-b33c-d39212c708a2-kube-api-access-g6lfb\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.102846 4754 reconciler_common.go:293] "Volume detached for 
volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e1c51ca-6534-4284-be98-c77785fa8a23-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.102860 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e1c51ca-6534-4284-be98-c77785fa8a23-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.102871 4754 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1753ceb9-64e3-4fb9-b33c-d39212c708a2-logs\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:35 crc kubenswrapper[4754]: E0105 20:31:35.134357 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1753ceb9-64e3-4fb9-b33c-d39212c708a2-combined-ca-bundle podName:1753ceb9-64e3-4fb9-b33c-d39212c708a2 nodeName:}" failed. No retries permitted until 2026-01-05 20:31:35.634323835 +0000 UTC m=+1582.343507709 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "combined-ca-bundle" (UniqueName: "kubernetes.io/secret/1753ceb9-64e3-4fb9-b33c-d39212c708a2-combined-ca-bundle") pod "1753ceb9-64e3-4fb9-b33c-d39212c708a2" (UID: "1753ceb9-64e3-4fb9-b33c-d39212c708a2") : error deleting /var/lib/kubelet/pods/1753ceb9-64e3-4fb9-b33c-d39212c708a2/volume-subpaths: remove /var/lib/kubelet/pods/1753ceb9-64e3-4fb9-b33c-d39212c708a2/volume-subpaths: no such file or directory Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.138783 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e1c51ca-6534-4284-be98-c77785fa8a23-config-data" (OuterVolumeSpecName: "config-data") pod "4e1c51ca-6534-4284-be98-c77785fa8a23" (UID: "4e1c51ca-6534-4284-be98-c77785fa8a23"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.140506 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1753ceb9-64e3-4fb9-b33c-d39212c708a2-config-data" (OuterVolumeSpecName: "config-data") pod "1753ceb9-64e3-4fb9-b33c-d39212c708a2" (UID: "1753ceb9-64e3-4fb9-b33c-d39212c708a2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.151027 4754 generic.go:334] "Generic (PLEG): container finished" podID="4e1c51ca-6534-4284-be98-c77785fa8a23" containerID="4cbbdb29b69cfdc13897a57c594f001e1b00571c38b916360745f5874199d8f6" exitCode=0 Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.151098 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e1c51ca-6534-4284-be98-c77785fa8a23","Type":"ContainerDied","Data":"4cbbdb29b69cfdc13897a57c594f001e1b00571c38b916360745f5874199d8f6"} Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.151128 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e1c51ca-6534-4284-be98-c77785fa8a23","Type":"ContainerDied","Data":"31e362cc21840927a8c5a144e6ce52a85ca0a69f046d61effe6541558b9943d9"} Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.151145 4754 scope.go:117] "RemoveContainer" containerID="a9ad0da74ce2a2c7b070390c4ee8f3d2377c49dfa6d21804890fef446a5f283e" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.151309 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.156560 4754 generic.go:334] "Generic (PLEG): container finished" podID="1753ceb9-64e3-4fb9-b33c-d39212c708a2" containerID="502b9f46682cd2e8a6730318a3cbbc1486c82b3e69a37d9af1234a31be2e8082" exitCode=0 Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.156670 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1753ceb9-64e3-4fb9-b33c-d39212c708a2","Type":"ContainerDied","Data":"502b9f46682cd2e8a6730318a3cbbc1486c82b3e69a37d9af1234a31be2e8082"} Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.156737 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1753ceb9-64e3-4fb9-b33c-d39212c708a2","Type":"ContainerDied","Data":"7f66e905c9ea508c797bb0043e0a5a708aca804b86d2b5608970f329c3f8db80"} Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.156862 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.177749 4754 scope.go:117] "RemoveContainer" containerID="c8bbd22eeda1455bdc17b6f02ace05b5438b349ac596f0237f2be030861a6515" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.202866 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.204766 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e1c51ca-6534-4284-be98-c77785fa8a23-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.204857 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1753ceb9-64e3-4fb9-b33c-d39212c708a2-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.207127 4754 scope.go:117] "RemoveContainer" containerID="97fc31e8e754942929e94796c1795c10b759f1427e9014298d78e9d6dad897bb" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.218762 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.232967 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:31:35 crc kubenswrapper[4754]: E0105 20:31:35.233543 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e1c51ca-6534-4284-be98-c77785fa8a23" containerName="ceilometer-notification-agent" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.233558 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e1c51ca-6534-4284-be98-c77785fa8a23" containerName="ceilometer-notification-agent" Jan 05 20:31:35 crc kubenswrapper[4754]: E0105 20:31:35.233573 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1753ceb9-64e3-4fb9-b33c-d39212c708a2" containerName="nova-api-log" Jan 
05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.233581 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="1753ceb9-64e3-4fb9-b33c-d39212c708a2" containerName="nova-api-log" Jan 05 20:31:35 crc kubenswrapper[4754]: E0105 20:31:35.233588 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1753ceb9-64e3-4fb9-b33c-d39212c708a2" containerName="nova-api-api" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.233595 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="1753ceb9-64e3-4fb9-b33c-d39212c708a2" containerName="nova-api-api" Jan 05 20:31:35 crc kubenswrapper[4754]: E0105 20:31:35.233606 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e1c51ca-6534-4284-be98-c77785fa8a23" containerName="proxy-httpd" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.233612 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e1c51ca-6534-4284-be98-c77785fa8a23" containerName="proxy-httpd" Jan 05 20:31:35 crc kubenswrapper[4754]: E0105 20:31:35.233626 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e1c51ca-6534-4284-be98-c77785fa8a23" containerName="sg-core" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.233632 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e1c51ca-6534-4284-be98-c77785fa8a23" containerName="sg-core" Jan 05 20:31:35 crc kubenswrapper[4754]: E0105 20:31:35.233645 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e1c51ca-6534-4284-be98-c77785fa8a23" containerName="ceilometer-central-agent" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.233651 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e1c51ca-6534-4284-be98-c77785fa8a23" containerName="ceilometer-central-agent" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.233865 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e1c51ca-6534-4284-be98-c77785fa8a23" containerName="proxy-httpd" Jan 05 20:31:35 crc 
kubenswrapper[4754]: I0105 20:31:35.233881 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e1c51ca-6534-4284-be98-c77785fa8a23" containerName="sg-core" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.233894 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e1c51ca-6534-4284-be98-c77785fa8a23" containerName="ceilometer-notification-agent" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.233902 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="1753ceb9-64e3-4fb9-b33c-d39212c708a2" containerName="nova-api-log" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.233910 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e1c51ca-6534-4284-be98-c77785fa8a23" containerName="ceilometer-central-agent" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.233924 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="1753ceb9-64e3-4fb9-b33c-d39212c708a2" containerName="nova-api-api" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.235864 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.240367 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.242047 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.243764 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.306961 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9914fc23-56e8-4362-afcb-aee0b969c368-run-httpd\") pod \"ceilometer-0\" (UID: \"9914fc23-56e8-4362-afcb-aee0b969c368\") " pod="openstack/ceilometer-0" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.307066 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9914fc23-56e8-4362-afcb-aee0b969c368-config-data\") pod \"ceilometer-0\" (UID: \"9914fc23-56e8-4362-afcb-aee0b969c368\") " pod="openstack/ceilometer-0" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.307090 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9914fc23-56e8-4362-afcb-aee0b969c368-log-httpd\") pod \"ceilometer-0\" (UID: \"9914fc23-56e8-4362-afcb-aee0b969c368\") " pod="openstack/ceilometer-0" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.307139 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9914fc23-56e8-4362-afcb-aee0b969c368-scripts\") pod \"ceilometer-0\" (UID: \"9914fc23-56e8-4362-afcb-aee0b969c368\") " pod="openstack/ceilometer-0" Jan 05 
20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.307163 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmpmw\" (UniqueName: \"kubernetes.io/projected/9914fc23-56e8-4362-afcb-aee0b969c368-kube-api-access-zmpmw\") pod \"ceilometer-0\" (UID: \"9914fc23-56e8-4362-afcb-aee0b969c368\") " pod="openstack/ceilometer-0" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.307208 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9914fc23-56e8-4362-afcb-aee0b969c368-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9914fc23-56e8-4362-afcb-aee0b969c368\") " pod="openstack/ceilometer-0" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.307215 4754 scope.go:117] "RemoveContainer" containerID="4cbbdb29b69cfdc13897a57c594f001e1b00571c38b916360745f5874199d8f6" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.307346 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9914fc23-56e8-4362-afcb-aee0b969c368-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9914fc23-56e8-4362-afcb-aee0b969c368\") " pod="openstack/ceilometer-0" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.333816 4754 scope.go:117] "RemoveContainer" containerID="a9ad0da74ce2a2c7b070390c4ee8f3d2377c49dfa6d21804890fef446a5f283e" Jan 05 20:31:35 crc kubenswrapper[4754]: E0105 20:31:35.334668 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9ad0da74ce2a2c7b070390c4ee8f3d2377c49dfa6d21804890fef446a5f283e\": container with ID starting with a9ad0da74ce2a2c7b070390c4ee8f3d2377c49dfa6d21804890fef446a5f283e not found: ID does not exist" containerID="a9ad0da74ce2a2c7b070390c4ee8f3d2377c49dfa6d21804890fef446a5f283e" Jan 05 20:31:35 crc 
kubenswrapper[4754]: I0105 20:31:35.334699 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9ad0da74ce2a2c7b070390c4ee8f3d2377c49dfa6d21804890fef446a5f283e"} err="failed to get container status \"a9ad0da74ce2a2c7b070390c4ee8f3d2377c49dfa6d21804890fef446a5f283e\": rpc error: code = NotFound desc = could not find container \"a9ad0da74ce2a2c7b070390c4ee8f3d2377c49dfa6d21804890fef446a5f283e\": container with ID starting with a9ad0da74ce2a2c7b070390c4ee8f3d2377c49dfa6d21804890fef446a5f283e not found: ID does not exist" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.334740 4754 scope.go:117] "RemoveContainer" containerID="c8bbd22eeda1455bdc17b6f02ace05b5438b349ac596f0237f2be030861a6515" Jan 05 20:31:35 crc kubenswrapper[4754]: E0105 20:31:35.335171 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8bbd22eeda1455bdc17b6f02ace05b5438b349ac596f0237f2be030861a6515\": container with ID starting with c8bbd22eeda1455bdc17b6f02ace05b5438b349ac596f0237f2be030861a6515 not found: ID does not exist" containerID="c8bbd22eeda1455bdc17b6f02ace05b5438b349ac596f0237f2be030861a6515" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.335216 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8bbd22eeda1455bdc17b6f02ace05b5438b349ac596f0237f2be030861a6515"} err="failed to get container status \"c8bbd22eeda1455bdc17b6f02ace05b5438b349ac596f0237f2be030861a6515\": rpc error: code = NotFound desc = could not find container \"c8bbd22eeda1455bdc17b6f02ace05b5438b349ac596f0237f2be030861a6515\": container with ID starting with c8bbd22eeda1455bdc17b6f02ace05b5438b349ac596f0237f2be030861a6515 not found: ID does not exist" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.335231 4754 scope.go:117] "RemoveContainer" containerID="97fc31e8e754942929e94796c1795c10b759f1427e9014298d78e9d6dad897bb" Jan 05 
20:31:35 crc kubenswrapper[4754]: E0105 20:31:35.335625 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97fc31e8e754942929e94796c1795c10b759f1427e9014298d78e9d6dad897bb\": container with ID starting with 97fc31e8e754942929e94796c1795c10b759f1427e9014298d78e9d6dad897bb not found: ID does not exist" containerID="97fc31e8e754942929e94796c1795c10b759f1427e9014298d78e9d6dad897bb" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.335649 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97fc31e8e754942929e94796c1795c10b759f1427e9014298d78e9d6dad897bb"} err="failed to get container status \"97fc31e8e754942929e94796c1795c10b759f1427e9014298d78e9d6dad897bb\": rpc error: code = NotFound desc = could not find container \"97fc31e8e754942929e94796c1795c10b759f1427e9014298d78e9d6dad897bb\": container with ID starting with 97fc31e8e754942929e94796c1795c10b759f1427e9014298d78e9d6dad897bb not found: ID does not exist" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.335663 4754 scope.go:117] "RemoveContainer" containerID="4cbbdb29b69cfdc13897a57c594f001e1b00571c38b916360745f5874199d8f6" Jan 05 20:31:35 crc kubenswrapper[4754]: E0105 20:31:35.335951 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cbbdb29b69cfdc13897a57c594f001e1b00571c38b916360745f5874199d8f6\": container with ID starting with 4cbbdb29b69cfdc13897a57c594f001e1b00571c38b916360745f5874199d8f6 not found: ID does not exist" containerID="4cbbdb29b69cfdc13897a57c594f001e1b00571c38b916360745f5874199d8f6" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.335971 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cbbdb29b69cfdc13897a57c594f001e1b00571c38b916360745f5874199d8f6"} err="failed to get container status 
\"4cbbdb29b69cfdc13897a57c594f001e1b00571c38b916360745f5874199d8f6\": rpc error: code = NotFound desc = could not find container \"4cbbdb29b69cfdc13897a57c594f001e1b00571c38b916360745f5874199d8f6\": container with ID starting with 4cbbdb29b69cfdc13897a57c594f001e1b00571c38b916360745f5874199d8f6 not found: ID does not exist" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.335983 4754 scope.go:117] "RemoveContainer" containerID="502b9f46682cd2e8a6730318a3cbbc1486c82b3e69a37d9af1234a31be2e8082" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.351673 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.357730 4754 scope.go:117] "RemoveContainer" containerID="584892036e205a2c24afbeda4306ae9014bbfd5746f4d3d71d4150b595be2dbd" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.371622 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.385432 4754 scope.go:117] "RemoveContainer" containerID="502b9f46682cd2e8a6730318a3cbbc1486c82b3e69a37d9af1234a31be2e8082" Jan 05 20:31:35 crc kubenswrapper[4754]: E0105 20:31:35.385921 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"502b9f46682cd2e8a6730318a3cbbc1486c82b3e69a37d9af1234a31be2e8082\": container with ID starting with 502b9f46682cd2e8a6730318a3cbbc1486c82b3e69a37d9af1234a31be2e8082 not found: ID does not exist" containerID="502b9f46682cd2e8a6730318a3cbbc1486c82b3e69a37d9af1234a31be2e8082" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.385960 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"502b9f46682cd2e8a6730318a3cbbc1486c82b3e69a37d9af1234a31be2e8082"} err="failed to get container status 
\"502b9f46682cd2e8a6730318a3cbbc1486c82b3e69a37d9af1234a31be2e8082\": rpc error: code = NotFound desc = could not find container \"502b9f46682cd2e8a6730318a3cbbc1486c82b3e69a37d9af1234a31be2e8082\": container with ID starting with 502b9f46682cd2e8a6730318a3cbbc1486c82b3e69a37d9af1234a31be2e8082 not found: ID does not exist" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.385985 4754 scope.go:117] "RemoveContainer" containerID="584892036e205a2c24afbeda4306ae9014bbfd5746f4d3d71d4150b595be2dbd" Jan 05 20:31:35 crc kubenswrapper[4754]: E0105 20:31:35.388484 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"584892036e205a2c24afbeda4306ae9014bbfd5746f4d3d71d4150b595be2dbd\": container with ID starting with 584892036e205a2c24afbeda4306ae9014bbfd5746f4d3d71d4150b595be2dbd not found: ID does not exist" containerID="584892036e205a2c24afbeda4306ae9014bbfd5746f4d3d71d4150b595be2dbd" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.388553 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"584892036e205a2c24afbeda4306ae9014bbfd5746f4d3d71d4150b595be2dbd"} err="failed to get container status \"584892036e205a2c24afbeda4306ae9014bbfd5746f4d3d71d4150b595be2dbd\": rpc error: code = NotFound desc = could not find container \"584892036e205a2c24afbeda4306ae9014bbfd5746f4d3d71d4150b595be2dbd\": container with ID starting with 584892036e205a2c24afbeda4306ae9014bbfd5746f4d3d71d4150b595be2dbd not found: ID does not exist" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.410174 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9914fc23-56e8-4362-afcb-aee0b969c368-run-httpd\") pod \"ceilometer-0\" (UID: \"9914fc23-56e8-4362-afcb-aee0b969c368\") " pod="openstack/ceilometer-0" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.410306 4754 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9914fc23-56e8-4362-afcb-aee0b969c368-config-data\") pod \"ceilometer-0\" (UID: \"9914fc23-56e8-4362-afcb-aee0b969c368\") " pod="openstack/ceilometer-0" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.410334 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9914fc23-56e8-4362-afcb-aee0b969c368-log-httpd\") pod \"ceilometer-0\" (UID: \"9914fc23-56e8-4362-afcb-aee0b969c368\") " pod="openstack/ceilometer-0" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.410398 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9914fc23-56e8-4362-afcb-aee0b969c368-scripts\") pod \"ceilometer-0\" (UID: \"9914fc23-56e8-4362-afcb-aee0b969c368\") " pod="openstack/ceilometer-0" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.410423 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmpmw\" (UniqueName: \"kubernetes.io/projected/9914fc23-56e8-4362-afcb-aee0b969c368-kube-api-access-zmpmw\") pod \"ceilometer-0\" (UID: \"9914fc23-56e8-4362-afcb-aee0b969c368\") " pod="openstack/ceilometer-0" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.410472 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9914fc23-56e8-4362-afcb-aee0b969c368-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9914fc23-56e8-4362-afcb-aee0b969c368\") " pod="openstack/ceilometer-0" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.410566 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9914fc23-56e8-4362-afcb-aee0b969c368-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"9914fc23-56e8-4362-afcb-aee0b969c368\") " pod="openstack/ceilometer-0" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.413030 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9914fc23-56e8-4362-afcb-aee0b969c368-run-httpd\") pod \"ceilometer-0\" (UID: \"9914fc23-56e8-4362-afcb-aee0b969c368\") " pod="openstack/ceilometer-0" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.413159 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9914fc23-56e8-4362-afcb-aee0b969c368-log-httpd\") pod \"ceilometer-0\" (UID: \"9914fc23-56e8-4362-afcb-aee0b969c368\") " pod="openstack/ceilometer-0" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.416486 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9914fc23-56e8-4362-afcb-aee0b969c368-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9914fc23-56e8-4362-afcb-aee0b969c368\") " pod="openstack/ceilometer-0" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.419729 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9914fc23-56e8-4362-afcb-aee0b969c368-scripts\") pod \"ceilometer-0\" (UID: \"9914fc23-56e8-4362-afcb-aee0b969c368\") " pod="openstack/ceilometer-0" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.419871 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9914fc23-56e8-4362-afcb-aee0b969c368-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9914fc23-56e8-4362-afcb-aee0b969c368\") " pod="openstack/ceilometer-0" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.431958 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9914fc23-56e8-4362-afcb-aee0b969c368-config-data\") pod \"ceilometer-0\" (UID: \"9914fc23-56e8-4362-afcb-aee0b969c368\") " pod="openstack/ceilometer-0" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.435536 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmpmw\" (UniqueName: \"kubernetes.io/projected/9914fc23-56e8-4362-afcb-aee0b969c368-kube-api-access-zmpmw\") pod \"ceilometer-0\" (UID: \"9914fc23-56e8-4362-afcb-aee0b969c368\") " pod="openstack/ceilometer-0" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.604215 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e1c51ca-6534-4284-be98-c77785fa8a23" path="/var/lib/kubelet/pods/4e1c51ca-6534-4284-be98-c77785fa8a23/volumes" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.605071 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.718173 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1753ceb9-64e3-4fb9-b33c-d39212c708a2-combined-ca-bundle\") pod \"1753ceb9-64e3-4fb9-b33c-d39212c708a2\" (UID: \"1753ceb9-64e3-4fb9-b33c-d39212c708a2\") " Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.727555 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1753ceb9-64e3-4fb9-b33c-d39212c708a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1753ceb9-64e3-4fb9-b33c-d39212c708a2" (UID: "1753ceb9-64e3-4fb9-b33c-d39212c708a2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.804470 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.815584 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.821092 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1753ceb9-64e3-4fb9-b33c-d39212c708a2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.837470 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.839915 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.842802 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.846466 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.846649 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.846782 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.923736 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6edb563-03e6-4b19-bf71-bb4ee3250e2b-config-data\") pod \"nova-api-0\" (UID: \"a6edb563-03e6-4b19-bf71-bb4ee3250e2b\") " pod="openstack/nova-api-0" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.923834 
4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv26x\" (UniqueName: \"kubernetes.io/projected/a6edb563-03e6-4b19-bf71-bb4ee3250e2b-kube-api-access-vv26x\") pod \"nova-api-0\" (UID: \"a6edb563-03e6-4b19-bf71-bb4ee3250e2b\") " pod="openstack/nova-api-0" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.923890 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6edb563-03e6-4b19-bf71-bb4ee3250e2b-logs\") pod \"nova-api-0\" (UID: \"a6edb563-03e6-4b19-bf71-bb4ee3250e2b\") " pod="openstack/nova-api-0" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.923925 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6edb563-03e6-4b19-bf71-bb4ee3250e2b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a6edb563-03e6-4b19-bf71-bb4ee3250e2b\") " pod="openstack/nova-api-0" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.924005 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6edb563-03e6-4b19-bf71-bb4ee3250e2b-public-tls-certs\") pod \"nova-api-0\" (UID: \"a6edb563-03e6-4b19-bf71-bb4ee3250e2b\") " pod="openstack/nova-api-0" Jan 05 20:31:35 crc kubenswrapper[4754]: I0105 20:31:35.924031 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6edb563-03e6-4b19-bf71-bb4ee3250e2b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a6edb563-03e6-4b19-bf71-bb4ee3250e2b\") " pod="openstack/nova-api-0" Jan 05 20:31:36 crc kubenswrapper[4754]: I0105 20:31:36.028082 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a6edb563-03e6-4b19-bf71-bb4ee3250e2b-public-tls-certs\") pod \"nova-api-0\" (UID: \"a6edb563-03e6-4b19-bf71-bb4ee3250e2b\") " pod="openstack/nova-api-0" Jan 05 20:31:36 crc kubenswrapper[4754]: I0105 20:31:36.028134 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6edb563-03e6-4b19-bf71-bb4ee3250e2b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a6edb563-03e6-4b19-bf71-bb4ee3250e2b\") " pod="openstack/nova-api-0" Jan 05 20:31:36 crc kubenswrapper[4754]: I0105 20:31:36.028205 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6edb563-03e6-4b19-bf71-bb4ee3250e2b-config-data\") pod \"nova-api-0\" (UID: \"a6edb563-03e6-4b19-bf71-bb4ee3250e2b\") " pod="openstack/nova-api-0" Jan 05 20:31:36 crc kubenswrapper[4754]: I0105 20:31:36.028258 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vv26x\" (UniqueName: \"kubernetes.io/projected/a6edb563-03e6-4b19-bf71-bb4ee3250e2b-kube-api-access-vv26x\") pod \"nova-api-0\" (UID: \"a6edb563-03e6-4b19-bf71-bb4ee3250e2b\") " pod="openstack/nova-api-0" Jan 05 20:31:36 crc kubenswrapper[4754]: I0105 20:31:36.028320 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6edb563-03e6-4b19-bf71-bb4ee3250e2b-logs\") pod \"nova-api-0\" (UID: \"a6edb563-03e6-4b19-bf71-bb4ee3250e2b\") " pod="openstack/nova-api-0" Jan 05 20:31:36 crc kubenswrapper[4754]: I0105 20:31:36.028345 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6edb563-03e6-4b19-bf71-bb4ee3250e2b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a6edb563-03e6-4b19-bf71-bb4ee3250e2b\") " pod="openstack/nova-api-0" Jan 05 20:31:36 crc kubenswrapper[4754]: I0105 20:31:36.031574 
4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6edb563-03e6-4b19-bf71-bb4ee3250e2b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a6edb563-03e6-4b19-bf71-bb4ee3250e2b\") " pod="openstack/nova-api-0" Jan 05 20:31:36 crc kubenswrapper[4754]: I0105 20:31:36.031831 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6edb563-03e6-4b19-bf71-bb4ee3250e2b-config-data\") pod \"nova-api-0\" (UID: \"a6edb563-03e6-4b19-bf71-bb4ee3250e2b\") " pod="openstack/nova-api-0" Jan 05 20:31:36 crc kubenswrapper[4754]: I0105 20:31:36.032008 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6edb563-03e6-4b19-bf71-bb4ee3250e2b-logs\") pod \"nova-api-0\" (UID: \"a6edb563-03e6-4b19-bf71-bb4ee3250e2b\") " pod="openstack/nova-api-0" Jan 05 20:31:36 crc kubenswrapper[4754]: I0105 20:31:36.035404 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6edb563-03e6-4b19-bf71-bb4ee3250e2b-public-tls-certs\") pod \"nova-api-0\" (UID: \"a6edb563-03e6-4b19-bf71-bb4ee3250e2b\") " pod="openstack/nova-api-0" Jan 05 20:31:36 crc kubenswrapper[4754]: I0105 20:31:36.035574 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6edb563-03e6-4b19-bf71-bb4ee3250e2b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a6edb563-03e6-4b19-bf71-bb4ee3250e2b\") " pod="openstack/nova-api-0" Jan 05 20:31:36 crc kubenswrapper[4754]: I0105 20:31:36.047479 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vv26x\" (UniqueName: \"kubernetes.io/projected/a6edb563-03e6-4b19-bf71-bb4ee3250e2b-kube-api-access-vv26x\") pod \"nova-api-0\" (UID: \"a6edb563-03e6-4b19-bf71-bb4ee3250e2b\") " 
pod="openstack/nova-api-0" Jan 05 20:31:36 crc kubenswrapper[4754]: I0105 20:31:36.129787 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:31:36 crc kubenswrapper[4754]: I0105 20:31:36.157908 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 05 20:31:36 crc kubenswrapper[4754]: I0105 20:31:36.185025 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9914fc23-56e8-4362-afcb-aee0b969c368","Type":"ContainerStarted","Data":"51e32c9053ffd2f8ee839390201c21cbb5b28e0f0c78efe3bf44637b0e59468a"} Jan 05 20:31:36 crc kubenswrapper[4754]: I0105 20:31:36.207948 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 05 20:31:36 crc kubenswrapper[4754]: I0105 20:31:36.387586 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-jkkk2"] Jan 05 20:31:36 crc kubenswrapper[4754]: I0105 20:31:36.389157 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jkkk2" Jan 05 20:31:36 crc kubenswrapper[4754]: I0105 20:31:36.392501 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 05 20:31:36 crc kubenswrapper[4754]: I0105 20:31:36.392727 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 05 20:31:36 crc kubenswrapper[4754]: I0105 20:31:36.401432 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-jkkk2"] Jan 05 20:31:36 crc kubenswrapper[4754]: I0105 20:31:36.447515 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/486e225f-fd48-4c4d-a277-62ae1886a9f5-config-data\") pod \"nova-cell1-cell-mapping-jkkk2\" (UID: \"486e225f-fd48-4c4d-a277-62ae1886a9f5\") " pod="openstack/nova-cell1-cell-mapping-jkkk2" Jan 05 20:31:36 crc kubenswrapper[4754]: I0105 20:31:36.448066 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/486e225f-fd48-4c4d-a277-62ae1886a9f5-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-jkkk2\" (UID: \"486e225f-fd48-4c4d-a277-62ae1886a9f5\") " pod="openstack/nova-cell1-cell-mapping-jkkk2" Jan 05 20:31:36 crc kubenswrapper[4754]: I0105 20:31:36.448190 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4zvr\" (UniqueName: \"kubernetes.io/projected/486e225f-fd48-4c4d-a277-62ae1886a9f5-kube-api-access-c4zvr\") pod \"nova-cell1-cell-mapping-jkkk2\" (UID: \"486e225f-fd48-4c4d-a277-62ae1886a9f5\") " pod="openstack/nova-cell1-cell-mapping-jkkk2" Jan 05 20:31:36 crc kubenswrapper[4754]: I0105 20:31:36.448390 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/486e225f-fd48-4c4d-a277-62ae1886a9f5-scripts\") pod \"nova-cell1-cell-mapping-jkkk2\" (UID: \"486e225f-fd48-4c4d-a277-62ae1886a9f5\") " pod="openstack/nova-cell1-cell-mapping-jkkk2" Jan 05 20:31:36 crc kubenswrapper[4754]: I0105 20:31:36.551055 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/486e225f-fd48-4c4d-a277-62ae1886a9f5-config-data\") pod \"nova-cell1-cell-mapping-jkkk2\" (UID: \"486e225f-fd48-4c4d-a277-62ae1886a9f5\") " pod="openstack/nova-cell1-cell-mapping-jkkk2" Jan 05 20:31:36 crc kubenswrapper[4754]: I0105 20:31:36.551171 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/486e225f-fd48-4c4d-a277-62ae1886a9f5-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-jkkk2\" (UID: \"486e225f-fd48-4c4d-a277-62ae1886a9f5\") " pod="openstack/nova-cell1-cell-mapping-jkkk2" Jan 05 20:31:36 crc kubenswrapper[4754]: I0105 20:31:36.551225 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4zvr\" (UniqueName: \"kubernetes.io/projected/486e225f-fd48-4c4d-a277-62ae1886a9f5-kube-api-access-c4zvr\") pod \"nova-cell1-cell-mapping-jkkk2\" (UID: \"486e225f-fd48-4c4d-a277-62ae1886a9f5\") " pod="openstack/nova-cell1-cell-mapping-jkkk2" Jan 05 20:31:36 crc kubenswrapper[4754]: I0105 20:31:36.551987 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/486e225f-fd48-4c4d-a277-62ae1886a9f5-scripts\") pod \"nova-cell1-cell-mapping-jkkk2\" (UID: \"486e225f-fd48-4c4d-a277-62ae1886a9f5\") " pod="openstack/nova-cell1-cell-mapping-jkkk2" Jan 05 20:31:36 crc kubenswrapper[4754]: I0105 20:31:36.559685 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/486e225f-fd48-4c4d-a277-62ae1886a9f5-scripts\") pod \"nova-cell1-cell-mapping-jkkk2\" (UID: \"486e225f-fd48-4c4d-a277-62ae1886a9f5\") " pod="openstack/nova-cell1-cell-mapping-jkkk2" Jan 05 20:31:36 crc kubenswrapper[4754]: I0105 20:31:36.560658 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/486e225f-fd48-4c4d-a277-62ae1886a9f5-config-data\") pod \"nova-cell1-cell-mapping-jkkk2\" (UID: \"486e225f-fd48-4c4d-a277-62ae1886a9f5\") " pod="openstack/nova-cell1-cell-mapping-jkkk2" Jan 05 20:31:36 crc kubenswrapper[4754]: I0105 20:31:36.567462 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4zvr\" (UniqueName: \"kubernetes.io/projected/486e225f-fd48-4c4d-a277-62ae1886a9f5-kube-api-access-c4zvr\") pod \"nova-cell1-cell-mapping-jkkk2\" (UID: \"486e225f-fd48-4c4d-a277-62ae1886a9f5\") " pod="openstack/nova-cell1-cell-mapping-jkkk2" Jan 05 20:31:36 crc kubenswrapper[4754]: I0105 20:31:36.576517 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/486e225f-fd48-4c4d-a277-62ae1886a9f5-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-jkkk2\" (UID: \"486e225f-fd48-4c4d-a277-62ae1886a9f5\") " pod="openstack/nova-cell1-cell-mapping-jkkk2" Jan 05 20:31:36 crc kubenswrapper[4754]: I0105 20:31:36.721399 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jkkk2" Jan 05 20:31:36 crc kubenswrapper[4754]: I0105 20:31:36.836818 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 05 20:31:37 crc kubenswrapper[4754]: I0105 20:31:37.202234 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a6edb563-03e6-4b19-bf71-bb4ee3250e2b","Type":"ContainerStarted","Data":"6d2981ee7beee94ee9acfb47cb4940974b4370e40f6caa1ac0f599c9ff9630f4"} Jan 05 20:31:37 crc kubenswrapper[4754]: I0105 20:31:37.202629 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a6edb563-03e6-4b19-bf71-bb4ee3250e2b","Type":"ContainerStarted","Data":"0a6cdba4d12d75b4d54482ef634b9ea9803aaa8b56f4d6a609a762fc928849ef"} Jan 05 20:31:37 crc kubenswrapper[4754]: I0105 20:31:37.203985 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9914fc23-56e8-4362-afcb-aee0b969c368","Type":"ContainerStarted","Data":"b599f552402a1866b8c108a103db0db288cef351ddbd22faabf8f60661e1d9b3"} Jan 05 20:31:37 crc kubenswrapper[4754]: I0105 20:31:37.260535 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-jkkk2"] Jan 05 20:31:37 crc kubenswrapper[4754]: W0105 20:31:37.269331 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod486e225f_fd48_4c4d_a277_62ae1886a9f5.slice/crio-7706edb5b559b85081e18a8425bd1e75169ed8ac307d51a7303ed6474f1d1d16 WatchSource:0}: Error finding container 7706edb5b559b85081e18a8425bd1e75169ed8ac307d51a7303ed6474f1d1d16: Status 404 returned error can't find the container with id 7706edb5b559b85081e18a8425bd1e75169ed8ac307d51a7303ed6474f1d1d16 Jan 05 20:31:37 crc kubenswrapper[4754]: I0105 20:31:37.603864 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1753ceb9-64e3-4fb9-b33c-d39212c708a2" 
path="/var/lib/kubelet/pods/1753ceb9-64e3-4fb9-b33c-d39212c708a2/volumes" Jan 05 20:31:38 crc kubenswrapper[4754]: I0105 20:31:38.220229 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jkkk2" event={"ID":"486e225f-fd48-4c4d-a277-62ae1886a9f5","Type":"ContainerStarted","Data":"e821d89ef64b2a974f742d28d654296a0cf584d3e321b721ea1f772a216b8d41"} Jan 05 20:31:38 crc kubenswrapper[4754]: I0105 20:31:38.220526 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jkkk2" event={"ID":"486e225f-fd48-4c4d-a277-62ae1886a9f5","Type":"ContainerStarted","Data":"7706edb5b559b85081e18a8425bd1e75169ed8ac307d51a7303ed6474f1d1d16"} Jan 05 20:31:38 crc kubenswrapper[4754]: I0105 20:31:38.221927 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9914fc23-56e8-4362-afcb-aee0b969c368","Type":"ContainerStarted","Data":"8d119b89328c6bea6dff40d438308a2aa69a5b9c4bd6f4f74acfefa75b1ad996"} Jan 05 20:31:38 crc kubenswrapper[4754]: I0105 20:31:38.225120 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a6edb563-03e6-4b19-bf71-bb4ee3250e2b","Type":"ContainerStarted","Data":"99012898d97e089ac0012c7d6a6bc0d44fd8336399de382892a0047f9b6494a3"} Jan 05 20:31:38 crc kubenswrapper[4754]: I0105 20:31:38.259353 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-jkkk2" podStartSLOduration=2.25932978 podStartE2EDuration="2.25932978s" podCreationTimestamp="2026-01-05 20:31:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:31:38.244494 +0000 UTC m=+1584.953677874" watchObservedRunningTime="2026-01-05 20:31:38.25932978 +0000 UTC m=+1584.968513664" Jan 05 20:31:38 crc kubenswrapper[4754]: I0105 20:31:38.277196 4754 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/nova-api-0" podStartSLOduration=3.277176808 podStartE2EDuration="3.277176808s" podCreationTimestamp="2026-01-05 20:31:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:31:38.272542927 +0000 UTC m=+1584.981726801" watchObservedRunningTime="2026-01-05 20:31:38.277176808 +0000 UTC m=+1584.986360682" Jan 05 20:31:38 crc kubenswrapper[4754]: I0105 20:31:38.578917 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d99f6bc7f-ddt4h" Jan 05 20:31:38 crc kubenswrapper[4754]: I0105 20:31:38.678469 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-b45w9"] Jan 05 20:31:38 crc kubenswrapper[4754]: I0105 20:31:38.678797 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7877d89589-b45w9" podUID="fedfab08-bf50-4d7c-8f04-583679e20d59" containerName="dnsmasq-dns" containerID="cri-o://4467b60edb38775f310f5123fa19811bbc9bb192c189ac98946277fcabbdf9f7" gracePeriod=10 Jan 05 20:31:39 crc kubenswrapper[4754]: I0105 20:31:39.237280 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9914fc23-56e8-4362-afcb-aee0b969c368","Type":"ContainerStarted","Data":"c6c15d4094f9e41b22895d1152620e6d07f400fd3717b55ccf45309c78218147"} Jan 05 20:31:39 crc kubenswrapper[4754]: I0105 20:31:39.239013 4754 generic.go:334] "Generic (PLEG): container finished" podID="fedfab08-bf50-4d7c-8f04-583679e20d59" containerID="4467b60edb38775f310f5123fa19811bbc9bb192c189ac98946277fcabbdf9f7" exitCode=0 Jan 05 20:31:39 crc kubenswrapper[4754]: I0105 20:31:39.239379 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-b45w9" event={"ID":"fedfab08-bf50-4d7c-8f04-583679e20d59","Type":"ContainerDied","Data":"4467b60edb38775f310f5123fa19811bbc9bb192c189ac98946277fcabbdf9f7"} 
Jan 05 20:31:39 crc kubenswrapper[4754]: I0105 20:31:39.386045 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7877d89589-b45w9" Jan 05 20:31:39 crc kubenswrapper[4754]: I0105 20:31:39.469094 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fedfab08-bf50-4d7c-8f04-583679e20d59-ovsdbserver-sb\") pod \"fedfab08-bf50-4d7c-8f04-583679e20d59\" (UID: \"fedfab08-bf50-4d7c-8f04-583679e20d59\") " Jan 05 20:31:39 crc kubenswrapper[4754]: I0105 20:31:39.469169 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fedfab08-bf50-4d7c-8f04-583679e20d59-dns-swift-storage-0\") pod \"fedfab08-bf50-4d7c-8f04-583679e20d59\" (UID: \"fedfab08-bf50-4d7c-8f04-583679e20d59\") " Jan 05 20:31:39 crc kubenswrapper[4754]: I0105 20:31:39.469195 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8vhs\" (UniqueName: \"kubernetes.io/projected/fedfab08-bf50-4d7c-8f04-583679e20d59-kube-api-access-d8vhs\") pod \"fedfab08-bf50-4d7c-8f04-583679e20d59\" (UID: \"fedfab08-bf50-4d7c-8f04-583679e20d59\") " Jan 05 20:31:39 crc kubenswrapper[4754]: I0105 20:31:39.469260 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fedfab08-bf50-4d7c-8f04-583679e20d59-config\") pod \"fedfab08-bf50-4d7c-8f04-583679e20d59\" (UID: \"fedfab08-bf50-4d7c-8f04-583679e20d59\") " Jan 05 20:31:39 crc kubenswrapper[4754]: I0105 20:31:39.469336 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fedfab08-bf50-4d7c-8f04-583679e20d59-ovsdbserver-nb\") pod \"fedfab08-bf50-4d7c-8f04-583679e20d59\" (UID: \"fedfab08-bf50-4d7c-8f04-583679e20d59\") " Jan 05 20:31:39 crc 
kubenswrapper[4754]: I0105 20:31:39.469451 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fedfab08-bf50-4d7c-8f04-583679e20d59-dns-svc\") pod \"fedfab08-bf50-4d7c-8f04-583679e20d59\" (UID: \"fedfab08-bf50-4d7c-8f04-583679e20d59\") " Jan 05 20:31:39 crc kubenswrapper[4754]: I0105 20:31:39.477525 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fedfab08-bf50-4d7c-8f04-583679e20d59-kube-api-access-d8vhs" (OuterVolumeSpecName: "kube-api-access-d8vhs") pod "fedfab08-bf50-4d7c-8f04-583679e20d59" (UID: "fedfab08-bf50-4d7c-8f04-583679e20d59"). InnerVolumeSpecName "kube-api-access-d8vhs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:31:39 crc kubenswrapper[4754]: I0105 20:31:39.537200 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fedfab08-bf50-4d7c-8f04-583679e20d59-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fedfab08-bf50-4d7c-8f04-583679e20d59" (UID: "fedfab08-bf50-4d7c-8f04-583679e20d59"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:31:39 crc kubenswrapper[4754]: I0105 20:31:39.537422 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fedfab08-bf50-4d7c-8f04-583679e20d59-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fedfab08-bf50-4d7c-8f04-583679e20d59" (UID: "fedfab08-bf50-4d7c-8f04-583679e20d59"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:31:39 crc kubenswrapper[4754]: I0105 20:31:39.537216 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fedfab08-bf50-4d7c-8f04-583679e20d59-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fedfab08-bf50-4d7c-8f04-583679e20d59" (UID: "fedfab08-bf50-4d7c-8f04-583679e20d59"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:31:39 crc kubenswrapper[4754]: I0105 20:31:39.557876 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fedfab08-bf50-4d7c-8f04-583679e20d59-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fedfab08-bf50-4d7c-8f04-583679e20d59" (UID: "fedfab08-bf50-4d7c-8f04-583679e20d59"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:31:39 crc kubenswrapper[4754]: I0105 20:31:39.573435 4754 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fedfab08-bf50-4d7c-8f04-583679e20d59-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:39 crc kubenswrapper[4754]: I0105 20:31:39.573475 4754 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fedfab08-bf50-4d7c-8f04-583679e20d59-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:39 crc kubenswrapper[4754]: I0105 20:31:39.573491 4754 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fedfab08-bf50-4d7c-8f04-583679e20d59-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:39 crc kubenswrapper[4754]: I0105 20:31:39.573502 4754 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fedfab08-bf50-4d7c-8f04-583679e20d59-dns-swift-storage-0\") on node 
\"crc\" DevicePath \"\"" Jan 05 20:31:39 crc kubenswrapper[4754]: I0105 20:31:39.573514 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8vhs\" (UniqueName: \"kubernetes.io/projected/fedfab08-bf50-4d7c-8f04-583679e20d59-kube-api-access-d8vhs\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:39 crc kubenswrapper[4754]: I0105 20:31:39.573904 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fedfab08-bf50-4d7c-8f04-583679e20d59-config" (OuterVolumeSpecName: "config") pod "fedfab08-bf50-4d7c-8f04-583679e20d59" (UID: "fedfab08-bf50-4d7c-8f04-583679e20d59"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:31:39 crc kubenswrapper[4754]: I0105 20:31:39.677226 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fedfab08-bf50-4d7c-8f04-583679e20d59-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:40 crc kubenswrapper[4754]: E0105 20:31:40.038507 4754 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cce9715_2736_4de9_978c_355b04462cc6.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cce9715_2736_4de9_978c_355b04462cc6.slice/crio-e6642792859985d8c56d377f3c920ca0c662f56bf30c45d48e24d890bd975121\": RecentStats: unable to find data in memory cache]" Jan 05 20:31:40 crc kubenswrapper[4754]: I0105 20:31:40.249561 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-b45w9" event={"ID":"fedfab08-bf50-4d7c-8f04-583679e20d59","Type":"ContainerDied","Data":"8a8ced6b45a3c0fc56315204ef06e10f85d3333b67d3f61edc09f443d580d537"} Jan 05 20:31:40 crc kubenswrapper[4754]: I0105 20:31:40.249627 4754 scope.go:117] "RemoveContainer" 
containerID="4467b60edb38775f310f5123fa19811bbc9bb192c189ac98946277fcabbdf9f7" Jan 05 20:31:40 crc kubenswrapper[4754]: I0105 20:31:40.249781 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7877d89589-b45w9" Jan 05 20:31:40 crc kubenswrapper[4754]: I0105 20:31:40.280023 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-b45w9"] Jan 05 20:31:40 crc kubenswrapper[4754]: I0105 20:31:40.297048 4754 scope.go:117] "RemoveContainer" containerID="7fdb49ef997c7f28c9f9793307830175867e06ba93d011f513fafd64d08156a1" Jan 05 20:31:40 crc kubenswrapper[4754]: I0105 20:31:40.301940 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-b45w9"] Jan 05 20:31:41 crc kubenswrapper[4754]: I0105 20:31:41.263912 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9914fc23-56e8-4362-afcb-aee0b969c368","Type":"ContainerStarted","Data":"e48ce5190f17e3c4d72260e1204b69aac27d76ac18f4f73846ef63b4854e048b"} Jan 05 20:31:41 crc kubenswrapper[4754]: I0105 20:31:41.264107 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 05 20:31:41 crc kubenswrapper[4754]: I0105 20:31:41.286188 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.342323252 podStartE2EDuration="6.286170917s" podCreationTimestamp="2026-01-05 20:31:35 +0000 UTC" firstStartedPulling="2026-01-05 20:31:36.126311833 +0000 UTC m=+1582.835495717" lastFinishedPulling="2026-01-05 20:31:40.070159508 +0000 UTC m=+1586.779343382" observedRunningTime="2026-01-05 20:31:41.283499527 +0000 UTC m=+1587.992683441" watchObservedRunningTime="2026-01-05 20:31:41.286170917 +0000 UTC m=+1587.995354811" Jan 05 20:31:41 crc kubenswrapper[4754]: I0105 20:31:41.592243 4754 scope.go:117] "RemoveContainer" 
containerID="1ca93b3e456962380f07853ce3cbf0c6bfbaabb69bd3827ae3e68631c1bd1572" Jan 05 20:31:41 crc kubenswrapper[4754]: E0105 20:31:41.592886 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:31:41 crc kubenswrapper[4754]: I0105 20:31:41.611050 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fedfab08-bf50-4d7c-8f04-583679e20d59" path="/var/lib/kubelet/pods/fedfab08-bf50-4d7c-8f04-583679e20d59/volumes" Jan 05 20:31:43 crc kubenswrapper[4754]: I0105 20:31:43.296405 4754 generic.go:334] "Generic (PLEG): container finished" podID="486e225f-fd48-4c4d-a277-62ae1886a9f5" containerID="e821d89ef64b2a974f742d28d654296a0cf584d3e321b721ea1f772a216b8d41" exitCode=0 Jan 05 20:31:43 crc kubenswrapper[4754]: I0105 20:31:43.296473 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jkkk2" event={"ID":"486e225f-fd48-4c4d-a277-62ae1886a9f5","Type":"ContainerDied","Data":"e821d89ef64b2a974f742d28d654296a0cf584d3e321b721ea1f772a216b8d41"} Jan 05 20:31:44 crc kubenswrapper[4754]: I0105 20:31:44.823382 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jkkk2" Jan 05 20:31:44 crc kubenswrapper[4754]: I0105 20:31:44.917735 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4zvr\" (UniqueName: \"kubernetes.io/projected/486e225f-fd48-4c4d-a277-62ae1886a9f5-kube-api-access-c4zvr\") pod \"486e225f-fd48-4c4d-a277-62ae1886a9f5\" (UID: \"486e225f-fd48-4c4d-a277-62ae1886a9f5\") " Jan 05 20:31:44 crc kubenswrapper[4754]: I0105 20:31:44.917825 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/486e225f-fd48-4c4d-a277-62ae1886a9f5-combined-ca-bundle\") pod \"486e225f-fd48-4c4d-a277-62ae1886a9f5\" (UID: \"486e225f-fd48-4c4d-a277-62ae1886a9f5\") " Jan 05 20:31:44 crc kubenswrapper[4754]: I0105 20:31:44.918072 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/486e225f-fd48-4c4d-a277-62ae1886a9f5-scripts\") pod \"486e225f-fd48-4c4d-a277-62ae1886a9f5\" (UID: \"486e225f-fd48-4c4d-a277-62ae1886a9f5\") " Jan 05 20:31:44 crc kubenswrapper[4754]: I0105 20:31:44.918111 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/486e225f-fd48-4c4d-a277-62ae1886a9f5-config-data\") pod \"486e225f-fd48-4c4d-a277-62ae1886a9f5\" (UID: \"486e225f-fd48-4c4d-a277-62ae1886a9f5\") " Jan 05 20:31:44 crc kubenswrapper[4754]: I0105 20:31:44.924238 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/486e225f-fd48-4c4d-a277-62ae1886a9f5-kube-api-access-c4zvr" (OuterVolumeSpecName: "kube-api-access-c4zvr") pod "486e225f-fd48-4c4d-a277-62ae1886a9f5" (UID: "486e225f-fd48-4c4d-a277-62ae1886a9f5"). InnerVolumeSpecName "kube-api-access-c4zvr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:31:44 crc kubenswrapper[4754]: I0105 20:31:44.924555 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/486e225f-fd48-4c4d-a277-62ae1886a9f5-scripts" (OuterVolumeSpecName: "scripts") pod "486e225f-fd48-4c4d-a277-62ae1886a9f5" (UID: "486e225f-fd48-4c4d-a277-62ae1886a9f5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:31:44 crc kubenswrapper[4754]: I0105 20:31:44.951154 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/486e225f-fd48-4c4d-a277-62ae1886a9f5-config-data" (OuterVolumeSpecName: "config-data") pod "486e225f-fd48-4c4d-a277-62ae1886a9f5" (UID: "486e225f-fd48-4c4d-a277-62ae1886a9f5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:31:44 crc kubenswrapper[4754]: I0105 20:31:44.957884 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/486e225f-fd48-4c4d-a277-62ae1886a9f5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "486e225f-fd48-4c4d-a277-62ae1886a9f5" (UID: "486e225f-fd48-4c4d-a277-62ae1886a9f5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:31:45 crc kubenswrapper[4754]: I0105 20:31:45.020888 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/486e225f-fd48-4c4d-a277-62ae1886a9f5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:45 crc kubenswrapper[4754]: I0105 20:31:45.020947 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/486e225f-fd48-4c4d-a277-62ae1886a9f5-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:45 crc kubenswrapper[4754]: I0105 20:31:45.020968 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/486e225f-fd48-4c4d-a277-62ae1886a9f5-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:45 crc kubenswrapper[4754]: I0105 20:31:45.020987 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4zvr\" (UniqueName: \"kubernetes.io/projected/486e225f-fd48-4c4d-a277-62ae1886a9f5-kube-api-access-c4zvr\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:45 crc kubenswrapper[4754]: I0105 20:31:45.326072 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jkkk2" event={"ID":"486e225f-fd48-4c4d-a277-62ae1886a9f5","Type":"ContainerDied","Data":"7706edb5b559b85081e18a8425bd1e75169ed8ac307d51a7303ed6474f1d1d16"} Jan 05 20:31:45 crc kubenswrapper[4754]: I0105 20:31:45.326124 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7706edb5b559b85081e18a8425bd1e75169ed8ac307d51a7303ed6474f1d1d16" Jan 05 20:31:45 crc kubenswrapper[4754]: I0105 20:31:45.326165 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jkkk2" Jan 05 20:31:45 crc kubenswrapper[4754]: I0105 20:31:45.521733 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 05 20:31:45 crc kubenswrapper[4754]: I0105 20:31:45.522135 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a6edb563-03e6-4b19-bf71-bb4ee3250e2b" containerName="nova-api-log" containerID="cri-o://6d2981ee7beee94ee9acfb47cb4940974b4370e40f6caa1ac0f599c9ff9630f4" gracePeriod=30 Jan 05 20:31:45 crc kubenswrapper[4754]: I0105 20:31:45.522197 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a6edb563-03e6-4b19-bf71-bb4ee3250e2b" containerName="nova-api-api" containerID="cri-o://99012898d97e089ac0012c7d6a6bc0d44fd8336399de382892a0047f9b6494a3" gracePeriod=30 Jan 05 20:31:45 crc kubenswrapper[4754]: I0105 20:31:45.534656 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 05 20:31:45 crc kubenswrapper[4754]: I0105 20:31:45.534846 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="7e603b43-937b-4fb6-8b7b-e4f93e03b069" containerName="nova-scheduler-scheduler" containerID="cri-o://fb57b23d3e04615db9718a37d743e238b46483f8db03070ba10b7c9deba339f7" gracePeriod=30 Jan 05 20:31:45 crc kubenswrapper[4754]: I0105 20:31:45.619897 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 20:31:45 crc kubenswrapper[4754]: I0105 20:31:45.620956 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f" containerName="nova-metadata-metadata" containerID="cri-o://a97ed84d51f82d64512ed7ceb1f4ba52e6804bd89f4f73f7e98edfe95d07939b" gracePeriod=30 Jan 05 20:31:45 crc kubenswrapper[4754]: I0105 20:31:45.620275 4754 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f" containerName="nova-metadata-log" containerID="cri-o://8b8bd501fac7896c908f66daa74bb70d6c9415d9d095164c7ed8a11a0a69e856" gracePeriod=30 Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.226340 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.253769 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vv26x\" (UniqueName: \"kubernetes.io/projected/a6edb563-03e6-4b19-bf71-bb4ee3250e2b-kube-api-access-vv26x\") pod \"a6edb563-03e6-4b19-bf71-bb4ee3250e2b\" (UID: \"a6edb563-03e6-4b19-bf71-bb4ee3250e2b\") " Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.253807 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6edb563-03e6-4b19-bf71-bb4ee3250e2b-config-data\") pod \"a6edb563-03e6-4b19-bf71-bb4ee3250e2b\" (UID: \"a6edb563-03e6-4b19-bf71-bb4ee3250e2b\") " Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.253936 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6edb563-03e6-4b19-bf71-bb4ee3250e2b-public-tls-certs\") pod \"a6edb563-03e6-4b19-bf71-bb4ee3250e2b\" (UID: \"a6edb563-03e6-4b19-bf71-bb4ee3250e2b\") " Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.254000 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6edb563-03e6-4b19-bf71-bb4ee3250e2b-logs\") pod \"a6edb563-03e6-4b19-bf71-bb4ee3250e2b\" (UID: \"a6edb563-03e6-4b19-bf71-bb4ee3250e2b\") " Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.254034 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6edb563-03e6-4b19-bf71-bb4ee3250e2b-internal-tls-certs\") pod \"a6edb563-03e6-4b19-bf71-bb4ee3250e2b\" (UID: \"a6edb563-03e6-4b19-bf71-bb4ee3250e2b\") " Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.254244 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6edb563-03e6-4b19-bf71-bb4ee3250e2b-combined-ca-bundle\") pod \"a6edb563-03e6-4b19-bf71-bb4ee3250e2b\" (UID: \"a6edb563-03e6-4b19-bf71-bb4ee3250e2b\") " Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.254594 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6edb563-03e6-4b19-bf71-bb4ee3250e2b-logs" (OuterVolumeSpecName: "logs") pod "a6edb563-03e6-4b19-bf71-bb4ee3250e2b" (UID: "a6edb563-03e6-4b19-bf71-bb4ee3250e2b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.255527 4754 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6edb563-03e6-4b19-bf71-bb4ee3250e2b-logs\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.266884 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6edb563-03e6-4b19-bf71-bb4ee3250e2b-kube-api-access-vv26x" (OuterVolumeSpecName: "kube-api-access-vv26x") pod "a6edb563-03e6-4b19-bf71-bb4ee3250e2b" (UID: "a6edb563-03e6-4b19-bf71-bb4ee3250e2b"). InnerVolumeSpecName "kube-api-access-vv26x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.300790 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6edb563-03e6-4b19-bf71-bb4ee3250e2b-config-data" (OuterVolumeSpecName: "config-data") pod "a6edb563-03e6-4b19-bf71-bb4ee3250e2b" (UID: "a6edb563-03e6-4b19-bf71-bb4ee3250e2b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.324587 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6edb563-03e6-4b19-bf71-bb4ee3250e2b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a6edb563-03e6-4b19-bf71-bb4ee3250e2b" (UID: "a6edb563-03e6-4b19-bf71-bb4ee3250e2b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.332950 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6edb563-03e6-4b19-bf71-bb4ee3250e2b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a6edb563-03e6-4b19-bf71-bb4ee3250e2b" (UID: "a6edb563-03e6-4b19-bf71-bb4ee3250e2b"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.342549 4754 generic.go:334] "Generic (PLEG): container finished" podID="a6edb563-03e6-4b19-bf71-bb4ee3250e2b" containerID="99012898d97e089ac0012c7d6a6bc0d44fd8336399de382892a0047f9b6494a3" exitCode=0 Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.342580 4754 generic.go:334] "Generic (PLEG): container finished" podID="a6edb563-03e6-4b19-bf71-bb4ee3250e2b" containerID="6d2981ee7beee94ee9acfb47cb4940974b4370e40f6caa1ac0f599c9ff9630f4" exitCode=143 Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.342622 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a6edb563-03e6-4b19-bf71-bb4ee3250e2b","Type":"ContainerDied","Data":"99012898d97e089ac0012c7d6a6bc0d44fd8336399de382892a0047f9b6494a3"} Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.342650 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a6edb563-03e6-4b19-bf71-bb4ee3250e2b","Type":"ContainerDied","Data":"6d2981ee7beee94ee9acfb47cb4940974b4370e40f6caa1ac0f599c9ff9630f4"} Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.342660 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a6edb563-03e6-4b19-bf71-bb4ee3250e2b","Type":"ContainerDied","Data":"0a6cdba4d12d75b4d54482ef634b9ea9803aaa8b56f4d6a609a762fc928849ef"} Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.342677 4754 scope.go:117] "RemoveContainer" containerID="99012898d97e089ac0012c7d6a6bc0d44fd8336399de382892a0047f9b6494a3" Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.342804 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.345692 4754 generic.go:334] "Generic (PLEG): container finished" podID="261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f" containerID="8b8bd501fac7896c908f66daa74bb70d6c9415d9d095164c7ed8a11a0a69e856" exitCode=143 Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.345724 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f","Type":"ContainerDied","Data":"8b8bd501fac7896c908f66daa74bb70d6c9415d9d095164c7ed8a11a0a69e856"} Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.347980 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6edb563-03e6-4b19-bf71-bb4ee3250e2b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a6edb563-03e6-4b19-bf71-bb4ee3250e2b" (UID: "a6edb563-03e6-4b19-bf71-bb4ee3250e2b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.357721 4754 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6edb563-03e6-4b19-bf71-bb4ee3250e2b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.357749 4754 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6edb563-03e6-4b19-bf71-bb4ee3250e2b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.357760 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6edb563-03e6-4b19-bf71-bb4ee3250e2b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.357769 4754 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-vv26x\" (UniqueName: \"kubernetes.io/projected/a6edb563-03e6-4b19-bf71-bb4ee3250e2b-kube-api-access-vv26x\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.357807 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6edb563-03e6-4b19-bf71-bb4ee3250e2b-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.404514 4754 scope.go:117] "RemoveContainer" containerID="6d2981ee7beee94ee9acfb47cb4940974b4370e40f6caa1ac0f599c9ff9630f4" Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.435491 4754 scope.go:117] "RemoveContainer" containerID="99012898d97e089ac0012c7d6a6bc0d44fd8336399de382892a0047f9b6494a3" Jan 05 20:31:46 crc kubenswrapper[4754]: E0105 20:31:46.436103 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99012898d97e089ac0012c7d6a6bc0d44fd8336399de382892a0047f9b6494a3\": container with ID starting with 99012898d97e089ac0012c7d6a6bc0d44fd8336399de382892a0047f9b6494a3 not found: ID does not exist" containerID="99012898d97e089ac0012c7d6a6bc0d44fd8336399de382892a0047f9b6494a3" Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.436142 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99012898d97e089ac0012c7d6a6bc0d44fd8336399de382892a0047f9b6494a3"} err="failed to get container status \"99012898d97e089ac0012c7d6a6bc0d44fd8336399de382892a0047f9b6494a3\": rpc error: code = NotFound desc = could not find container \"99012898d97e089ac0012c7d6a6bc0d44fd8336399de382892a0047f9b6494a3\": container with ID starting with 99012898d97e089ac0012c7d6a6bc0d44fd8336399de382892a0047f9b6494a3 not found: ID does not exist" Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.436171 4754 scope.go:117] "RemoveContainer" 
containerID="6d2981ee7beee94ee9acfb47cb4940974b4370e40f6caa1ac0f599c9ff9630f4" Jan 05 20:31:46 crc kubenswrapper[4754]: E0105 20:31:46.436747 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d2981ee7beee94ee9acfb47cb4940974b4370e40f6caa1ac0f599c9ff9630f4\": container with ID starting with 6d2981ee7beee94ee9acfb47cb4940974b4370e40f6caa1ac0f599c9ff9630f4 not found: ID does not exist" containerID="6d2981ee7beee94ee9acfb47cb4940974b4370e40f6caa1ac0f599c9ff9630f4" Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.436778 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d2981ee7beee94ee9acfb47cb4940974b4370e40f6caa1ac0f599c9ff9630f4"} err="failed to get container status \"6d2981ee7beee94ee9acfb47cb4940974b4370e40f6caa1ac0f599c9ff9630f4\": rpc error: code = NotFound desc = could not find container \"6d2981ee7beee94ee9acfb47cb4940974b4370e40f6caa1ac0f599c9ff9630f4\": container with ID starting with 6d2981ee7beee94ee9acfb47cb4940974b4370e40f6caa1ac0f599c9ff9630f4 not found: ID does not exist" Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.436796 4754 scope.go:117] "RemoveContainer" containerID="99012898d97e089ac0012c7d6a6bc0d44fd8336399de382892a0047f9b6494a3" Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.437272 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99012898d97e089ac0012c7d6a6bc0d44fd8336399de382892a0047f9b6494a3"} err="failed to get container status \"99012898d97e089ac0012c7d6a6bc0d44fd8336399de382892a0047f9b6494a3\": rpc error: code = NotFound desc = could not find container \"99012898d97e089ac0012c7d6a6bc0d44fd8336399de382892a0047f9b6494a3\": container with ID starting with 99012898d97e089ac0012c7d6a6bc0d44fd8336399de382892a0047f9b6494a3 not found: ID does not exist" Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.437409 4754 scope.go:117] 
"RemoveContainer" containerID="6d2981ee7beee94ee9acfb47cb4940974b4370e40f6caa1ac0f599c9ff9630f4" Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.438069 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d2981ee7beee94ee9acfb47cb4940974b4370e40f6caa1ac0f599c9ff9630f4"} err="failed to get container status \"6d2981ee7beee94ee9acfb47cb4940974b4370e40f6caa1ac0f599c9ff9630f4\": rpc error: code = NotFound desc = could not find container \"6d2981ee7beee94ee9acfb47cb4940974b4370e40f6caa1ac0f599c9ff9630f4\": container with ID starting with 6d2981ee7beee94ee9acfb47cb4940974b4370e40f6caa1ac0f599c9ff9630f4 not found: ID does not exist" Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.681709 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.708495 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.724854 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 05 20:31:46 crc kubenswrapper[4754]: E0105 20:31:46.725534 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6edb563-03e6-4b19-bf71-bb4ee3250e2b" containerName="nova-api-api" Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.725559 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6edb563-03e6-4b19-bf71-bb4ee3250e2b" containerName="nova-api-api" Jan 05 20:31:46 crc kubenswrapper[4754]: E0105 20:31:46.725616 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6edb563-03e6-4b19-bf71-bb4ee3250e2b" containerName="nova-api-log" Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.725628 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6edb563-03e6-4b19-bf71-bb4ee3250e2b" containerName="nova-api-log" Jan 05 20:31:46 crc kubenswrapper[4754]: E0105 20:31:46.725661 4754 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="486e225f-fd48-4c4d-a277-62ae1886a9f5" containerName="nova-manage" Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.725669 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="486e225f-fd48-4c4d-a277-62ae1886a9f5" containerName="nova-manage" Jan 05 20:31:46 crc kubenswrapper[4754]: E0105 20:31:46.725683 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fedfab08-bf50-4d7c-8f04-583679e20d59" containerName="init" Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.725691 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="fedfab08-bf50-4d7c-8f04-583679e20d59" containerName="init" Jan 05 20:31:46 crc kubenswrapper[4754]: E0105 20:31:46.725708 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fedfab08-bf50-4d7c-8f04-583679e20d59" containerName="dnsmasq-dns" Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.725716 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="fedfab08-bf50-4d7c-8f04-583679e20d59" containerName="dnsmasq-dns" Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.725997 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="486e225f-fd48-4c4d-a277-62ae1886a9f5" containerName="nova-manage" Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.726029 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6edb563-03e6-4b19-bf71-bb4ee3250e2b" containerName="nova-api-api" Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.726060 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="fedfab08-bf50-4d7c-8f04-583679e20d59" containerName="dnsmasq-dns" Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.726076 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6edb563-03e6-4b19-bf71-bb4ee3250e2b" containerName="nova-api-log" Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.728408 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.733678 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.733897 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.734232 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.764355 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.767492 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82b1f54b-5485-44ad-9d23-ba1243ea1281-logs\") pod \"nova-api-0\" (UID: \"82b1f54b-5485-44ad-9d23-ba1243ea1281\") " pod="openstack/nova-api-0" Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.767585 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqqlk\" (UniqueName: \"kubernetes.io/projected/82b1f54b-5485-44ad-9d23-ba1243ea1281-kube-api-access-dqqlk\") pod \"nova-api-0\" (UID: \"82b1f54b-5485-44ad-9d23-ba1243ea1281\") " pod="openstack/nova-api-0" Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.767668 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82b1f54b-5485-44ad-9d23-ba1243ea1281-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"82b1f54b-5485-44ad-9d23-ba1243ea1281\") " pod="openstack/nova-api-0" Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.767761 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/82b1f54b-5485-44ad-9d23-ba1243ea1281-public-tls-certs\") pod \"nova-api-0\" (UID: \"82b1f54b-5485-44ad-9d23-ba1243ea1281\") " pod="openstack/nova-api-0" Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.767942 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82b1f54b-5485-44ad-9d23-ba1243ea1281-config-data\") pod \"nova-api-0\" (UID: \"82b1f54b-5485-44ad-9d23-ba1243ea1281\") " pod="openstack/nova-api-0" Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.768117 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/82b1f54b-5485-44ad-9d23-ba1243ea1281-internal-tls-certs\") pod \"nova-api-0\" (UID: \"82b1f54b-5485-44ad-9d23-ba1243ea1281\") " pod="openstack/nova-api-0" Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.870023 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/82b1f54b-5485-44ad-9d23-ba1243ea1281-internal-tls-certs\") pod \"nova-api-0\" (UID: \"82b1f54b-5485-44ad-9d23-ba1243ea1281\") " pod="openstack/nova-api-0" Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.870161 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82b1f54b-5485-44ad-9d23-ba1243ea1281-logs\") pod \"nova-api-0\" (UID: \"82b1f54b-5485-44ad-9d23-ba1243ea1281\") " pod="openstack/nova-api-0" Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.870201 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqqlk\" (UniqueName: \"kubernetes.io/projected/82b1f54b-5485-44ad-9d23-ba1243ea1281-kube-api-access-dqqlk\") pod \"nova-api-0\" (UID: \"82b1f54b-5485-44ad-9d23-ba1243ea1281\") " pod="openstack/nova-api-0" Jan 05 
20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.870240 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82b1f54b-5485-44ad-9d23-ba1243ea1281-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"82b1f54b-5485-44ad-9d23-ba1243ea1281\") " pod="openstack/nova-api-0" Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.870284 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/82b1f54b-5485-44ad-9d23-ba1243ea1281-public-tls-certs\") pod \"nova-api-0\" (UID: \"82b1f54b-5485-44ad-9d23-ba1243ea1281\") " pod="openstack/nova-api-0" Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.870357 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82b1f54b-5485-44ad-9d23-ba1243ea1281-config-data\") pod \"nova-api-0\" (UID: \"82b1f54b-5485-44ad-9d23-ba1243ea1281\") " pod="openstack/nova-api-0" Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.871964 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82b1f54b-5485-44ad-9d23-ba1243ea1281-logs\") pod \"nova-api-0\" (UID: \"82b1f54b-5485-44ad-9d23-ba1243ea1281\") " pod="openstack/nova-api-0" Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.875782 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82b1f54b-5485-44ad-9d23-ba1243ea1281-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"82b1f54b-5485-44ad-9d23-ba1243ea1281\") " pod="openstack/nova-api-0" Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.875989 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/82b1f54b-5485-44ad-9d23-ba1243ea1281-public-tls-certs\") pod \"nova-api-0\" (UID: 
\"82b1f54b-5485-44ad-9d23-ba1243ea1281\") " pod="openstack/nova-api-0" Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.877257 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82b1f54b-5485-44ad-9d23-ba1243ea1281-config-data\") pod \"nova-api-0\" (UID: \"82b1f54b-5485-44ad-9d23-ba1243ea1281\") " pod="openstack/nova-api-0" Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.879950 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/82b1f54b-5485-44ad-9d23-ba1243ea1281-internal-tls-certs\") pod \"nova-api-0\" (UID: \"82b1f54b-5485-44ad-9d23-ba1243ea1281\") " pod="openstack/nova-api-0" Jan 05 20:31:46 crc kubenswrapper[4754]: I0105 20:31:46.888517 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqqlk\" (UniqueName: \"kubernetes.io/projected/82b1f54b-5485-44ad-9d23-ba1243ea1281-kube-api-access-dqqlk\") pod \"nova-api-0\" (UID: \"82b1f54b-5485-44ad-9d23-ba1243ea1281\") " pod="openstack/nova-api-0" Jan 05 20:31:47 crc kubenswrapper[4754]: I0105 20:31:47.050223 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 05 20:31:47 crc kubenswrapper[4754]: W0105 20:31:47.572007 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82b1f54b_5485_44ad_9d23_ba1243ea1281.slice/crio-bd1eff43bef5b3f7ba469edb0ad3b67391476cfa69ff5548fdfd5288bbd648d5 WatchSource:0}: Error finding container bd1eff43bef5b3f7ba469edb0ad3b67391476cfa69ff5548fdfd5288bbd648d5: Status 404 returned error can't find the container with id bd1eff43bef5b3f7ba469edb0ad3b67391476cfa69ff5548fdfd5288bbd648d5 Jan 05 20:31:47 crc kubenswrapper[4754]: I0105 20:31:47.577026 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 05 20:31:47 crc kubenswrapper[4754]: I0105 20:31:47.614381 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6edb563-03e6-4b19-bf71-bb4ee3250e2b" path="/var/lib/kubelet/pods/a6edb563-03e6-4b19-bf71-bb4ee3250e2b/volumes" Jan 05 20:31:48 crc kubenswrapper[4754]: E0105 20:31:48.259411 4754 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cce9715_2736_4de9_978c_355b04462cc6.slice/crio-e6642792859985d8c56d377f3c920ca0c662f56bf30c45d48e24d890bd975121\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cce9715_2736_4de9_978c_355b04462cc6.slice\": RecentStats: unable to find data in memory cache]" Jan 05 20:31:48 crc kubenswrapper[4754]: E0105 20:31:48.259589 4754 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cce9715_2736_4de9_978c_355b04462cc6.slice/crio-e6642792859985d8c56d377f3c920ca0c662f56bf30c45d48e24d890bd975121\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cce9715_2736_4de9_978c_355b04462cc6.slice\": RecentStats: unable to find data in memory cache]" Jan 05 20:31:48 crc kubenswrapper[4754]: I0105 20:31:48.377627 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"82b1f54b-5485-44ad-9d23-ba1243ea1281","Type":"ContainerStarted","Data":"3ce24f11a4d6797da6943142a6448833fec4005dd1cba165e9803726633bd17a"} Jan 05 20:31:48 crc kubenswrapper[4754]: I0105 20:31:48.377673 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"82b1f54b-5485-44ad-9d23-ba1243ea1281","Type":"ContainerStarted","Data":"5a42988379235ff8467cce81b1132a05294b111c4b61de5663e62df8f888fbdc"} Jan 05 20:31:48 crc kubenswrapper[4754]: I0105 20:31:48.377683 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"82b1f54b-5485-44ad-9d23-ba1243ea1281","Type":"ContainerStarted","Data":"bd1eff43bef5b3f7ba469edb0ad3b67391476cfa69ff5548fdfd5288bbd648d5"} Jan 05 20:31:48 crc kubenswrapper[4754]: I0105 20:31:48.411274 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.411254644 podStartE2EDuration="2.411254644s" podCreationTimestamp="2026-01-05 20:31:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:31:48.395996883 +0000 UTC m=+1595.105180777" watchObservedRunningTime="2026-01-05 20:31:48.411254644 +0000 UTC m=+1595.120438518" Jan 05 20:31:48 crc kubenswrapper[4754]: E0105 20:31:48.628784 4754 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cce9715_2736_4de9_978c_355b04462cc6.slice/crio-e6642792859985d8c56d377f3c920ca0c662f56bf30c45d48e24d890bd975121\": RecentStats: unable to find data in 
memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cce9715_2736_4de9_978c_355b04462cc6.slice\": RecentStats: unable to find data in memory cache]" Jan 05 20:31:49 crc kubenswrapper[4754]: I0105 20:31:49.028575 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.250:8775/\": read tcp 10.217.0.2:50794->10.217.0.250:8775: read: connection reset by peer" Jan 05 20:31:49 crc kubenswrapper[4754]: I0105 20:31:49.028585 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.250:8775/\": read tcp 10.217.0.2:50786->10.217.0.250:8775: read: connection reset by peer" Jan 05 20:31:49 crc kubenswrapper[4754]: I0105 20:31:49.389889 4754 generic.go:334] "Generic (PLEG): container finished" podID="7e603b43-937b-4fb6-8b7b-e4f93e03b069" containerID="fb57b23d3e04615db9718a37d743e238b46483f8db03070ba10b7c9deba339f7" exitCode=0 Jan 05 20:31:49 crc kubenswrapper[4754]: I0105 20:31:49.390153 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7e603b43-937b-4fb6-8b7b-e4f93e03b069","Type":"ContainerDied","Data":"fb57b23d3e04615db9718a37d743e238b46483f8db03070ba10b7c9deba339f7"} Jan 05 20:31:49 crc kubenswrapper[4754]: I0105 20:31:49.392739 4754 generic.go:334] "Generic (PLEG): container finished" podID="261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f" containerID="a97ed84d51f82d64512ed7ceb1f4ba52e6804bd89f4f73f7e98edfe95d07939b" exitCode=0 Jan 05 20:31:49 crc kubenswrapper[4754]: I0105 20:31:49.392825 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f","Type":"ContainerDied","Data":"a97ed84d51f82d64512ed7ceb1f4ba52e6804bd89f4f73f7e98edfe95d07939b"} Jan 05 20:31:49 crc kubenswrapper[4754]: I0105 20:31:49.806576 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 05 20:31:49 crc kubenswrapper[4754]: I0105 20:31:49.813533 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 05 20:31:49 crc kubenswrapper[4754]: I0105 20:31:49.843012 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8j7w\" (UniqueName: \"kubernetes.io/projected/7e603b43-937b-4fb6-8b7b-e4f93e03b069-kube-api-access-q8j7w\") pod \"7e603b43-937b-4fb6-8b7b-e4f93e03b069\" (UID: \"7e603b43-937b-4fb6-8b7b-e4f93e03b069\") " Jan 05 20:31:49 crc kubenswrapper[4754]: I0105 20:31:49.843402 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wdmw\" (UniqueName: \"kubernetes.io/projected/261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f-kube-api-access-7wdmw\") pod \"261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f\" (UID: \"261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f\") " Jan 05 20:31:49 crc kubenswrapper[4754]: I0105 20:31:49.843480 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e603b43-937b-4fb6-8b7b-e4f93e03b069-config-data\") pod \"7e603b43-937b-4fb6-8b7b-e4f93e03b069\" (UID: \"7e603b43-937b-4fb6-8b7b-e4f93e03b069\") " Jan 05 20:31:49 crc kubenswrapper[4754]: I0105 20:31:49.843528 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e603b43-937b-4fb6-8b7b-e4f93e03b069-combined-ca-bundle\") pod \"7e603b43-937b-4fb6-8b7b-e4f93e03b069\" (UID: \"7e603b43-937b-4fb6-8b7b-e4f93e03b069\") " Jan 05 20:31:49 crc kubenswrapper[4754]: I0105 
20:31:49.843597 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f-combined-ca-bundle\") pod \"261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f\" (UID: \"261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f\") " Jan 05 20:31:49 crc kubenswrapper[4754]: I0105 20:31:49.843779 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f-nova-metadata-tls-certs\") pod \"261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f\" (UID: \"261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f\") " Jan 05 20:31:49 crc kubenswrapper[4754]: I0105 20:31:49.843854 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f-config-data\") pod \"261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f\" (UID: \"261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f\") " Jan 05 20:31:49 crc kubenswrapper[4754]: I0105 20:31:49.843919 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f-logs\") pod \"261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f\" (UID: \"261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f\") " Jan 05 20:31:49 crc kubenswrapper[4754]: I0105 20:31:49.845492 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f-logs" (OuterVolumeSpecName: "logs") pod "261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f" (UID: "261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:31:49 crc kubenswrapper[4754]: I0105 20:31:49.875012 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e603b43-937b-4fb6-8b7b-e4f93e03b069-kube-api-access-q8j7w" (OuterVolumeSpecName: "kube-api-access-q8j7w") pod "7e603b43-937b-4fb6-8b7b-e4f93e03b069" (UID: "7e603b43-937b-4fb6-8b7b-e4f93e03b069"). InnerVolumeSpecName "kube-api-access-q8j7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:31:49 crc kubenswrapper[4754]: I0105 20:31:49.876939 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f-kube-api-access-7wdmw" (OuterVolumeSpecName: "kube-api-access-7wdmw") pod "261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f" (UID: "261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f"). InnerVolumeSpecName "kube-api-access-7wdmw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:31:49 crc kubenswrapper[4754]: I0105 20:31:49.914881 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e603b43-937b-4fb6-8b7b-e4f93e03b069-config-data" (OuterVolumeSpecName: "config-data") pod "7e603b43-937b-4fb6-8b7b-e4f93e03b069" (UID: "7e603b43-937b-4fb6-8b7b-e4f93e03b069"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:31:49 crc kubenswrapper[4754]: I0105 20:31:49.935063 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e603b43-937b-4fb6-8b7b-e4f93e03b069-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e603b43-937b-4fb6-8b7b-e4f93e03b069" (UID: "7e603b43-937b-4fb6-8b7b-e4f93e03b069"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:31:49 crc kubenswrapper[4754]: I0105 20:31:49.949543 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wdmw\" (UniqueName: \"kubernetes.io/projected/261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f-kube-api-access-7wdmw\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:49 crc kubenswrapper[4754]: I0105 20:31:49.949590 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e603b43-937b-4fb6-8b7b-e4f93e03b069-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:49 crc kubenswrapper[4754]: I0105 20:31:49.949604 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e603b43-937b-4fb6-8b7b-e4f93e03b069-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:49 crc kubenswrapper[4754]: I0105 20:31:49.949616 4754 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f-logs\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:49 crc kubenswrapper[4754]: I0105 20:31:49.949630 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8j7w\" (UniqueName: \"kubernetes.io/projected/7e603b43-937b-4fb6-8b7b-e4f93e03b069-kube-api-access-q8j7w\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:49 crc kubenswrapper[4754]: I0105 20:31:49.950911 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f" (UID: "261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:31:49 crc kubenswrapper[4754]: I0105 20:31:49.953861 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f-config-data" (OuterVolumeSpecName: "config-data") pod "261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f" (UID: "261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:31:49 crc kubenswrapper[4754]: I0105 20:31:49.984392 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f" (UID: "261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:31:50 crc kubenswrapper[4754]: I0105 20:31:50.051224 4754 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:50 crc kubenswrapper[4754]: I0105 20:31:50.051265 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:50 crc kubenswrapper[4754]: I0105 20:31:50.051279 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:50 crc kubenswrapper[4754]: E0105 20:31:50.081169 4754 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cce9715_2736_4de9_978c_355b04462cc6.slice/crio-e6642792859985d8c56d377f3c920ca0c662f56bf30c45d48e24d890bd975121\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cce9715_2736_4de9_978c_355b04462cc6.slice\": RecentStats: unable to find data in memory cache]" Jan 05 20:31:50 crc kubenswrapper[4754]: I0105 20:31:50.416004 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7e603b43-937b-4fb6-8b7b-e4f93e03b069","Type":"ContainerDied","Data":"f61b57a8c0f6c410272fed6b7ecc0cea8eb1355da16202ad4d7a215ac660ea63"} Jan 05 20:31:50 crc kubenswrapper[4754]: I0105 20:31:50.416440 4754 scope.go:117] "RemoveContainer" containerID="fb57b23d3e04615db9718a37d743e238b46483f8db03070ba10b7c9deba339f7" Jan 05 20:31:50 crc kubenswrapper[4754]: I0105 20:31:50.416081 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 05 20:31:50 crc kubenswrapper[4754]: I0105 20:31:50.420392 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 05 20:31:50 crc kubenswrapper[4754]: I0105 20:31:50.420266 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f","Type":"ContainerDied","Data":"bfc0ef6271cf89851d8d9071fc98d40f1c710ede1325a98892b81ada0b220cf1"} Jan 05 20:31:50 crc kubenswrapper[4754]: I0105 20:31:50.464693 4754 scope.go:117] "RemoveContainer" containerID="a97ed84d51f82d64512ed7ceb1f4ba52e6804bd89f4f73f7e98edfe95d07939b" Jan 05 20:31:50 crc kubenswrapper[4754]: I0105 20:31:50.478727 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 05 20:31:50 crc kubenswrapper[4754]: I0105 20:31:50.488927 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 05 20:31:50 crc kubenswrapper[4754]: I0105 20:31:50.503381 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 20:31:50 crc kubenswrapper[4754]: I0105 20:31:50.514628 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 05 20:31:50 crc kubenswrapper[4754]: E0105 20:31:50.515188 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f" containerName="nova-metadata-metadata" Jan 05 20:31:50 crc kubenswrapper[4754]: I0105 20:31:50.515205 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f" containerName="nova-metadata-metadata" Jan 05 20:31:50 crc kubenswrapper[4754]: E0105 20:31:50.515234 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f" containerName="nova-metadata-log" Jan 05 20:31:50 crc kubenswrapper[4754]: I0105 20:31:50.515240 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f" containerName="nova-metadata-log" Jan 05 20:31:50 crc kubenswrapper[4754]: E0105 
20:31:50.515251 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e603b43-937b-4fb6-8b7b-e4f93e03b069" containerName="nova-scheduler-scheduler" Jan 05 20:31:50 crc kubenswrapper[4754]: I0105 20:31:50.515257 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e603b43-937b-4fb6-8b7b-e4f93e03b069" containerName="nova-scheduler-scheduler" Jan 05 20:31:50 crc kubenswrapper[4754]: I0105 20:31:50.515552 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f" containerName="nova-metadata-log" Jan 05 20:31:50 crc kubenswrapper[4754]: I0105 20:31:50.515577 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e603b43-937b-4fb6-8b7b-e4f93e03b069" containerName="nova-scheduler-scheduler" Jan 05 20:31:50 crc kubenswrapper[4754]: I0105 20:31:50.515593 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f" containerName="nova-metadata-metadata" Jan 05 20:31:50 crc kubenswrapper[4754]: I0105 20:31:50.516411 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 05 20:31:50 crc kubenswrapper[4754]: I0105 20:31:50.524535 4754 scope.go:117] "RemoveContainer" containerID="8b8bd501fac7896c908f66daa74bb70d6c9415d9d095164c7ed8a11a0a69e856" Jan 05 20:31:50 crc kubenswrapper[4754]: I0105 20:31:50.525094 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 05 20:31:50 crc kubenswrapper[4754]: I0105 20:31:50.532051 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 20:31:50 crc kubenswrapper[4754]: I0105 20:31:50.552168 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 05 20:31:50 crc kubenswrapper[4754]: I0105 20:31:50.563023 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 05 20:31:50 crc kubenswrapper[4754]: I0105 20:31:50.565219 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 05 20:31:50 crc kubenswrapper[4754]: I0105 20:31:50.567676 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b342b4dd-f705-4e66-a97c-231b627cb420-config-data\") pod \"nova-scheduler-0\" (UID: \"b342b4dd-f705-4e66-a97c-231b627cb420\") " pod="openstack/nova-scheduler-0" Jan 05 20:31:50 crc kubenswrapper[4754]: I0105 20:31:50.567927 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b342b4dd-f705-4e66-a97c-231b627cb420-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b342b4dd-f705-4e66-a97c-231b627cb420\") " pod="openstack/nova-scheduler-0" Jan 05 20:31:50 crc kubenswrapper[4754]: I0105 20:31:50.568214 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg95c\" (UniqueName: 
\"kubernetes.io/projected/b342b4dd-f705-4e66-a97c-231b627cb420-kube-api-access-sg95c\") pod \"nova-scheduler-0\" (UID: \"b342b4dd-f705-4e66-a97c-231b627cb420\") " pod="openstack/nova-scheduler-0" Jan 05 20:31:50 crc kubenswrapper[4754]: I0105 20:31:50.572150 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 05 20:31:50 crc kubenswrapper[4754]: I0105 20:31:50.572406 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 05 20:31:50 crc kubenswrapper[4754]: I0105 20:31:50.583221 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 20:31:50 crc kubenswrapper[4754]: I0105 20:31:50.671099 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg95c\" (UniqueName: \"kubernetes.io/projected/b342b4dd-f705-4e66-a97c-231b627cb420-kube-api-access-sg95c\") pod \"nova-scheduler-0\" (UID: \"b342b4dd-f705-4e66-a97c-231b627cb420\") " pod="openstack/nova-scheduler-0" Jan 05 20:31:50 crc kubenswrapper[4754]: I0105 20:31:50.671441 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b342b4dd-f705-4e66-a97c-231b627cb420-config-data\") pod \"nova-scheduler-0\" (UID: \"b342b4dd-f705-4e66-a97c-231b627cb420\") " pod="openstack/nova-scheduler-0" Jan 05 20:31:50 crc kubenswrapper[4754]: I0105 20:31:50.671577 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b342b4dd-f705-4e66-a97c-231b627cb420-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b342b4dd-f705-4e66-a97c-231b627cb420\") " pod="openstack/nova-scheduler-0" Jan 05 20:31:50 crc kubenswrapper[4754]: I0105 20:31:50.675567 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b342b4dd-f705-4e66-a97c-231b627cb420-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b342b4dd-f705-4e66-a97c-231b627cb420\") " pod="openstack/nova-scheduler-0" Jan 05 20:31:50 crc kubenswrapper[4754]: I0105 20:31:50.685719 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b342b4dd-f705-4e66-a97c-231b627cb420-config-data\") pod \"nova-scheduler-0\" (UID: \"b342b4dd-f705-4e66-a97c-231b627cb420\") " pod="openstack/nova-scheduler-0" Jan 05 20:31:50 crc kubenswrapper[4754]: I0105 20:31:50.690435 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg95c\" (UniqueName: \"kubernetes.io/projected/b342b4dd-f705-4e66-a97c-231b627cb420-kube-api-access-sg95c\") pod \"nova-scheduler-0\" (UID: \"b342b4dd-f705-4e66-a97c-231b627cb420\") " pod="openstack/nova-scheduler-0" Jan 05 20:31:50 crc kubenswrapper[4754]: I0105 20:31:50.850901 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 05 20:31:51 crc kubenswrapper[4754]: I0105 20:31:51.083895 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7gkm\" (UniqueName: \"kubernetes.io/projected/9ea36e99-8156-4df4-ac09-0e12ab02f0a4-kube-api-access-p7gkm\") pod \"nova-metadata-0\" (UID: \"9ea36e99-8156-4df4-ac09-0e12ab02f0a4\") " pod="openstack/nova-metadata-0" Jan 05 20:31:51 crc kubenswrapper[4754]: I0105 20:31:51.084490 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ea36e99-8156-4df4-ac09-0e12ab02f0a4-config-data\") pod \"nova-metadata-0\" (UID: \"9ea36e99-8156-4df4-ac09-0e12ab02f0a4\") " pod="openstack/nova-metadata-0" Jan 05 20:31:51 crc kubenswrapper[4754]: I0105 20:31:51.084537 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ea36e99-8156-4df4-ac09-0e12ab02f0a4-logs\") pod \"nova-metadata-0\" (UID: \"9ea36e99-8156-4df4-ac09-0e12ab02f0a4\") " pod="openstack/nova-metadata-0" Jan 05 20:31:51 crc kubenswrapper[4754]: I0105 20:31:51.085155 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ea36e99-8156-4df4-ac09-0e12ab02f0a4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9ea36e99-8156-4df4-ac09-0e12ab02f0a4\") " pod="openstack/nova-metadata-0" Jan 05 20:31:51 crc kubenswrapper[4754]: I0105 20:31:51.085492 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ea36e99-8156-4df4-ac09-0e12ab02f0a4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9ea36e99-8156-4df4-ac09-0e12ab02f0a4\") " pod="openstack/nova-metadata-0" Jan 05 20:31:51 crc 
kubenswrapper[4754]: I0105 20:31:51.192948 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ea36e99-8156-4df4-ac09-0e12ab02f0a4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9ea36e99-8156-4df4-ac09-0e12ab02f0a4\") " pod="openstack/nova-metadata-0" Jan 05 20:31:51 crc kubenswrapper[4754]: I0105 20:31:51.193396 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7gkm\" (UniqueName: \"kubernetes.io/projected/9ea36e99-8156-4df4-ac09-0e12ab02f0a4-kube-api-access-p7gkm\") pod \"nova-metadata-0\" (UID: \"9ea36e99-8156-4df4-ac09-0e12ab02f0a4\") " pod="openstack/nova-metadata-0" Jan 05 20:31:51 crc kubenswrapper[4754]: I0105 20:31:51.193694 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ea36e99-8156-4df4-ac09-0e12ab02f0a4-config-data\") pod \"nova-metadata-0\" (UID: \"9ea36e99-8156-4df4-ac09-0e12ab02f0a4\") " pod="openstack/nova-metadata-0" Jan 05 20:31:51 crc kubenswrapper[4754]: I0105 20:31:51.193815 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ea36e99-8156-4df4-ac09-0e12ab02f0a4-logs\") pod \"nova-metadata-0\" (UID: \"9ea36e99-8156-4df4-ac09-0e12ab02f0a4\") " pod="openstack/nova-metadata-0" Jan 05 20:31:51 crc kubenswrapper[4754]: I0105 20:31:51.194366 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ea36e99-8156-4df4-ac09-0e12ab02f0a4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9ea36e99-8156-4df4-ac09-0e12ab02f0a4\") " pod="openstack/nova-metadata-0" Jan 05 20:31:51 crc kubenswrapper[4754]: I0105 20:31:51.197668 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/9ea36e99-8156-4df4-ac09-0e12ab02f0a4-logs\") pod \"nova-metadata-0\" (UID: \"9ea36e99-8156-4df4-ac09-0e12ab02f0a4\") " pod="openstack/nova-metadata-0" Jan 05 20:31:51 crc kubenswrapper[4754]: I0105 20:31:51.203527 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ea36e99-8156-4df4-ac09-0e12ab02f0a4-config-data\") pod \"nova-metadata-0\" (UID: \"9ea36e99-8156-4df4-ac09-0e12ab02f0a4\") " pod="openstack/nova-metadata-0" Jan 05 20:31:51 crc kubenswrapper[4754]: I0105 20:31:51.203795 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ea36e99-8156-4df4-ac09-0e12ab02f0a4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9ea36e99-8156-4df4-ac09-0e12ab02f0a4\") " pod="openstack/nova-metadata-0" Jan 05 20:31:51 crc kubenswrapper[4754]: I0105 20:31:51.216047 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ea36e99-8156-4df4-ac09-0e12ab02f0a4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9ea36e99-8156-4df4-ac09-0e12ab02f0a4\") " pod="openstack/nova-metadata-0" Jan 05 20:31:51 crc kubenswrapper[4754]: I0105 20:31:51.241191 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7gkm\" (UniqueName: \"kubernetes.io/projected/9ea36e99-8156-4df4-ac09-0e12ab02f0a4-kube-api-access-p7gkm\") pod \"nova-metadata-0\" (UID: \"9ea36e99-8156-4df4-ac09-0e12ab02f0a4\") " pod="openstack/nova-metadata-0" Jan 05 20:31:51 crc kubenswrapper[4754]: I0105 20:31:51.474732 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 05 20:31:51 crc kubenswrapper[4754]: I0105 20:31:51.487421 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 05 20:31:51 crc kubenswrapper[4754]: W0105 20:31:51.488938 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb342b4dd_f705_4e66_a97c_231b627cb420.slice/crio-bec653d3b101cfaf33fb258d3fd06fa9e82ee8fbf0f4561ac7c0c8f7909f0101 WatchSource:0}: Error finding container bec653d3b101cfaf33fb258d3fd06fa9e82ee8fbf0f4561ac7c0c8f7909f0101: Status 404 returned error can't find the container with id bec653d3b101cfaf33fb258d3fd06fa9e82ee8fbf0f4561ac7c0c8f7909f0101 Jan 05 20:31:51 crc kubenswrapper[4754]: I0105 20:31:51.605896 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f" path="/var/lib/kubelet/pods/261f8b94-b24b-4bf6-9c91-fdd2e85d1f2f/volumes" Jan 05 20:31:51 crc kubenswrapper[4754]: I0105 20:31:51.606908 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e603b43-937b-4fb6-8b7b-e4f93e03b069" path="/var/lib/kubelet/pods/7e603b43-937b-4fb6-8b7b-e4f93e03b069/volumes" Jan 05 20:31:51 crc kubenswrapper[4754]: I0105 20:31:51.953533 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 20:31:51 crc kubenswrapper[4754]: W0105 20:31:51.961541 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ea36e99_8156_4df4_ac09_0e12ab02f0a4.slice/crio-437228bc95115f5992e8433b287bd793816dd472cd2357c6f3edf419eaa29a38 WatchSource:0}: Error finding container 437228bc95115f5992e8433b287bd793816dd472cd2357c6f3edf419eaa29a38: Status 404 returned error can't find the container with id 437228bc95115f5992e8433b287bd793816dd472cd2357c6f3edf419eaa29a38 Jan 05 20:31:52 crc kubenswrapper[4754]: I0105 20:31:52.453287 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"9ea36e99-8156-4df4-ac09-0e12ab02f0a4","Type":"ContainerStarted","Data":"3d18ece3e4b9408db5677ddf10a83a092869b047b4479478bbfb8f88c330a625"} Jan 05 20:31:52 crc kubenswrapper[4754]: I0105 20:31:52.454774 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9ea36e99-8156-4df4-ac09-0e12ab02f0a4","Type":"ContainerStarted","Data":"878e050103222185bc89311483b200c8d990c80ba44b5a7f991c6358e54246a0"} Jan 05 20:31:52 crc kubenswrapper[4754]: I0105 20:31:52.454792 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9ea36e99-8156-4df4-ac09-0e12ab02f0a4","Type":"ContainerStarted","Data":"437228bc95115f5992e8433b287bd793816dd472cd2357c6f3edf419eaa29a38"} Jan 05 20:31:52 crc kubenswrapper[4754]: I0105 20:31:52.456538 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b342b4dd-f705-4e66-a97c-231b627cb420","Type":"ContainerStarted","Data":"844901c35de226db1f299674c80d948dbcb08844de59722fd09242cf1f0e3869"} Jan 05 20:31:52 crc kubenswrapper[4754]: I0105 20:31:52.456570 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b342b4dd-f705-4e66-a97c-231b627cb420","Type":"ContainerStarted","Data":"bec653d3b101cfaf33fb258d3fd06fa9e82ee8fbf0f4561ac7c0c8f7909f0101"} Jan 05 20:31:52 crc kubenswrapper[4754]: I0105 20:31:52.486850 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.48682737 podStartE2EDuration="2.48682737s" podCreationTimestamp="2026-01-05 20:31:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:31:52.479079757 +0000 UTC m=+1599.188263641" watchObservedRunningTime="2026-01-05 20:31:52.48682737 +0000 UTC m=+1599.196011254" Jan 05 20:31:52 crc kubenswrapper[4754]: I0105 20:31:52.523957 4754 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.523934414 podStartE2EDuration="2.523934414s" podCreationTimestamp="2026-01-05 20:31:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:31:52.506505017 +0000 UTC m=+1599.215688901" watchObservedRunningTime="2026-01-05 20:31:52.523934414 +0000 UTC m=+1599.233118298" Jan 05 20:31:53 crc kubenswrapper[4754]: I0105 20:31:53.609995 4754 scope.go:117] "RemoveContainer" containerID="1ca93b3e456962380f07853ce3cbf0c6bfbaabb69bd3827ae3e68631c1bd1572" Jan 05 20:31:53 crc kubenswrapper[4754]: E0105 20:31:53.610257 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:31:55 crc kubenswrapper[4754]: I0105 20:31:55.851232 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 05 20:31:56 crc kubenswrapper[4754]: I0105 20:31:56.499845 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 05 20:31:56 crc kubenswrapper[4754]: I0105 20:31:56.500218 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 05 20:31:57 crc kubenswrapper[4754]: I0105 20:31:57.051715 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 05 20:31:57 crc kubenswrapper[4754]: I0105 20:31:57.052036 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 05 
20:31:57 crc kubenswrapper[4754]: I0105 20:31:57.532609 4754 generic.go:334] "Generic (PLEG): container finished" podID="312545aa-f35e-4c53-89e7-38ecc1d4827f" containerID="36ebddc927110f121c1bfe084736ed7990d55bc8423b5493ddc688a03333ea56" exitCode=137 Jan 05 20:31:57 crc kubenswrapper[4754]: I0105 20:31:57.532720 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"312545aa-f35e-4c53-89e7-38ecc1d4827f","Type":"ContainerDied","Data":"36ebddc927110f121c1bfe084736ed7990d55bc8423b5493ddc688a03333ea56"} Jan 05 20:31:57 crc kubenswrapper[4754]: I0105 20:31:57.532706 4754 generic.go:334] "Generic (PLEG): container finished" podID="312545aa-f35e-4c53-89e7-38ecc1d4827f" containerID="09d9d571041847d3276c3bc0fcc994ac00235350edef6c0aa418786fe6ef047b" exitCode=137 Jan 05 20:31:57 crc kubenswrapper[4754]: I0105 20:31:57.532775 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"312545aa-f35e-4c53-89e7-38ecc1d4827f","Type":"ContainerDied","Data":"09d9d571041847d3276c3bc0fcc994ac00235350edef6c0aa418786fe6ef047b"} Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.069530 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="82b1f54b-5485-44ad-9d23-ba1243ea1281" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.4:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.069548 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="82b1f54b-5485-44ad-9d23-ba1243ea1281" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.4:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.125076 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.231634 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/312545aa-f35e-4c53-89e7-38ecc1d4827f-config-data\") pod \"312545aa-f35e-4c53-89e7-38ecc1d4827f\" (UID: \"312545aa-f35e-4c53-89e7-38ecc1d4827f\") " Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.231824 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grzbc\" (UniqueName: \"kubernetes.io/projected/312545aa-f35e-4c53-89e7-38ecc1d4827f-kube-api-access-grzbc\") pod \"312545aa-f35e-4c53-89e7-38ecc1d4827f\" (UID: \"312545aa-f35e-4c53-89e7-38ecc1d4827f\") " Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.231852 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/312545aa-f35e-4c53-89e7-38ecc1d4827f-scripts\") pod \"312545aa-f35e-4c53-89e7-38ecc1d4827f\" (UID: \"312545aa-f35e-4c53-89e7-38ecc1d4827f\") " Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.232007 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/312545aa-f35e-4c53-89e7-38ecc1d4827f-combined-ca-bundle\") pod \"312545aa-f35e-4c53-89e7-38ecc1d4827f\" (UID: \"312545aa-f35e-4c53-89e7-38ecc1d4827f\") " Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.238397 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/312545aa-f35e-4c53-89e7-38ecc1d4827f-kube-api-access-grzbc" (OuterVolumeSpecName: "kube-api-access-grzbc") pod "312545aa-f35e-4c53-89e7-38ecc1d4827f" (UID: "312545aa-f35e-4c53-89e7-38ecc1d4827f"). InnerVolumeSpecName "kube-api-access-grzbc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.240416 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/312545aa-f35e-4c53-89e7-38ecc1d4827f-scripts" (OuterVolumeSpecName: "scripts") pod "312545aa-f35e-4c53-89e7-38ecc1d4827f" (UID: "312545aa-f35e-4c53-89e7-38ecc1d4827f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.335690 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grzbc\" (UniqueName: \"kubernetes.io/projected/312545aa-f35e-4c53-89e7-38ecc1d4827f-kube-api-access-grzbc\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.335718 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/312545aa-f35e-4c53-89e7-38ecc1d4827f-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.368249 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/312545aa-f35e-4c53-89e7-38ecc1d4827f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "312545aa-f35e-4c53-89e7-38ecc1d4827f" (UID: "312545aa-f35e-4c53-89e7-38ecc1d4827f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.372409 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/312545aa-f35e-4c53-89e7-38ecc1d4827f-config-data" (OuterVolumeSpecName: "config-data") pod "312545aa-f35e-4c53-89e7-38ecc1d4827f" (UID: "312545aa-f35e-4c53-89e7-38ecc1d4827f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.438376 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/312545aa-f35e-4c53-89e7-38ecc1d4827f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.438413 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/312545aa-f35e-4c53-89e7-38ecc1d4827f-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.548669 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"312545aa-f35e-4c53-89e7-38ecc1d4827f","Type":"ContainerDied","Data":"13b154bfffbdcc998d21905ecc9f94338bf28aa9477736ca26e96fb960dfab19"} Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.548754 4754 scope.go:117] "RemoveContainer" containerID="36ebddc927110f121c1bfe084736ed7990d55bc8423b5493ddc688a03333ea56" Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.548752 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.586719 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.593038 4754 scope.go:117] "RemoveContainer" containerID="09d9d571041847d3276c3bc0fcc994ac00235350edef6c0aa418786fe6ef047b" Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.598380 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.618906 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 05 20:31:58 crc kubenswrapper[4754]: E0105 20:31:58.619445 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="312545aa-f35e-4c53-89e7-38ecc1d4827f" containerName="aodh-evaluator" Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.619462 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="312545aa-f35e-4c53-89e7-38ecc1d4827f" containerName="aodh-evaluator" Jan 05 20:31:58 crc kubenswrapper[4754]: E0105 20:31:58.619469 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="312545aa-f35e-4c53-89e7-38ecc1d4827f" containerName="aodh-listener" Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.619475 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="312545aa-f35e-4c53-89e7-38ecc1d4827f" containerName="aodh-listener" Jan 05 20:31:58 crc kubenswrapper[4754]: E0105 20:31:58.619492 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="312545aa-f35e-4c53-89e7-38ecc1d4827f" containerName="aodh-api" Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.619498 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="312545aa-f35e-4c53-89e7-38ecc1d4827f" containerName="aodh-api" Jan 05 20:31:58 crc kubenswrapper[4754]: E0105 20:31:58.619532 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="312545aa-f35e-4c53-89e7-38ecc1d4827f" 
containerName="aodh-notifier" Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.619538 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="312545aa-f35e-4c53-89e7-38ecc1d4827f" containerName="aodh-notifier" Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.619846 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="312545aa-f35e-4c53-89e7-38ecc1d4827f" containerName="aodh-api" Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.619862 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="312545aa-f35e-4c53-89e7-38ecc1d4827f" containerName="aodh-notifier" Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.619900 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="312545aa-f35e-4c53-89e7-38ecc1d4827f" containerName="aodh-evaluator" Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.619923 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="312545aa-f35e-4c53-89e7-38ecc1d4827f" containerName="aodh-listener" Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.622818 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.629778 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.629878 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.630008 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-kq9pz" Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.630748 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.630872 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.660984 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.662408 4754 scope.go:117] "RemoveContainer" containerID="51ff30be97a60df4f3bc888a61938ef26c7a5c98a3b1c62a880d7a71634afd3e" Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.696279 4754 scope.go:117] "RemoveContainer" containerID="8fa3d8567631906e1958b040d2b158b3408cd04c47864372f36649e48dd49e3e" Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.749761 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bc52a8d-ecb8-4b19-8275-b739d23f7c43-combined-ca-bundle\") pod \"aodh-0\" (UID: \"4bc52a8d-ecb8-4b19-8275-b739d23f7c43\") " pod="openstack/aodh-0" Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.750860 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88qlq\" (UniqueName: 
\"kubernetes.io/projected/4bc52a8d-ecb8-4b19-8275-b739d23f7c43-kube-api-access-88qlq\") pod \"aodh-0\" (UID: \"4bc52a8d-ecb8-4b19-8275-b739d23f7c43\") " pod="openstack/aodh-0" Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.750914 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bc52a8d-ecb8-4b19-8275-b739d23f7c43-internal-tls-certs\") pod \"aodh-0\" (UID: \"4bc52a8d-ecb8-4b19-8275-b739d23f7c43\") " pod="openstack/aodh-0" Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.751127 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bc52a8d-ecb8-4b19-8275-b739d23f7c43-scripts\") pod \"aodh-0\" (UID: \"4bc52a8d-ecb8-4b19-8275-b739d23f7c43\") " pod="openstack/aodh-0" Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.751204 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bc52a8d-ecb8-4b19-8275-b739d23f7c43-public-tls-certs\") pod \"aodh-0\" (UID: \"4bc52a8d-ecb8-4b19-8275-b739d23f7c43\") " pod="openstack/aodh-0" Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.751254 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bc52a8d-ecb8-4b19-8275-b739d23f7c43-config-data\") pod \"aodh-0\" (UID: \"4bc52a8d-ecb8-4b19-8275-b739d23f7c43\") " pod="openstack/aodh-0" Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.855726 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bc52a8d-ecb8-4b19-8275-b739d23f7c43-combined-ca-bundle\") pod \"aodh-0\" (UID: \"4bc52a8d-ecb8-4b19-8275-b739d23f7c43\") " pod="openstack/aodh-0" Jan 05 20:31:58 crc 
kubenswrapper[4754]: I0105 20:31:58.855811 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88qlq\" (UniqueName: \"kubernetes.io/projected/4bc52a8d-ecb8-4b19-8275-b739d23f7c43-kube-api-access-88qlq\") pod \"aodh-0\" (UID: \"4bc52a8d-ecb8-4b19-8275-b739d23f7c43\") " pod="openstack/aodh-0" Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.855834 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bc52a8d-ecb8-4b19-8275-b739d23f7c43-internal-tls-certs\") pod \"aodh-0\" (UID: \"4bc52a8d-ecb8-4b19-8275-b739d23f7c43\") " pod="openstack/aodh-0" Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.855898 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bc52a8d-ecb8-4b19-8275-b739d23f7c43-scripts\") pod \"aodh-0\" (UID: \"4bc52a8d-ecb8-4b19-8275-b739d23f7c43\") " pod="openstack/aodh-0" Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.855929 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bc52a8d-ecb8-4b19-8275-b739d23f7c43-public-tls-certs\") pod \"aodh-0\" (UID: \"4bc52a8d-ecb8-4b19-8275-b739d23f7c43\") " pod="openstack/aodh-0" Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.855948 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bc52a8d-ecb8-4b19-8275-b739d23f7c43-config-data\") pod \"aodh-0\" (UID: \"4bc52a8d-ecb8-4b19-8275-b739d23f7c43\") " pod="openstack/aodh-0" Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.860585 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bc52a8d-ecb8-4b19-8275-b739d23f7c43-scripts\") pod \"aodh-0\" (UID: \"4bc52a8d-ecb8-4b19-8275-b739d23f7c43\") " 
pod="openstack/aodh-0" Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.861136 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bc52a8d-ecb8-4b19-8275-b739d23f7c43-public-tls-certs\") pod \"aodh-0\" (UID: \"4bc52a8d-ecb8-4b19-8275-b739d23f7c43\") " pod="openstack/aodh-0" Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.861867 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bc52a8d-ecb8-4b19-8275-b739d23f7c43-config-data\") pod \"aodh-0\" (UID: \"4bc52a8d-ecb8-4b19-8275-b739d23f7c43\") " pod="openstack/aodh-0" Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.865222 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bc52a8d-ecb8-4b19-8275-b739d23f7c43-internal-tls-certs\") pod \"aodh-0\" (UID: \"4bc52a8d-ecb8-4b19-8275-b739d23f7c43\") " pod="openstack/aodh-0" Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.868405 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bc52a8d-ecb8-4b19-8275-b739d23f7c43-combined-ca-bundle\") pod \"aodh-0\" (UID: \"4bc52a8d-ecb8-4b19-8275-b739d23f7c43\") " pod="openstack/aodh-0" Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.878820 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88qlq\" (UniqueName: \"kubernetes.io/projected/4bc52a8d-ecb8-4b19-8275-b739d23f7c43-kube-api-access-88qlq\") pod \"aodh-0\" (UID: \"4bc52a8d-ecb8-4b19-8275-b739d23f7c43\") " pod="openstack/aodh-0" Jan 05 20:31:58 crc kubenswrapper[4754]: I0105 20:31:58.959038 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 05 20:31:59 crc kubenswrapper[4754]: W0105 20:31:59.499567 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4bc52a8d_ecb8_4b19_8275_b739d23f7c43.slice/crio-2408eefbe1c039863d9d322fe18d70cdeca0569bcbd35af944c8bd27588b4182 WatchSource:0}: Error finding container 2408eefbe1c039863d9d322fe18d70cdeca0569bcbd35af944c8bd27588b4182: Status 404 returned error can't find the container with id 2408eefbe1c039863d9d322fe18d70cdeca0569bcbd35af944c8bd27588b4182 Jan 05 20:31:59 crc kubenswrapper[4754]: I0105 20:31:59.501378 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 05 20:31:59 crc kubenswrapper[4754]: I0105 20:31:59.560686 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"4bc52a8d-ecb8-4b19-8275-b739d23f7c43","Type":"ContainerStarted","Data":"2408eefbe1c039863d9d322fe18d70cdeca0569bcbd35af944c8bd27588b4182"} Jan 05 20:31:59 crc kubenswrapper[4754]: I0105 20:31:59.600747 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="312545aa-f35e-4c53-89e7-38ecc1d4827f" path="/var/lib/kubelet/pods/312545aa-f35e-4c53-89e7-38ecc1d4827f/volumes" Jan 05 20:32:00 crc kubenswrapper[4754]: E0105 20:32:00.450684 4754 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cce9715_2736_4de9_978c_355b04462cc6.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cce9715_2736_4de9_978c_355b04462cc6.slice/crio-e6642792859985d8c56d377f3c920ca0c662f56bf30c45d48e24d890bd975121\": RecentStats: unable to find data in memory cache]" Jan 05 20:32:00 crc kubenswrapper[4754]: I0105 20:32:00.851171 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-scheduler-0" Jan 05 20:32:00 crc kubenswrapper[4754]: I0105 20:32:00.907117 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 05 20:32:01 crc kubenswrapper[4754]: I0105 20:32:01.512894 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 05 20:32:01 crc kubenswrapper[4754]: I0105 20:32:01.513220 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 05 20:32:01 crc kubenswrapper[4754]: I0105 20:32:01.604233 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"4bc52a8d-ecb8-4b19-8275-b739d23f7c43","Type":"ContainerStarted","Data":"688b67dfc8e2f261e0619603fb90dcd07166c19636141c334541ae6fe35066e9"} Jan 05 20:32:01 crc kubenswrapper[4754]: I0105 20:32:01.635623 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 05 20:32:02 crc kubenswrapper[4754]: I0105 20:32:02.536271 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="9ea36e99-8156-4df4-ac09-0e12ab02f0a4" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.6:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 05 20:32:02 crc kubenswrapper[4754]: I0105 20:32:02.536327 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="9ea36e99-8156-4df4-ac09-0e12ab02f0a4" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.6:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 05 20:32:02 crc kubenswrapper[4754]: I0105 20:32:02.624391 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"4bc52a8d-ecb8-4b19-8275-b739d23f7c43","Type":"ContainerStarted","Data":"930f4358c981d704b636d8541e90ab1ed3311be36fdbdada64858c06452062f7"} Jan 05 20:32:03 crc kubenswrapper[4754]: E0105 20:32:03.633512 4754 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cce9715_2736_4de9_978c_355b04462cc6.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cce9715_2736_4de9_978c_355b04462cc6.slice/crio-e6642792859985d8c56d377f3c920ca0c662f56bf30c45d48e24d890bd975121\": RecentStats: unable to find data in memory cache]" Jan 05 20:32:03 crc kubenswrapper[4754]: I0105 20:32:03.646055 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"4bc52a8d-ecb8-4b19-8275-b739d23f7c43","Type":"ContainerStarted","Data":"49d92a3a7d6f14b29736eabee5285e9f50951e020dd080b1f525c989c147d5bc"} Jan 05 20:32:03 crc kubenswrapper[4754]: I0105 20:32:03.646099 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"4bc52a8d-ecb8-4b19-8275-b739d23f7c43","Type":"ContainerStarted","Data":"8dde50274cf5ef44f7eac7c1cab4e66e5a8dfb765a5c072f69370e255c686c5f"} Jan 05 20:32:03 crc kubenswrapper[4754]: I0105 20:32:03.698947 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.017176388 podStartE2EDuration="5.698918571s" podCreationTimestamp="2026-01-05 20:31:58 +0000 UTC" firstStartedPulling="2026-01-05 20:31:59.50236656 +0000 UTC m=+1606.211550434" lastFinishedPulling="2026-01-05 20:32:03.184108743 +0000 UTC m=+1609.893292617" observedRunningTime="2026-01-05 20:32:03.671170203 +0000 UTC m=+1610.380354097" watchObservedRunningTime="2026-01-05 20:32:03.698918571 +0000 UTC m=+1610.408102455" Jan 05 20:32:05 crc kubenswrapper[4754]: I0105 20:32:05.617950 4754 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 05 20:32:07 crc kubenswrapper[4754]: I0105 20:32:07.064235 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 05 20:32:07 crc kubenswrapper[4754]: I0105 20:32:07.066328 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 05 20:32:07 crc kubenswrapper[4754]: I0105 20:32:07.067217 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 05 20:32:07 crc kubenswrapper[4754]: I0105 20:32:07.078167 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 05 20:32:07 crc kubenswrapper[4754]: I0105 20:32:07.589576 4754 scope.go:117] "RemoveContainer" containerID="1ca93b3e456962380f07853ce3cbf0c6bfbaabb69bd3827ae3e68631c1bd1572" Jan 05 20:32:07 crc kubenswrapper[4754]: E0105 20:32:07.590280 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:32:07 crc kubenswrapper[4754]: I0105 20:32:07.717004 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 05 20:32:07 crc kubenswrapper[4754]: I0105 20:32:07.733355 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 05 20:32:10 crc kubenswrapper[4754]: I0105 20:32:10.357799 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 05 20:32:10 crc kubenswrapper[4754]: I0105 20:32:10.360713 4754 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack/kube-state-metrics-0" podUID="b658733a-9c58-4287-82a3-b49d21e53e53" containerName="kube-state-metrics" containerID="cri-o://e305d62840e474ad18fcd232bf421ab1fc0fb1ecb254e282bf36129331994891" gracePeriod=30 Jan 05 20:32:10 crc kubenswrapper[4754]: I0105 20:32:10.434642 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 05 20:32:10 crc kubenswrapper[4754]: I0105 20:32:10.434853 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="3eb89248-973b-47ce-b504-b9523a605a4a" containerName="mysqld-exporter" containerID="cri-o://91ff8bfb81937751315924f656413abd633b9db1c54d278661fbd3a25fcdf862" gracePeriod=30 Jan 05 20:32:10 crc kubenswrapper[4754]: E0105 20:32:10.779111 4754 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cce9715_2736_4de9_978c_355b04462cc6.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3eb89248_973b_47ce_b504_b9523a605a4a.slice/crio-conmon-91ff8bfb81937751315924f656413abd633b9db1c54d278661fbd3a25fcdf862.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cce9715_2736_4de9_978c_355b04462cc6.slice/crio-e6642792859985d8c56d377f3c920ca0c662f56bf30c45d48e24d890bd975121\": RecentStats: unable to find data in memory cache]" Jan 05 20:32:10 crc kubenswrapper[4754]: I0105 20:32:10.791930 4754 generic.go:334] "Generic (PLEG): container finished" podID="b658733a-9c58-4287-82a3-b49d21e53e53" containerID="e305d62840e474ad18fcd232bf421ab1fc0fb1ecb254e282bf36129331994891" exitCode=2 Jan 05 20:32:10 crc kubenswrapper[4754]: I0105 20:32:10.792016 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"b658733a-9c58-4287-82a3-b49d21e53e53","Type":"ContainerDied","Data":"e305d62840e474ad18fcd232bf421ab1fc0fb1ecb254e282bf36129331994891"} Jan 05 20:32:10 crc kubenswrapper[4754]: I0105 20:32:10.795095 4754 generic.go:334] "Generic (PLEG): container finished" podID="3eb89248-973b-47ce-b504-b9523a605a4a" containerID="91ff8bfb81937751315924f656413abd633b9db1c54d278661fbd3a25fcdf862" exitCode=2 Jan 05 20:32:10 crc kubenswrapper[4754]: I0105 20:32:10.795126 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"3eb89248-973b-47ce-b504-b9523a605a4a","Type":"ContainerDied","Data":"91ff8bfb81937751315924f656413abd633b9db1c54d278661fbd3a25fcdf862"} Jan 05 20:32:10 crc kubenswrapper[4754]: I0105 20:32:10.921826 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 05 20:32:11 crc kubenswrapper[4754]: I0105 20:32:11.024640 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwwqz\" (UniqueName: \"kubernetes.io/projected/b658733a-9c58-4287-82a3-b49d21e53e53-kube-api-access-lwwqz\") pod \"b658733a-9c58-4287-82a3-b49d21e53e53\" (UID: \"b658733a-9c58-4287-82a3-b49d21e53e53\") " Jan 05 20:32:11 crc kubenswrapper[4754]: I0105 20:32:11.030476 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b658733a-9c58-4287-82a3-b49d21e53e53-kube-api-access-lwwqz" (OuterVolumeSpecName: "kube-api-access-lwwqz") pod "b658733a-9c58-4287-82a3-b49d21e53e53" (UID: "b658733a-9c58-4287-82a3-b49d21e53e53"). InnerVolumeSpecName "kube-api-access-lwwqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:32:11 crc kubenswrapper[4754]: I0105 20:32:11.064071 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 05 20:32:11 crc kubenswrapper[4754]: I0105 20:32:11.128036 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwwqz\" (UniqueName: \"kubernetes.io/projected/b658733a-9c58-4287-82a3-b49d21e53e53-kube-api-access-lwwqz\") on node \"crc\" DevicePath \"\"" Jan 05 20:32:11 crc kubenswrapper[4754]: I0105 20:32:11.229523 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eb89248-973b-47ce-b504-b9523a605a4a-combined-ca-bundle\") pod \"3eb89248-973b-47ce-b504-b9523a605a4a\" (UID: \"3eb89248-973b-47ce-b504-b9523a605a4a\") " Jan 05 20:32:11 crc kubenswrapper[4754]: I0105 20:32:11.229633 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eb89248-973b-47ce-b504-b9523a605a4a-config-data\") pod \"3eb89248-973b-47ce-b504-b9523a605a4a\" (UID: \"3eb89248-973b-47ce-b504-b9523a605a4a\") " Jan 05 20:32:11 crc kubenswrapper[4754]: I0105 20:32:11.229723 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rlsc\" (UniqueName: \"kubernetes.io/projected/3eb89248-973b-47ce-b504-b9523a605a4a-kube-api-access-8rlsc\") pod \"3eb89248-973b-47ce-b504-b9523a605a4a\" (UID: \"3eb89248-973b-47ce-b504-b9523a605a4a\") " Jan 05 20:32:11 crc kubenswrapper[4754]: I0105 20:32:11.233215 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3eb89248-973b-47ce-b504-b9523a605a4a-kube-api-access-8rlsc" (OuterVolumeSpecName: "kube-api-access-8rlsc") pod "3eb89248-973b-47ce-b504-b9523a605a4a" (UID: "3eb89248-973b-47ce-b504-b9523a605a4a"). InnerVolumeSpecName "kube-api-access-8rlsc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:32:11 crc kubenswrapper[4754]: I0105 20:32:11.267240 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3eb89248-973b-47ce-b504-b9523a605a4a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3eb89248-973b-47ce-b504-b9523a605a4a" (UID: "3eb89248-973b-47ce-b504-b9523a605a4a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:32:11 crc kubenswrapper[4754]: I0105 20:32:11.326926 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3eb89248-973b-47ce-b504-b9523a605a4a-config-data" (OuterVolumeSpecName: "config-data") pod "3eb89248-973b-47ce-b504-b9523a605a4a" (UID: "3eb89248-973b-47ce-b504-b9523a605a4a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:32:11 crc kubenswrapper[4754]: I0105 20:32:11.333105 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eb89248-973b-47ce-b504-b9523a605a4a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:32:11 crc kubenswrapper[4754]: I0105 20:32:11.333144 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eb89248-973b-47ce-b504-b9523a605a4a-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 20:32:11 crc kubenswrapper[4754]: I0105 20:32:11.333155 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rlsc\" (UniqueName: \"kubernetes.io/projected/3eb89248-973b-47ce-b504-b9523a605a4a-kube-api-access-8rlsc\") on node \"crc\" DevicePath \"\"" Jan 05 20:32:11 crc kubenswrapper[4754]: I0105 20:32:11.495523 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 05 20:32:11 crc kubenswrapper[4754]: I0105 20:32:11.501866 4754 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 05 20:32:11 crc kubenswrapper[4754]: I0105 20:32:11.502555 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 05 20:32:11 crc kubenswrapper[4754]: I0105 20:32:11.807087 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b658733a-9c58-4287-82a3-b49d21e53e53","Type":"ContainerDied","Data":"db47e05fda33d52656ff10879ce95d4215ab265d0395830c2ecae1bf270af35a"} Jan 05 20:32:11 crc kubenswrapper[4754]: I0105 20:32:11.807176 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 05 20:32:11 crc kubenswrapper[4754]: I0105 20:32:11.807204 4754 scope.go:117] "RemoveContainer" containerID="e305d62840e474ad18fcd232bf421ab1fc0fb1ecb254e282bf36129331994891" Jan 05 20:32:11 crc kubenswrapper[4754]: I0105 20:32:11.809092 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"3eb89248-973b-47ce-b504-b9523a605a4a","Type":"ContainerDied","Data":"f6d9d5aebb205e1eee0d1bde7cd84984ac45d958032888a1f89f8d055de56233"} Jan 05 20:32:11 crc kubenswrapper[4754]: I0105 20:32:11.809141 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 05 20:32:11 crc kubenswrapper[4754]: I0105 20:32:11.818690 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 05 20:32:11 crc kubenswrapper[4754]: I0105 20:32:11.837881 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 05 20:32:11 crc kubenswrapper[4754]: I0105 20:32:11.838070 4754 scope.go:117] "RemoveContainer" containerID="91ff8bfb81937751315924f656413abd633b9db1c54d278661fbd3a25fcdf862" Jan 05 20:32:11 crc kubenswrapper[4754]: I0105 20:32:11.869981 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 05 20:32:11 crc kubenswrapper[4754]: I0105 20:32:11.886972 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 05 20:32:11 crc kubenswrapper[4754]: I0105 20:32:11.904603 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 05 20:32:11 crc kubenswrapper[4754]: I0105 20:32:11.918738 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 05 20:32:11 crc kubenswrapper[4754]: E0105 20:32:11.919411 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eb89248-973b-47ce-b504-b9523a605a4a" containerName="mysqld-exporter" Jan 05 20:32:11 crc kubenswrapper[4754]: I0105 20:32:11.919436 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eb89248-973b-47ce-b504-b9523a605a4a" containerName="mysqld-exporter" Jan 05 20:32:11 crc kubenswrapper[4754]: E0105 20:32:11.919454 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b658733a-9c58-4287-82a3-b49d21e53e53" containerName="kube-state-metrics" Jan 05 20:32:11 crc kubenswrapper[4754]: I0105 20:32:11.919462 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="b658733a-9c58-4287-82a3-b49d21e53e53" containerName="kube-state-metrics" Jan 05 20:32:11 crc 
kubenswrapper[4754]: I0105 20:32:11.919713 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eb89248-973b-47ce-b504-b9523a605a4a" containerName="mysqld-exporter" Jan 05 20:32:11 crc kubenswrapper[4754]: I0105 20:32:11.919740 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="b658733a-9c58-4287-82a3-b49d21e53e53" containerName="kube-state-metrics" Jan 05 20:32:11 crc kubenswrapper[4754]: I0105 20:32:11.920793 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 05 20:32:11 crc kubenswrapper[4754]: I0105 20:32:11.932225 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 05 20:32:11 crc kubenswrapper[4754]: I0105 20:32:11.932485 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 05 20:32:11 crc kubenswrapper[4754]: I0105 20:32:11.955557 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Jan 05 20:32:11 crc kubenswrapper[4754]: I0105 20:32:11.957413 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 05 20:32:11 crc kubenswrapper[4754]: I0105 20:32:11.962874 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Jan 05 20:32:11 crc kubenswrapper[4754]: I0105 20:32:11.963045 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Jan 05 20:32:11 crc kubenswrapper[4754]: I0105 20:32:11.970935 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 05 20:32:12 crc kubenswrapper[4754]: I0105 20:32:12.008429 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 05 20:32:12 crc kubenswrapper[4754]: I0105 20:32:12.053217 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffdd4590-0498-4083-996d-75035d8fba10-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ffdd4590-0498-4083-996d-75035d8fba10\") " pod="openstack/kube-state-metrics-0" Jan 05 20:32:12 crc kubenswrapper[4754]: I0105 20:32:12.053299 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ffdd4590-0498-4083-996d-75035d8fba10-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ffdd4590-0498-4083-996d-75035d8fba10\") " pod="openstack/kube-state-metrics-0" Jan 05 20:32:12 crc kubenswrapper[4754]: I0105 20:32:12.053358 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ca22987-f7b1-429f-bc45-9eac9f58a85d-config-data\") pod \"mysqld-exporter-0\" (UID: \"8ca22987-f7b1-429f-bc45-9eac9f58a85d\") " pod="openstack/mysqld-exporter-0" Jan 05 20:32:12 crc kubenswrapper[4754]: I0105 20:32:12.053405 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98h5q\" (UniqueName: \"kubernetes.io/projected/ffdd4590-0498-4083-996d-75035d8fba10-kube-api-access-98h5q\") pod \"kube-state-metrics-0\" (UID: \"ffdd4590-0498-4083-996d-75035d8fba10\") " pod="openstack/kube-state-metrics-0" Jan 05 20:32:12 crc kubenswrapper[4754]: I0105 20:32:12.053437 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ca22987-f7b1-429f-bc45-9eac9f58a85d-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"8ca22987-f7b1-429f-bc45-9eac9f58a85d\") " pod="openstack/mysqld-exporter-0" Jan 05 20:32:12 crc kubenswrapper[4754]: I0105 20:32:12.053453 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffdd4590-0498-4083-996d-75035d8fba10-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ffdd4590-0498-4083-996d-75035d8fba10\") " pod="openstack/kube-state-metrics-0" Jan 05 20:32:12 crc kubenswrapper[4754]: I0105 20:32:12.053552 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg2bh\" (UniqueName: \"kubernetes.io/projected/8ca22987-f7b1-429f-bc45-9eac9f58a85d-kube-api-access-kg2bh\") pod \"mysqld-exporter-0\" (UID: \"8ca22987-f7b1-429f-bc45-9eac9f58a85d\") " pod="openstack/mysqld-exporter-0" Jan 05 20:32:12 crc kubenswrapper[4754]: I0105 20:32:12.053572 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ca22987-f7b1-429f-bc45-9eac9f58a85d-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"8ca22987-f7b1-429f-bc45-9eac9f58a85d\") " pod="openstack/mysqld-exporter-0" Jan 05 20:32:12 crc kubenswrapper[4754]: I0105 20:32:12.156352 4754 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffdd4590-0498-4083-996d-75035d8fba10-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ffdd4590-0498-4083-996d-75035d8fba10\") " pod="openstack/kube-state-metrics-0" Jan 05 20:32:12 crc kubenswrapper[4754]: I0105 20:32:12.156419 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ffdd4590-0498-4083-996d-75035d8fba10-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ffdd4590-0498-4083-996d-75035d8fba10\") " pod="openstack/kube-state-metrics-0" Jan 05 20:32:12 crc kubenswrapper[4754]: I0105 20:32:12.156445 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ca22987-f7b1-429f-bc45-9eac9f58a85d-config-data\") pod \"mysqld-exporter-0\" (UID: \"8ca22987-f7b1-429f-bc45-9eac9f58a85d\") " pod="openstack/mysqld-exporter-0" Jan 05 20:32:12 crc kubenswrapper[4754]: I0105 20:32:12.156474 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98h5q\" (UniqueName: \"kubernetes.io/projected/ffdd4590-0498-4083-996d-75035d8fba10-kube-api-access-98h5q\") pod \"kube-state-metrics-0\" (UID: \"ffdd4590-0498-4083-996d-75035d8fba10\") " pod="openstack/kube-state-metrics-0" Jan 05 20:32:12 crc kubenswrapper[4754]: I0105 20:32:12.156497 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ca22987-f7b1-429f-bc45-9eac9f58a85d-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"8ca22987-f7b1-429f-bc45-9eac9f58a85d\") " pod="openstack/mysqld-exporter-0" Jan 05 20:32:12 crc kubenswrapper[4754]: I0105 20:32:12.156515 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffdd4590-0498-4083-996d-75035d8fba10-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ffdd4590-0498-4083-996d-75035d8fba10\") " pod="openstack/kube-state-metrics-0" Jan 05 20:32:12 crc kubenswrapper[4754]: I0105 20:32:12.156591 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kg2bh\" (UniqueName: \"kubernetes.io/projected/8ca22987-f7b1-429f-bc45-9eac9f58a85d-kube-api-access-kg2bh\") pod \"mysqld-exporter-0\" (UID: \"8ca22987-f7b1-429f-bc45-9eac9f58a85d\") " pod="openstack/mysqld-exporter-0" Jan 05 20:32:12 crc kubenswrapper[4754]: I0105 20:32:12.156607 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ca22987-f7b1-429f-bc45-9eac9f58a85d-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"8ca22987-f7b1-429f-bc45-9eac9f58a85d\") " pod="openstack/mysqld-exporter-0" Jan 05 20:32:12 crc kubenswrapper[4754]: I0105 20:32:12.161230 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ffdd4590-0498-4083-996d-75035d8fba10-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ffdd4590-0498-4083-996d-75035d8fba10\") " pod="openstack/kube-state-metrics-0" Jan 05 20:32:12 crc kubenswrapper[4754]: I0105 20:32:12.162258 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffdd4590-0498-4083-996d-75035d8fba10-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ffdd4590-0498-4083-996d-75035d8fba10\") " pod="openstack/kube-state-metrics-0" Jan 05 20:32:12 crc kubenswrapper[4754]: I0105 20:32:12.162420 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ffdd4590-0498-4083-996d-75035d8fba10-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ffdd4590-0498-4083-996d-75035d8fba10\") " pod="openstack/kube-state-metrics-0" Jan 05 20:32:12 crc kubenswrapper[4754]: I0105 20:32:12.163371 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ca22987-f7b1-429f-bc45-9eac9f58a85d-config-data\") pod \"mysqld-exporter-0\" (UID: \"8ca22987-f7b1-429f-bc45-9eac9f58a85d\") " pod="openstack/mysqld-exporter-0" Jan 05 20:32:12 crc kubenswrapper[4754]: I0105 20:32:12.164107 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ca22987-f7b1-429f-bc45-9eac9f58a85d-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"8ca22987-f7b1-429f-bc45-9eac9f58a85d\") " pod="openstack/mysqld-exporter-0" Jan 05 20:32:12 crc kubenswrapper[4754]: I0105 20:32:12.166141 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ca22987-f7b1-429f-bc45-9eac9f58a85d-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"8ca22987-f7b1-429f-bc45-9eac9f58a85d\") " pod="openstack/mysqld-exporter-0" Jan 05 20:32:12 crc kubenswrapper[4754]: I0105 20:32:12.180207 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98h5q\" (UniqueName: \"kubernetes.io/projected/ffdd4590-0498-4083-996d-75035d8fba10-kube-api-access-98h5q\") pod \"kube-state-metrics-0\" (UID: \"ffdd4590-0498-4083-996d-75035d8fba10\") " pod="openstack/kube-state-metrics-0" Jan 05 20:32:12 crc kubenswrapper[4754]: I0105 20:32:12.182535 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kg2bh\" (UniqueName: \"kubernetes.io/projected/8ca22987-f7b1-429f-bc45-9eac9f58a85d-kube-api-access-kg2bh\") pod \"mysqld-exporter-0\" (UID: 
\"8ca22987-f7b1-429f-bc45-9eac9f58a85d\") " pod="openstack/mysqld-exporter-0" Jan 05 20:32:12 crc kubenswrapper[4754]: I0105 20:32:12.251551 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 05 20:32:12 crc kubenswrapper[4754]: I0105 20:32:12.292664 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 05 20:32:12 crc kubenswrapper[4754]: I0105 20:32:12.599472 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:32:12 crc kubenswrapper[4754]: I0105 20:32:12.600150 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9914fc23-56e8-4362-afcb-aee0b969c368" containerName="ceilometer-central-agent" containerID="cri-o://b599f552402a1866b8c108a103db0db288cef351ddbd22faabf8f60661e1d9b3" gracePeriod=30 Jan 05 20:32:12 crc kubenswrapper[4754]: I0105 20:32:12.600278 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9914fc23-56e8-4362-afcb-aee0b969c368" containerName="proxy-httpd" containerID="cri-o://e48ce5190f17e3c4d72260e1204b69aac27d76ac18f4f73846ef63b4854e048b" gracePeriod=30 Jan 05 20:32:12 crc kubenswrapper[4754]: I0105 20:32:12.600348 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9914fc23-56e8-4362-afcb-aee0b969c368" containerName="sg-core" containerID="cri-o://c6c15d4094f9e41b22895d1152620e6d07f400fd3717b55ccf45309c78218147" gracePeriod=30 Jan 05 20:32:12 crc kubenswrapper[4754]: I0105 20:32:12.600388 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9914fc23-56e8-4362-afcb-aee0b969c368" containerName="ceilometer-notification-agent" containerID="cri-o://8d119b89328c6bea6dff40d438308a2aa69a5b9c4bd6f4f74acfefa75b1ad996" gracePeriod=30 Jan 05 20:32:12 crc 
kubenswrapper[4754]: I0105 20:32:12.769236 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 05 20:32:12 crc kubenswrapper[4754]: I0105 20:32:12.821758 4754 generic.go:334] "Generic (PLEG): container finished" podID="9914fc23-56e8-4362-afcb-aee0b969c368" containerID="e48ce5190f17e3c4d72260e1204b69aac27d76ac18f4f73846ef63b4854e048b" exitCode=0 Jan 05 20:32:12 crc kubenswrapper[4754]: I0105 20:32:12.821808 4754 generic.go:334] "Generic (PLEG): container finished" podID="9914fc23-56e8-4362-afcb-aee0b969c368" containerID="c6c15d4094f9e41b22895d1152620e6d07f400fd3717b55ccf45309c78218147" exitCode=2 Jan 05 20:32:12 crc kubenswrapper[4754]: I0105 20:32:12.821853 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9914fc23-56e8-4362-afcb-aee0b969c368","Type":"ContainerDied","Data":"e48ce5190f17e3c4d72260e1204b69aac27d76ac18f4f73846ef63b4854e048b"} Jan 05 20:32:12 crc kubenswrapper[4754]: I0105 20:32:12.821908 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9914fc23-56e8-4362-afcb-aee0b969c368","Type":"ContainerDied","Data":"c6c15d4094f9e41b22895d1152620e6d07f400fd3717b55ccf45309c78218147"} Jan 05 20:32:12 crc kubenswrapper[4754]: I0105 20:32:12.827099 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ffdd4590-0498-4083-996d-75035d8fba10","Type":"ContainerStarted","Data":"399fb52f8ce2d5e0f48db9275c8466546413c12760d7bf2055ace86ef22cfe5f"} Jan 05 20:32:12 crc kubenswrapper[4754]: W0105 20:32:12.890638 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ca22987_f7b1_429f_bc45_9eac9f58a85d.slice/crio-979d03c93194d0303c8985e768eb3c58d00186f086ea5a9daaa65f6150efa83d WatchSource:0}: Error finding container 979d03c93194d0303c8985e768eb3c58d00186f086ea5a9daaa65f6150efa83d: Status 404 returned error can't 
find the container with id 979d03c93194d0303c8985e768eb3c58d00186f086ea5a9daaa65f6150efa83d Jan 05 20:32:12 crc kubenswrapper[4754]: I0105 20:32:12.893527 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 05 20:32:13 crc kubenswrapper[4754]: I0105 20:32:13.617056 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3eb89248-973b-47ce-b504-b9523a605a4a" path="/var/lib/kubelet/pods/3eb89248-973b-47ce-b504-b9523a605a4a/volumes" Jan 05 20:32:13 crc kubenswrapper[4754]: I0105 20:32:13.618869 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b658733a-9c58-4287-82a3-b49d21e53e53" path="/var/lib/kubelet/pods/b658733a-9c58-4287-82a3-b49d21e53e53/volumes" Jan 05 20:32:13 crc kubenswrapper[4754]: I0105 20:32:13.840394 4754 generic.go:334] "Generic (PLEG): container finished" podID="9914fc23-56e8-4362-afcb-aee0b969c368" containerID="b599f552402a1866b8c108a103db0db288cef351ddbd22faabf8f60661e1d9b3" exitCode=0 Jan 05 20:32:13 crc kubenswrapper[4754]: I0105 20:32:13.840497 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9914fc23-56e8-4362-afcb-aee0b969c368","Type":"ContainerDied","Data":"b599f552402a1866b8c108a103db0db288cef351ddbd22faabf8f60661e1d9b3"} Jan 05 20:32:13 crc kubenswrapper[4754]: I0105 20:32:13.842760 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ffdd4590-0498-4083-996d-75035d8fba10","Type":"ContainerStarted","Data":"9fa3314a45cd9ef9a95e0b48b20d890ae65d65875b29cc13fe4e6a41c24b41bd"} Jan 05 20:32:13 crc kubenswrapper[4754]: I0105 20:32:13.845004 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 05 20:32:13 crc kubenswrapper[4754]: I0105 20:32:13.847316 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" 
event={"ID":"8ca22987-f7b1-429f-bc45-9eac9f58a85d","Type":"ContainerStarted","Data":"979d03c93194d0303c8985e768eb3c58d00186f086ea5a9daaa65f6150efa83d"} Jan 05 20:32:13 crc kubenswrapper[4754]: I0105 20:32:13.875807 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.498518423 podStartE2EDuration="2.875790909s" podCreationTimestamp="2026-01-05 20:32:11 +0000 UTC" firstStartedPulling="2026-01-05 20:32:12.779403701 +0000 UTC m=+1619.488587575" lastFinishedPulling="2026-01-05 20:32:13.156676187 +0000 UTC m=+1619.865860061" observedRunningTime="2026-01-05 20:32:13.863471046 +0000 UTC m=+1620.572654940" watchObservedRunningTime="2026-01-05 20:32:13.875790909 +0000 UTC m=+1620.584974783" Jan 05 20:32:14 crc kubenswrapper[4754]: I0105 20:32:14.712171 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 05 20:32:14 crc kubenswrapper[4754]: I0105 20:32:14.822423 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9914fc23-56e8-4362-afcb-aee0b969c368-log-httpd\") pod \"9914fc23-56e8-4362-afcb-aee0b969c368\" (UID: \"9914fc23-56e8-4362-afcb-aee0b969c368\") " Jan 05 20:32:14 crc kubenswrapper[4754]: I0105 20:32:14.822491 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9914fc23-56e8-4362-afcb-aee0b969c368-config-data\") pod \"9914fc23-56e8-4362-afcb-aee0b969c368\" (UID: \"9914fc23-56e8-4362-afcb-aee0b969c368\") " Jan 05 20:32:14 crc kubenswrapper[4754]: I0105 20:32:14.822601 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9914fc23-56e8-4362-afcb-aee0b969c368-run-httpd\") pod \"9914fc23-56e8-4362-afcb-aee0b969c368\" (UID: \"9914fc23-56e8-4362-afcb-aee0b969c368\") " Jan 05 20:32:14 crc 
kubenswrapper[4754]: I0105 20:32:14.822826 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9914fc23-56e8-4362-afcb-aee0b969c368-combined-ca-bundle\") pod \"9914fc23-56e8-4362-afcb-aee0b969c368\" (UID: \"9914fc23-56e8-4362-afcb-aee0b969c368\") " Jan 05 20:32:14 crc kubenswrapper[4754]: I0105 20:32:14.822853 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9914fc23-56e8-4362-afcb-aee0b969c368-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9914fc23-56e8-4362-afcb-aee0b969c368" (UID: "9914fc23-56e8-4362-afcb-aee0b969c368"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:32:14 crc kubenswrapper[4754]: I0105 20:32:14.823006 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9914fc23-56e8-4362-afcb-aee0b969c368-scripts\") pod \"9914fc23-56e8-4362-afcb-aee0b969c368\" (UID: \"9914fc23-56e8-4362-afcb-aee0b969c368\") " Jan 05 20:32:14 crc kubenswrapper[4754]: I0105 20:32:14.823130 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9914fc23-56e8-4362-afcb-aee0b969c368-sg-core-conf-yaml\") pod \"9914fc23-56e8-4362-afcb-aee0b969c368\" (UID: \"9914fc23-56e8-4362-afcb-aee0b969c368\") " Jan 05 20:32:14 crc kubenswrapper[4754]: I0105 20:32:14.823181 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmpmw\" (UniqueName: \"kubernetes.io/projected/9914fc23-56e8-4362-afcb-aee0b969c368-kube-api-access-zmpmw\") pod \"9914fc23-56e8-4362-afcb-aee0b969c368\" (UID: \"9914fc23-56e8-4362-afcb-aee0b969c368\") " Jan 05 20:32:14 crc kubenswrapper[4754]: I0105 20:32:14.823631 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/9914fc23-56e8-4362-afcb-aee0b969c368-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9914fc23-56e8-4362-afcb-aee0b969c368" (UID: "9914fc23-56e8-4362-afcb-aee0b969c368"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:32:14 crc kubenswrapper[4754]: I0105 20:32:14.824331 4754 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9914fc23-56e8-4362-afcb-aee0b969c368-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 05 20:32:14 crc kubenswrapper[4754]: I0105 20:32:14.824358 4754 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9914fc23-56e8-4362-afcb-aee0b969c368-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 05 20:32:14 crc kubenswrapper[4754]: I0105 20:32:14.829351 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9914fc23-56e8-4362-afcb-aee0b969c368-kube-api-access-zmpmw" (OuterVolumeSpecName: "kube-api-access-zmpmw") pod "9914fc23-56e8-4362-afcb-aee0b969c368" (UID: "9914fc23-56e8-4362-afcb-aee0b969c368"). InnerVolumeSpecName "kube-api-access-zmpmw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:32:14 crc kubenswrapper[4754]: I0105 20:32:14.831005 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9914fc23-56e8-4362-afcb-aee0b969c368-scripts" (OuterVolumeSpecName: "scripts") pod "9914fc23-56e8-4362-afcb-aee0b969c368" (UID: "9914fc23-56e8-4362-afcb-aee0b969c368"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:32:14 crc kubenswrapper[4754]: I0105 20:32:14.859222 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9914fc23-56e8-4362-afcb-aee0b969c368-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9914fc23-56e8-4362-afcb-aee0b969c368" (UID: "9914fc23-56e8-4362-afcb-aee0b969c368"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:32:14 crc kubenswrapper[4754]: I0105 20:32:14.868440 4754 generic.go:334] "Generic (PLEG): container finished" podID="9914fc23-56e8-4362-afcb-aee0b969c368" containerID="8d119b89328c6bea6dff40d438308a2aa69a5b9c4bd6f4f74acfefa75b1ad996" exitCode=0 Jan 05 20:32:14 crc kubenswrapper[4754]: I0105 20:32:14.868508 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9914fc23-56e8-4362-afcb-aee0b969c368","Type":"ContainerDied","Data":"8d119b89328c6bea6dff40d438308a2aa69a5b9c4bd6f4f74acfefa75b1ad996"} Jan 05 20:32:14 crc kubenswrapper[4754]: I0105 20:32:14.868545 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9914fc23-56e8-4362-afcb-aee0b969c368","Type":"ContainerDied","Data":"51e32c9053ffd2f8ee839390201c21cbb5b28e0f0c78efe3bf44637b0e59468a"} Jan 05 20:32:14 crc kubenswrapper[4754]: I0105 20:32:14.868563 4754 scope.go:117] "RemoveContainer" containerID="e48ce5190f17e3c4d72260e1204b69aac27d76ac18f4f73846ef63b4854e048b" Jan 05 20:32:14 crc kubenswrapper[4754]: I0105 20:32:14.868735 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 05 20:32:14 crc kubenswrapper[4754]: I0105 20:32:14.874591 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"8ca22987-f7b1-429f-bc45-9eac9f58a85d","Type":"ContainerStarted","Data":"4af1250b728703da4a0fcfcaa09d573f10d5f55227540b0237df7bde4efc7505"} Jan 05 20:32:14 crc kubenswrapper[4754]: I0105 20:32:14.925948 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=3.0797413750000002 podStartE2EDuration="3.925929814s" podCreationTimestamp="2026-01-05 20:32:11 +0000 UTC" firstStartedPulling="2026-01-05 20:32:12.892414248 +0000 UTC m=+1619.601598122" lastFinishedPulling="2026-01-05 20:32:13.738602657 +0000 UTC m=+1620.447786561" observedRunningTime="2026-01-05 20:32:14.905893878 +0000 UTC m=+1621.615077762" watchObservedRunningTime="2026-01-05 20:32:14.925929814 +0000 UTC m=+1621.635113688" Jan 05 20:32:14 crc kubenswrapper[4754]: I0105 20:32:14.927072 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9914fc23-56e8-4362-afcb-aee0b969c368-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 20:32:14 crc kubenswrapper[4754]: I0105 20:32:14.927099 4754 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9914fc23-56e8-4362-afcb-aee0b969c368-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 05 20:32:14 crc kubenswrapper[4754]: I0105 20:32:14.927110 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmpmw\" (UniqueName: \"kubernetes.io/projected/9914fc23-56e8-4362-afcb-aee0b969c368-kube-api-access-zmpmw\") on node \"crc\" DevicePath \"\"" Jan 05 20:32:14 crc kubenswrapper[4754]: I0105 20:32:14.958566 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/9914fc23-56e8-4362-afcb-aee0b969c368-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9914fc23-56e8-4362-afcb-aee0b969c368" (UID: "9914fc23-56e8-4362-afcb-aee0b969c368"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:32:14 crc kubenswrapper[4754]: I0105 20:32:14.993834 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9914fc23-56e8-4362-afcb-aee0b969c368-config-data" (OuterVolumeSpecName: "config-data") pod "9914fc23-56e8-4362-afcb-aee0b969c368" (UID: "9914fc23-56e8-4362-afcb-aee0b969c368"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:32:15 crc kubenswrapper[4754]: I0105 20:32:15.030201 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9914fc23-56e8-4362-afcb-aee0b969c368-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 20:32:15 crc kubenswrapper[4754]: I0105 20:32:15.030230 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9914fc23-56e8-4362-afcb-aee0b969c368-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:32:15 crc kubenswrapper[4754]: I0105 20:32:15.085635 4754 scope.go:117] "RemoveContainer" containerID="c6c15d4094f9e41b22895d1152620e6d07f400fd3717b55ccf45309c78218147" Jan 05 20:32:15 crc kubenswrapper[4754]: I0105 20:32:15.107814 4754 scope.go:117] "RemoveContainer" containerID="8d119b89328c6bea6dff40d438308a2aa69a5b9c4bd6f4f74acfefa75b1ad996" Jan 05 20:32:15 crc kubenswrapper[4754]: I0105 20:32:15.125768 4754 scope.go:117] "RemoveContainer" containerID="b599f552402a1866b8c108a103db0db288cef351ddbd22faabf8f60661e1d9b3" Jan 05 20:32:15 crc kubenswrapper[4754]: I0105 20:32:15.149951 4754 scope.go:117] "RemoveContainer" containerID="e48ce5190f17e3c4d72260e1204b69aac27d76ac18f4f73846ef63b4854e048b" Jan 05 20:32:15 
crc kubenswrapper[4754]: E0105 20:32:15.151729 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e48ce5190f17e3c4d72260e1204b69aac27d76ac18f4f73846ef63b4854e048b\": container with ID starting with e48ce5190f17e3c4d72260e1204b69aac27d76ac18f4f73846ef63b4854e048b not found: ID does not exist" containerID="e48ce5190f17e3c4d72260e1204b69aac27d76ac18f4f73846ef63b4854e048b" Jan 05 20:32:15 crc kubenswrapper[4754]: I0105 20:32:15.151761 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e48ce5190f17e3c4d72260e1204b69aac27d76ac18f4f73846ef63b4854e048b"} err="failed to get container status \"e48ce5190f17e3c4d72260e1204b69aac27d76ac18f4f73846ef63b4854e048b\": rpc error: code = NotFound desc = could not find container \"e48ce5190f17e3c4d72260e1204b69aac27d76ac18f4f73846ef63b4854e048b\": container with ID starting with e48ce5190f17e3c4d72260e1204b69aac27d76ac18f4f73846ef63b4854e048b not found: ID does not exist" Jan 05 20:32:15 crc kubenswrapper[4754]: I0105 20:32:15.151783 4754 scope.go:117] "RemoveContainer" containerID="c6c15d4094f9e41b22895d1152620e6d07f400fd3717b55ccf45309c78218147" Jan 05 20:32:15 crc kubenswrapper[4754]: E0105 20:32:15.151988 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6c15d4094f9e41b22895d1152620e6d07f400fd3717b55ccf45309c78218147\": container with ID starting with c6c15d4094f9e41b22895d1152620e6d07f400fd3717b55ccf45309c78218147 not found: ID does not exist" containerID="c6c15d4094f9e41b22895d1152620e6d07f400fd3717b55ccf45309c78218147" Jan 05 20:32:15 crc kubenswrapper[4754]: I0105 20:32:15.152010 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6c15d4094f9e41b22895d1152620e6d07f400fd3717b55ccf45309c78218147"} err="failed to get container status 
\"c6c15d4094f9e41b22895d1152620e6d07f400fd3717b55ccf45309c78218147\": rpc error: code = NotFound desc = could not find container \"c6c15d4094f9e41b22895d1152620e6d07f400fd3717b55ccf45309c78218147\": container with ID starting with c6c15d4094f9e41b22895d1152620e6d07f400fd3717b55ccf45309c78218147 not found: ID does not exist" Jan 05 20:32:15 crc kubenswrapper[4754]: I0105 20:32:15.152021 4754 scope.go:117] "RemoveContainer" containerID="8d119b89328c6bea6dff40d438308a2aa69a5b9c4bd6f4f74acfefa75b1ad996" Jan 05 20:32:15 crc kubenswrapper[4754]: E0105 20:32:15.152162 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d119b89328c6bea6dff40d438308a2aa69a5b9c4bd6f4f74acfefa75b1ad996\": container with ID starting with 8d119b89328c6bea6dff40d438308a2aa69a5b9c4bd6f4f74acfefa75b1ad996 not found: ID does not exist" containerID="8d119b89328c6bea6dff40d438308a2aa69a5b9c4bd6f4f74acfefa75b1ad996" Jan 05 20:32:15 crc kubenswrapper[4754]: I0105 20:32:15.152186 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d119b89328c6bea6dff40d438308a2aa69a5b9c4bd6f4f74acfefa75b1ad996"} err="failed to get container status \"8d119b89328c6bea6dff40d438308a2aa69a5b9c4bd6f4f74acfefa75b1ad996\": rpc error: code = NotFound desc = could not find container \"8d119b89328c6bea6dff40d438308a2aa69a5b9c4bd6f4f74acfefa75b1ad996\": container with ID starting with 8d119b89328c6bea6dff40d438308a2aa69a5b9c4bd6f4f74acfefa75b1ad996 not found: ID does not exist" Jan 05 20:32:15 crc kubenswrapper[4754]: I0105 20:32:15.152202 4754 scope.go:117] "RemoveContainer" containerID="b599f552402a1866b8c108a103db0db288cef351ddbd22faabf8f60661e1d9b3" Jan 05 20:32:15 crc kubenswrapper[4754]: E0105 20:32:15.152526 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b599f552402a1866b8c108a103db0db288cef351ddbd22faabf8f60661e1d9b3\": container with ID starting with b599f552402a1866b8c108a103db0db288cef351ddbd22faabf8f60661e1d9b3 not found: ID does not exist" containerID="b599f552402a1866b8c108a103db0db288cef351ddbd22faabf8f60661e1d9b3" Jan 05 20:32:15 crc kubenswrapper[4754]: I0105 20:32:15.152550 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b599f552402a1866b8c108a103db0db288cef351ddbd22faabf8f60661e1d9b3"} err="failed to get container status \"b599f552402a1866b8c108a103db0db288cef351ddbd22faabf8f60661e1d9b3\": rpc error: code = NotFound desc = could not find container \"b599f552402a1866b8c108a103db0db288cef351ddbd22faabf8f60661e1d9b3\": container with ID starting with b599f552402a1866b8c108a103db0db288cef351ddbd22faabf8f60661e1d9b3 not found: ID does not exist" Jan 05 20:32:15 crc kubenswrapper[4754]: I0105 20:32:15.208188 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:32:15 crc kubenswrapper[4754]: I0105 20:32:15.221108 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:32:15 crc kubenswrapper[4754]: I0105 20:32:15.243018 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:32:15 crc kubenswrapper[4754]: E0105 20:32:15.243619 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9914fc23-56e8-4362-afcb-aee0b969c368" containerName="ceilometer-notification-agent" Jan 05 20:32:15 crc kubenswrapper[4754]: I0105 20:32:15.243648 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="9914fc23-56e8-4362-afcb-aee0b969c368" containerName="ceilometer-notification-agent" Jan 05 20:32:15 crc kubenswrapper[4754]: E0105 20:32:15.243680 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9914fc23-56e8-4362-afcb-aee0b969c368" containerName="proxy-httpd" Jan 05 20:32:15 crc kubenswrapper[4754]: I0105 20:32:15.243689 
4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="9914fc23-56e8-4362-afcb-aee0b969c368" containerName="proxy-httpd" Jan 05 20:32:15 crc kubenswrapper[4754]: E0105 20:32:15.243711 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9914fc23-56e8-4362-afcb-aee0b969c368" containerName="sg-core" Jan 05 20:32:15 crc kubenswrapper[4754]: I0105 20:32:15.243722 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="9914fc23-56e8-4362-afcb-aee0b969c368" containerName="sg-core" Jan 05 20:32:15 crc kubenswrapper[4754]: E0105 20:32:15.243742 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9914fc23-56e8-4362-afcb-aee0b969c368" containerName="ceilometer-central-agent" Jan 05 20:32:15 crc kubenswrapper[4754]: I0105 20:32:15.243753 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="9914fc23-56e8-4362-afcb-aee0b969c368" containerName="ceilometer-central-agent" Jan 05 20:32:15 crc kubenswrapper[4754]: I0105 20:32:15.244049 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="9914fc23-56e8-4362-afcb-aee0b969c368" containerName="ceilometer-central-agent" Jan 05 20:32:15 crc kubenswrapper[4754]: I0105 20:32:15.244077 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="9914fc23-56e8-4362-afcb-aee0b969c368" containerName="ceilometer-notification-agent" Jan 05 20:32:15 crc kubenswrapper[4754]: I0105 20:32:15.244090 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="9914fc23-56e8-4362-afcb-aee0b969c368" containerName="proxy-httpd" Jan 05 20:32:15 crc kubenswrapper[4754]: I0105 20:32:15.244109 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="9914fc23-56e8-4362-afcb-aee0b969c368" containerName="sg-core" Jan 05 20:32:15 crc kubenswrapper[4754]: I0105 20:32:15.246865 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 05 20:32:15 crc kubenswrapper[4754]: I0105 20:32:15.251098 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 05 20:32:15 crc kubenswrapper[4754]: I0105 20:32:15.255721 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 05 20:32:15 crc kubenswrapper[4754]: I0105 20:32:15.255913 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 05 20:32:15 crc kubenswrapper[4754]: I0105 20:32:15.279851 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:32:15 crc kubenswrapper[4754]: I0105 20:32:15.335913 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3b05429-7096-4766-9d26-93d8b2d0af8e-scripts\") pod \"ceilometer-0\" (UID: \"a3b05429-7096-4766-9d26-93d8b2d0af8e\") " pod="openstack/ceilometer-0" Jan 05 20:32:15 crc kubenswrapper[4754]: I0105 20:32:15.335988 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3b05429-7096-4766-9d26-93d8b2d0af8e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a3b05429-7096-4766-9d26-93d8b2d0af8e\") " pod="openstack/ceilometer-0" Jan 05 20:32:15 crc kubenswrapper[4754]: I0105 20:32:15.336034 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3b05429-7096-4766-9d26-93d8b2d0af8e-config-data\") pod \"ceilometer-0\" (UID: \"a3b05429-7096-4766-9d26-93d8b2d0af8e\") " pod="openstack/ceilometer-0" Jan 05 20:32:15 crc kubenswrapper[4754]: I0105 20:32:15.336469 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v22zs\" 
(UniqueName: \"kubernetes.io/projected/a3b05429-7096-4766-9d26-93d8b2d0af8e-kube-api-access-v22zs\") pod \"ceilometer-0\" (UID: \"a3b05429-7096-4766-9d26-93d8b2d0af8e\") " pod="openstack/ceilometer-0" Jan 05 20:32:15 crc kubenswrapper[4754]: I0105 20:32:15.336516 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3b05429-7096-4766-9d26-93d8b2d0af8e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a3b05429-7096-4766-9d26-93d8b2d0af8e\") " pod="openstack/ceilometer-0" Jan 05 20:32:15 crc kubenswrapper[4754]: I0105 20:32:15.336540 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3b05429-7096-4766-9d26-93d8b2d0af8e-log-httpd\") pod \"ceilometer-0\" (UID: \"a3b05429-7096-4766-9d26-93d8b2d0af8e\") " pod="openstack/ceilometer-0" Jan 05 20:32:15 crc kubenswrapper[4754]: I0105 20:32:15.336656 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3b05429-7096-4766-9d26-93d8b2d0af8e-run-httpd\") pod \"ceilometer-0\" (UID: \"a3b05429-7096-4766-9d26-93d8b2d0af8e\") " pod="openstack/ceilometer-0" Jan 05 20:32:15 crc kubenswrapper[4754]: I0105 20:32:15.336869 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3b05429-7096-4766-9d26-93d8b2d0af8e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a3b05429-7096-4766-9d26-93d8b2d0af8e\") " pod="openstack/ceilometer-0" Jan 05 20:32:15 crc kubenswrapper[4754]: I0105 20:32:15.439876 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v22zs\" (UniqueName: \"kubernetes.io/projected/a3b05429-7096-4766-9d26-93d8b2d0af8e-kube-api-access-v22zs\") pod \"ceilometer-0\" (UID: 
\"a3b05429-7096-4766-9d26-93d8b2d0af8e\") " pod="openstack/ceilometer-0" Jan 05 20:32:15 crc kubenswrapper[4754]: I0105 20:32:15.440283 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3b05429-7096-4766-9d26-93d8b2d0af8e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a3b05429-7096-4766-9d26-93d8b2d0af8e\") " pod="openstack/ceilometer-0" Jan 05 20:32:15 crc kubenswrapper[4754]: I0105 20:32:15.440341 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3b05429-7096-4766-9d26-93d8b2d0af8e-log-httpd\") pod \"ceilometer-0\" (UID: \"a3b05429-7096-4766-9d26-93d8b2d0af8e\") " pod="openstack/ceilometer-0" Jan 05 20:32:15 crc kubenswrapper[4754]: I0105 20:32:15.440428 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3b05429-7096-4766-9d26-93d8b2d0af8e-run-httpd\") pod \"ceilometer-0\" (UID: \"a3b05429-7096-4766-9d26-93d8b2d0af8e\") " pod="openstack/ceilometer-0" Jan 05 20:32:15 crc kubenswrapper[4754]: I0105 20:32:15.440570 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3b05429-7096-4766-9d26-93d8b2d0af8e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a3b05429-7096-4766-9d26-93d8b2d0af8e\") " pod="openstack/ceilometer-0" Jan 05 20:32:15 crc kubenswrapper[4754]: I0105 20:32:15.440637 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3b05429-7096-4766-9d26-93d8b2d0af8e-scripts\") pod \"ceilometer-0\" (UID: \"a3b05429-7096-4766-9d26-93d8b2d0af8e\") " pod="openstack/ceilometer-0" Jan 05 20:32:15 crc kubenswrapper[4754]: I0105 20:32:15.440720 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3b05429-7096-4766-9d26-93d8b2d0af8e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a3b05429-7096-4766-9d26-93d8b2d0af8e\") " pod="openstack/ceilometer-0" Jan 05 20:32:15 crc kubenswrapper[4754]: I0105 20:32:15.440776 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3b05429-7096-4766-9d26-93d8b2d0af8e-config-data\") pod \"ceilometer-0\" (UID: \"a3b05429-7096-4766-9d26-93d8b2d0af8e\") " pod="openstack/ceilometer-0" Jan 05 20:32:15 crc kubenswrapper[4754]: I0105 20:32:15.441113 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3b05429-7096-4766-9d26-93d8b2d0af8e-log-httpd\") pod \"ceilometer-0\" (UID: \"a3b05429-7096-4766-9d26-93d8b2d0af8e\") " pod="openstack/ceilometer-0" Jan 05 20:32:15 crc kubenswrapper[4754]: I0105 20:32:15.446496 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3b05429-7096-4766-9d26-93d8b2d0af8e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a3b05429-7096-4766-9d26-93d8b2d0af8e\") " pod="openstack/ceilometer-0" Jan 05 20:32:15 crc kubenswrapper[4754]: I0105 20:32:15.449098 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3b05429-7096-4766-9d26-93d8b2d0af8e-config-data\") pod \"ceilometer-0\" (UID: \"a3b05429-7096-4766-9d26-93d8b2d0af8e\") " pod="openstack/ceilometer-0" Jan 05 20:32:15 crc kubenswrapper[4754]: I0105 20:32:15.451474 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3b05429-7096-4766-9d26-93d8b2d0af8e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a3b05429-7096-4766-9d26-93d8b2d0af8e\") " pod="openstack/ceilometer-0" Jan 05 20:32:15 crc kubenswrapper[4754]: 
I0105 20:32:15.451754 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3b05429-7096-4766-9d26-93d8b2d0af8e-run-httpd\") pod \"ceilometer-0\" (UID: \"a3b05429-7096-4766-9d26-93d8b2d0af8e\") " pod="openstack/ceilometer-0" Jan 05 20:32:15 crc kubenswrapper[4754]: I0105 20:32:15.460590 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3b05429-7096-4766-9d26-93d8b2d0af8e-scripts\") pod \"ceilometer-0\" (UID: \"a3b05429-7096-4766-9d26-93d8b2d0af8e\") " pod="openstack/ceilometer-0" Jan 05 20:32:15 crc kubenswrapper[4754]: I0105 20:32:15.474107 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v22zs\" (UniqueName: \"kubernetes.io/projected/a3b05429-7096-4766-9d26-93d8b2d0af8e-kube-api-access-v22zs\") pod \"ceilometer-0\" (UID: \"a3b05429-7096-4766-9d26-93d8b2d0af8e\") " pod="openstack/ceilometer-0" Jan 05 20:32:15 crc kubenswrapper[4754]: I0105 20:32:15.480119 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3b05429-7096-4766-9d26-93d8b2d0af8e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a3b05429-7096-4766-9d26-93d8b2d0af8e\") " pod="openstack/ceilometer-0" Jan 05 20:32:15 crc kubenswrapper[4754]: I0105 20:32:15.569350 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 05 20:32:15 crc kubenswrapper[4754]: I0105 20:32:15.614575 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9914fc23-56e8-4362-afcb-aee0b969c368" path="/var/lib/kubelet/pods/9914fc23-56e8-4362-afcb-aee0b969c368/volumes" Jan 05 20:32:16 crc kubenswrapper[4754]: I0105 20:32:16.110954 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:32:16 crc kubenswrapper[4754]: I0105 20:32:16.904334 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3b05429-7096-4766-9d26-93d8b2d0af8e","Type":"ContainerStarted","Data":"22225b3708eec2d2eae6b35aa82a35432ebb85b617c2567fcc29d5ccf1474a07"} Jan 05 20:32:17 crc kubenswrapper[4754]: I0105 20:32:17.021725 4754 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 05 20:32:17 crc kubenswrapper[4754]: I0105 20:32:17.932332 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3b05429-7096-4766-9d26-93d8b2d0af8e","Type":"ContainerStarted","Data":"bbc3ddf58406c3b59799c93b16830b8544a4643ffe6f9c5f9c7a1054e11eb1d9"} Jan 05 20:32:17 crc kubenswrapper[4754]: I0105 20:32:17.933645 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3b05429-7096-4766-9d26-93d8b2d0af8e","Type":"ContainerStarted","Data":"4c790b4171410d2b21af5eb83af782e23cc967ec1474ea1ee45d0194ee9deac2"} Jan 05 20:32:18 crc kubenswrapper[4754]: I0105 20:32:18.952382 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3b05429-7096-4766-9d26-93d8b2d0af8e","Type":"ContainerStarted","Data":"f3877ff08edf2d29fbd68eb3b3781afd320c4b85f441f5725814a29505eab3e5"} Jan 05 20:32:19 crc kubenswrapper[4754]: I0105 20:32:19.588376 4754 scope.go:117] "RemoveContainer" containerID="1ca93b3e456962380f07853ce3cbf0c6bfbaabb69bd3827ae3e68631c1bd1572" Jan 05 
20:32:19 crc kubenswrapper[4754]: E0105 20:32:19.588843 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:32:19 crc kubenswrapper[4754]: I0105 20:32:19.967424 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3b05429-7096-4766-9d26-93d8b2d0af8e","Type":"ContainerStarted","Data":"0d804b028acea9c762738b0926d87d107f3d5db0fbe5e60f63e78449d2b0e360"} Jan 05 20:32:19 crc kubenswrapper[4754]: I0105 20:32:19.968772 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 05 20:32:19 crc kubenswrapper[4754]: I0105 20:32:19.996247 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.5153947479999998 podStartE2EDuration="4.996230646s" podCreationTimestamp="2026-01-05 20:32:15 +0000 UTC" firstStartedPulling="2026-01-05 20:32:16.099719884 +0000 UTC m=+1622.808903748" lastFinishedPulling="2026-01-05 20:32:19.580555772 +0000 UTC m=+1626.289739646" observedRunningTime="2026-01-05 20:32:19.992820667 +0000 UTC m=+1626.702004541" watchObservedRunningTime="2026-01-05 20:32:19.996230646 +0000 UTC m=+1626.705414520" Jan 05 20:32:22 crc kubenswrapper[4754]: I0105 20:32:22.263030 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 05 20:32:22 crc kubenswrapper[4754]: I0105 20:32:22.848559 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-vpqrq"] Jan 05 20:32:22 crc kubenswrapper[4754]: I0105 20:32:22.860995 4754 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/heat-db-sync-vpqrq"] Jan 05 20:32:22 crc kubenswrapper[4754]: I0105 20:32:22.949659 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-w78td"] Jan 05 20:32:22 crc kubenswrapper[4754]: I0105 20:32:22.951637 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-w78td" Jan 05 20:32:22 crc kubenswrapper[4754]: I0105 20:32:22.977981 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-w78td"] Jan 05 20:32:23 crc kubenswrapper[4754]: I0105 20:32:23.034257 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bc297f2-024c-45f7-97f5-7c360061a2c2-combined-ca-bundle\") pod \"heat-db-sync-w78td\" (UID: \"4bc297f2-024c-45f7-97f5-7c360061a2c2\") " pod="openstack/heat-db-sync-w78td" Jan 05 20:32:23 crc kubenswrapper[4754]: I0105 20:32:23.034364 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r66k\" (UniqueName: \"kubernetes.io/projected/4bc297f2-024c-45f7-97f5-7c360061a2c2-kube-api-access-6r66k\") pod \"heat-db-sync-w78td\" (UID: \"4bc297f2-024c-45f7-97f5-7c360061a2c2\") " pod="openstack/heat-db-sync-w78td" Jan 05 20:32:23 crc kubenswrapper[4754]: I0105 20:32:23.034722 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bc297f2-024c-45f7-97f5-7c360061a2c2-config-data\") pod \"heat-db-sync-w78td\" (UID: \"4bc297f2-024c-45f7-97f5-7c360061a2c2\") " pod="openstack/heat-db-sync-w78td" Jan 05 20:32:23 crc kubenswrapper[4754]: I0105 20:32:23.136313 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bc297f2-024c-45f7-97f5-7c360061a2c2-config-data\") pod \"heat-db-sync-w78td\" (UID: 
\"4bc297f2-024c-45f7-97f5-7c360061a2c2\") " pod="openstack/heat-db-sync-w78td" Jan 05 20:32:23 crc kubenswrapper[4754]: I0105 20:32:23.136426 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bc297f2-024c-45f7-97f5-7c360061a2c2-combined-ca-bundle\") pod \"heat-db-sync-w78td\" (UID: \"4bc297f2-024c-45f7-97f5-7c360061a2c2\") " pod="openstack/heat-db-sync-w78td" Jan 05 20:32:23 crc kubenswrapper[4754]: I0105 20:32:23.136463 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6r66k\" (UniqueName: \"kubernetes.io/projected/4bc297f2-024c-45f7-97f5-7c360061a2c2-kube-api-access-6r66k\") pod \"heat-db-sync-w78td\" (UID: \"4bc297f2-024c-45f7-97f5-7c360061a2c2\") " pod="openstack/heat-db-sync-w78td" Jan 05 20:32:23 crc kubenswrapper[4754]: I0105 20:32:23.142249 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bc297f2-024c-45f7-97f5-7c360061a2c2-combined-ca-bundle\") pod \"heat-db-sync-w78td\" (UID: \"4bc297f2-024c-45f7-97f5-7c360061a2c2\") " pod="openstack/heat-db-sync-w78td" Jan 05 20:32:23 crc kubenswrapper[4754]: I0105 20:32:23.150493 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6r66k\" (UniqueName: \"kubernetes.io/projected/4bc297f2-024c-45f7-97f5-7c360061a2c2-kube-api-access-6r66k\") pod \"heat-db-sync-w78td\" (UID: \"4bc297f2-024c-45f7-97f5-7c360061a2c2\") " pod="openstack/heat-db-sync-w78td" Jan 05 20:32:23 crc kubenswrapper[4754]: I0105 20:32:23.150666 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bc297f2-024c-45f7-97f5-7c360061a2c2-config-data\") pod \"heat-db-sync-w78td\" (UID: \"4bc297f2-024c-45f7-97f5-7c360061a2c2\") " pod="openstack/heat-db-sync-w78td" Jan 05 20:32:23 crc kubenswrapper[4754]: I0105 20:32:23.279623 4754 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-w78td" Jan 05 20:32:23 crc kubenswrapper[4754]: I0105 20:32:23.602530 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3521f408-9c1b-440b-b7c1-fdc7058f9eb3" path="/var/lib/kubelet/pods/3521f408-9c1b-440b-b7c1-fdc7058f9eb3/volumes" Jan 05 20:32:23 crc kubenswrapper[4754]: I0105 20:32:23.920676 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-w78td"] Jan 05 20:32:24 crc kubenswrapper[4754]: I0105 20:32:24.049277 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-w78td" event={"ID":"4bc297f2-024c-45f7-97f5-7c360061a2c2","Type":"ContainerStarted","Data":"995ca5b5c50f85bb6ec6df958b4da8ab409ae81c3104c47a8377657597e79be3"} Jan 05 20:32:24 crc kubenswrapper[4754]: I0105 20:32:24.899843 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 05 20:32:26 crc kubenswrapper[4754]: I0105 20:32:26.004721 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 05 20:32:28 crc kubenswrapper[4754]: I0105 20:32:28.543711 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:32:28 crc kubenswrapper[4754]: I0105 20:32:28.544215 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a3b05429-7096-4766-9d26-93d8b2d0af8e" containerName="ceilometer-central-agent" containerID="cri-o://4c790b4171410d2b21af5eb83af782e23cc967ec1474ea1ee45d0194ee9deac2" gracePeriod=30 Jan 05 20:32:28 crc kubenswrapper[4754]: I0105 20:32:28.544278 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a3b05429-7096-4766-9d26-93d8b2d0af8e" containerName="proxy-httpd" containerID="cri-o://0d804b028acea9c762738b0926d87d107f3d5db0fbe5e60f63e78449d2b0e360" gracePeriod=30 Jan 05 20:32:28 crc 
kubenswrapper[4754]: I0105 20:32:28.544345 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a3b05429-7096-4766-9d26-93d8b2d0af8e" containerName="sg-core" containerID="cri-o://f3877ff08edf2d29fbd68eb3b3781afd320c4b85f441f5725814a29505eab3e5" gracePeriod=30 Jan 05 20:32:28 crc kubenswrapper[4754]: I0105 20:32:28.544280 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a3b05429-7096-4766-9d26-93d8b2d0af8e" containerName="ceilometer-notification-agent" containerID="cri-o://bbc3ddf58406c3b59799c93b16830b8544a4643ffe6f9c5f9c7a1054e11eb1d9" gracePeriod=30 Jan 05 20:32:29 crc kubenswrapper[4754]: I0105 20:32:29.126217 4754 generic.go:334] "Generic (PLEG): container finished" podID="a3b05429-7096-4766-9d26-93d8b2d0af8e" containerID="0d804b028acea9c762738b0926d87d107f3d5db0fbe5e60f63e78449d2b0e360" exitCode=0 Jan 05 20:32:29 crc kubenswrapper[4754]: I0105 20:32:29.126254 4754 generic.go:334] "Generic (PLEG): container finished" podID="a3b05429-7096-4766-9d26-93d8b2d0af8e" containerID="f3877ff08edf2d29fbd68eb3b3781afd320c4b85f441f5725814a29505eab3e5" exitCode=2 Jan 05 20:32:29 crc kubenswrapper[4754]: I0105 20:32:29.126280 4754 generic.go:334] "Generic (PLEG): container finished" podID="a3b05429-7096-4766-9d26-93d8b2d0af8e" containerID="4c790b4171410d2b21af5eb83af782e23cc967ec1474ea1ee45d0194ee9deac2" exitCode=0 Jan 05 20:32:29 crc kubenswrapper[4754]: I0105 20:32:29.126315 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3b05429-7096-4766-9d26-93d8b2d0af8e","Type":"ContainerDied","Data":"0d804b028acea9c762738b0926d87d107f3d5db0fbe5e60f63e78449d2b0e360"} Jan 05 20:32:29 crc kubenswrapper[4754]: I0105 20:32:29.126344 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"a3b05429-7096-4766-9d26-93d8b2d0af8e","Type":"ContainerDied","Data":"f3877ff08edf2d29fbd68eb3b3781afd320c4b85f441f5725814a29505eab3e5"} Jan 05 20:32:29 crc kubenswrapper[4754]: I0105 20:32:29.126357 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3b05429-7096-4766-9d26-93d8b2d0af8e","Type":"ContainerDied","Data":"4c790b4171410d2b21af5eb83af782e23cc967ec1474ea1ee45d0194ee9deac2"} Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.004248 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.133306 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3b05429-7096-4766-9d26-93d8b2d0af8e-combined-ca-bundle\") pod \"a3b05429-7096-4766-9d26-93d8b2d0af8e\" (UID: \"a3b05429-7096-4766-9d26-93d8b2d0af8e\") " Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.133679 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3b05429-7096-4766-9d26-93d8b2d0af8e-sg-core-conf-yaml\") pod \"a3b05429-7096-4766-9d26-93d8b2d0af8e\" (UID: \"a3b05429-7096-4766-9d26-93d8b2d0af8e\") " Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.133808 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3b05429-7096-4766-9d26-93d8b2d0af8e-log-httpd\") pod \"a3b05429-7096-4766-9d26-93d8b2d0af8e\" (UID: \"a3b05429-7096-4766-9d26-93d8b2d0af8e\") " Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.133919 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3b05429-7096-4766-9d26-93d8b2d0af8e-ceilometer-tls-certs\") pod \"a3b05429-7096-4766-9d26-93d8b2d0af8e\" (UID: 
\"a3b05429-7096-4766-9d26-93d8b2d0af8e\") " Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.134018 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3b05429-7096-4766-9d26-93d8b2d0af8e-config-data\") pod \"a3b05429-7096-4766-9d26-93d8b2d0af8e\" (UID: \"a3b05429-7096-4766-9d26-93d8b2d0af8e\") " Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.134208 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v22zs\" (UniqueName: \"kubernetes.io/projected/a3b05429-7096-4766-9d26-93d8b2d0af8e-kube-api-access-v22zs\") pod \"a3b05429-7096-4766-9d26-93d8b2d0af8e\" (UID: \"a3b05429-7096-4766-9d26-93d8b2d0af8e\") " Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.134441 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3b05429-7096-4766-9d26-93d8b2d0af8e-run-httpd\") pod \"a3b05429-7096-4766-9d26-93d8b2d0af8e\" (UID: \"a3b05429-7096-4766-9d26-93d8b2d0af8e\") " Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.134369 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3b05429-7096-4766-9d26-93d8b2d0af8e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a3b05429-7096-4766-9d26-93d8b2d0af8e" (UID: "a3b05429-7096-4766-9d26-93d8b2d0af8e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.134672 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3b05429-7096-4766-9d26-93d8b2d0af8e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a3b05429-7096-4766-9d26-93d8b2d0af8e" (UID: "a3b05429-7096-4766-9d26-93d8b2d0af8e"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.134871 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3b05429-7096-4766-9d26-93d8b2d0af8e-scripts\") pod \"a3b05429-7096-4766-9d26-93d8b2d0af8e\" (UID: \"a3b05429-7096-4766-9d26-93d8b2d0af8e\") " Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.135850 4754 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3b05429-7096-4766-9d26-93d8b2d0af8e-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.136094 4754 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3b05429-7096-4766-9d26-93d8b2d0af8e-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.142468 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3b05429-7096-4766-9d26-93d8b2d0af8e-scripts" (OuterVolumeSpecName: "scripts") pod "a3b05429-7096-4766-9d26-93d8b2d0af8e" (UID: "a3b05429-7096-4766-9d26-93d8b2d0af8e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.149078 4754 generic.go:334] "Generic (PLEG): container finished" podID="a3b05429-7096-4766-9d26-93d8b2d0af8e" containerID="bbc3ddf58406c3b59799c93b16830b8544a4643ffe6f9c5f9c7a1054e11eb1d9" exitCode=0 Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.149227 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3b05429-7096-4766-9d26-93d8b2d0af8e","Type":"ContainerDied","Data":"bbc3ddf58406c3b59799c93b16830b8544a4643ffe6f9c5f9c7a1054e11eb1d9"} Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.149332 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3b05429-7096-4766-9d26-93d8b2d0af8e","Type":"ContainerDied","Data":"22225b3708eec2d2eae6b35aa82a35432ebb85b617c2567fcc29d5ccf1474a07"} Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.149448 4754 scope.go:117] "RemoveContainer" containerID="0d804b028acea9c762738b0926d87d107f3d5db0fbe5e60f63e78449d2b0e360" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.149672 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.153724 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3b05429-7096-4766-9d26-93d8b2d0af8e-kube-api-access-v22zs" (OuterVolumeSpecName: "kube-api-access-v22zs") pod "a3b05429-7096-4766-9d26-93d8b2d0af8e" (UID: "a3b05429-7096-4766-9d26-93d8b2d0af8e"). InnerVolumeSpecName "kube-api-access-v22zs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.196215 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3b05429-7096-4766-9d26-93d8b2d0af8e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a3b05429-7096-4766-9d26-93d8b2d0af8e" (UID: "a3b05429-7096-4766-9d26-93d8b2d0af8e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.238125 4754 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3b05429-7096-4766-9d26-93d8b2d0af8e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.238171 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v22zs\" (UniqueName: \"kubernetes.io/projected/a3b05429-7096-4766-9d26-93d8b2d0af8e-kube-api-access-v22zs\") on node \"crc\" DevicePath \"\"" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.238185 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3b05429-7096-4766-9d26-93d8b2d0af8e-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.277418 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3b05429-7096-4766-9d26-93d8b2d0af8e-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "a3b05429-7096-4766-9d26-93d8b2d0af8e" (UID: "a3b05429-7096-4766-9d26-93d8b2d0af8e"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.290735 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3b05429-7096-4766-9d26-93d8b2d0af8e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a3b05429-7096-4766-9d26-93d8b2d0af8e" (UID: "a3b05429-7096-4766-9d26-93d8b2d0af8e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.303600 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="dcd9c022-23d9-484d-bc2b-dddf74e5e3f9" containerName="rabbitmq" containerID="cri-o://3dfe593ba24fbe4eb931706d6c55f3e002f19127a5ceeff542df9626a8d2101f" gracePeriod=604796 Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.342873 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3b05429-7096-4766-9d26-93d8b2d0af8e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.342904 4754 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3b05429-7096-4766-9d26-93d8b2d0af8e-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.347982 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3b05429-7096-4766-9d26-93d8b2d0af8e-config-data" (OuterVolumeSpecName: "config-data") pod "a3b05429-7096-4766-9d26-93d8b2d0af8e" (UID: "a3b05429-7096-4766-9d26-93d8b2d0af8e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.350941 4754 scope.go:117] "RemoveContainer" containerID="f3877ff08edf2d29fbd68eb3b3781afd320c4b85f441f5725814a29505eab3e5" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.390846 4754 scope.go:117] "RemoveContainer" containerID="bbc3ddf58406c3b59799c93b16830b8544a4643ffe6f9c5f9c7a1054e11eb1d9" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.445916 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3b05429-7096-4766-9d26-93d8b2d0af8e-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.462094 4754 scope.go:117] "RemoveContainer" containerID="4c790b4171410d2b21af5eb83af782e23cc967ec1474ea1ee45d0194ee9deac2" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.521755 4754 scope.go:117] "RemoveContainer" containerID="0d804b028acea9c762738b0926d87d107f3d5db0fbe5e60f63e78449d2b0e360" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.521857 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:32:30 crc kubenswrapper[4754]: E0105 20:32:30.523842 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d804b028acea9c762738b0926d87d107f3d5db0fbe5e60f63e78449d2b0e360\": container with ID starting with 0d804b028acea9c762738b0926d87d107f3d5db0fbe5e60f63e78449d2b0e360 not found: ID does not exist" containerID="0d804b028acea9c762738b0926d87d107f3d5db0fbe5e60f63e78449d2b0e360" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.523886 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d804b028acea9c762738b0926d87d107f3d5db0fbe5e60f63e78449d2b0e360"} err="failed to get container status \"0d804b028acea9c762738b0926d87d107f3d5db0fbe5e60f63e78449d2b0e360\": rpc error: code = 
NotFound desc = could not find container \"0d804b028acea9c762738b0926d87d107f3d5db0fbe5e60f63e78449d2b0e360\": container with ID starting with 0d804b028acea9c762738b0926d87d107f3d5db0fbe5e60f63e78449d2b0e360 not found: ID does not exist" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.523916 4754 scope.go:117] "RemoveContainer" containerID="f3877ff08edf2d29fbd68eb3b3781afd320c4b85f441f5725814a29505eab3e5" Jan 05 20:32:30 crc kubenswrapper[4754]: E0105 20:32:30.524404 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3877ff08edf2d29fbd68eb3b3781afd320c4b85f441f5725814a29505eab3e5\": container with ID starting with f3877ff08edf2d29fbd68eb3b3781afd320c4b85f441f5725814a29505eab3e5 not found: ID does not exist" containerID="f3877ff08edf2d29fbd68eb3b3781afd320c4b85f441f5725814a29505eab3e5" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.524440 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3877ff08edf2d29fbd68eb3b3781afd320c4b85f441f5725814a29505eab3e5"} err="failed to get container status \"f3877ff08edf2d29fbd68eb3b3781afd320c4b85f441f5725814a29505eab3e5\": rpc error: code = NotFound desc = could not find container \"f3877ff08edf2d29fbd68eb3b3781afd320c4b85f441f5725814a29505eab3e5\": container with ID starting with f3877ff08edf2d29fbd68eb3b3781afd320c4b85f441f5725814a29505eab3e5 not found: ID does not exist" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.524517 4754 scope.go:117] "RemoveContainer" containerID="bbc3ddf58406c3b59799c93b16830b8544a4643ffe6f9c5f9c7a1054e11eb1d9" Jan 05 20:32:30 crc kubenswrapper[4754]: E0105 20:32:30.526377 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbc3ddf58406c3b59799c93b16830b8544a4643ffe6f9c5f9c7a1054e11eb1d9\": container with ID starting with 
bbc3ddf58406c3b59799c93b16830b8544a4643ffe6f9c5f9c7a1054e11eb1d9 not found: ID does not exist" containerID="bbc3ddf58406c3b59799c93b16830b8544a4643ffe6f9c5f9c7a1054e11eb1d9" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.526411 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbc3ddf58406c3b59799c93b16830b8544a4643ffe6f9c5f9c7a1054e11eb1d9"} err="failed to get container status \"bbc3ddf58406c3b59799c93b16830b8544a4643ffe6f9c5f9c7a1054e11eb1d9\": rpc error: code = NotFound desc = could not find container \"bbc3ddf58406c3b59799c93b16830b8544a4643ffe6f9c5f9c7a1054e11eb1d9\": container with ID starting with bbc3ddf58406c3b59799c93b16830b8544a4643ffe6f9c5f9c7a1054e11eb1d9 not found: ID does not exist" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.526425 4754 scope.go:117] "RemoveContainer" containerID="4c790b4171410d2b21af5eb83af782e23cc967ec1474ea1ee45d0194ee9deac2" Jan 05 20:32:30 crc kubenswrapper[4754]: E0105 20:32:30.530187 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c790b4171410d2b21af5eb83af782e23cc967ec1474ea1ee45d0194ee9deac2\": container with ID starting with 4c790b4171410d2b21af5eb83af782e23cc967ec1474ea1ee45d0194ee9deac2 not found: ID does not exist" containerID="4c790b4171410d2b21af5eb83af782e23cc967ec1474ea1ee45d0194ee9deac2" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.530214 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c790b4171410d2b21af5eb83af782e23cc967ec1474ea1ee45d0194ee9deac2"} err="failed to get container status \"4c790b4171410d2b21af5eb83af782e23cc967ec1474ea1ee45d0194ee9deac2\": rpc error: code = NotFound desc = could not find container \"4c790b4171410d2b21af5eb83af782e23cc967ec1474ea1ee45d0194ee9deac2\": container with ID starting with 4c790b4171410d2b21af5eb83af782e23cc967ec1474ea1ee45d0194ee9deac2 not found: ID does not 
exist" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.542353 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.549818 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:32:30 crc kubenswrapper[4754]: E0105 20:32:30.550247 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3b05429-7096-4766-9d26-93d8b2d0af8e" containerName="proxy-httpd" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.550266 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3b05429-7096-4766-9d26-93d8b2d0af8e" containerName="proxy-httpd" Jan 05 20:32:30 crc kubenswrapper[4754]: E0105 20:32:30.550281 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3b05429-7096-4766-9d26-93d8b2d0af8e" containerName="ceilometer-notification-agent" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.550301 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3b05429-7096-4766-9d26-93d8b2d0af8e" containerName="ceilometer-notification-agent" Jan 05 20:32:30 crc kubenswrapper[4754]: E0105 20:32:30.550337 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3b05429-7096-4766-9d26-93d8b2d0af8e" containerName="ceilometer-central-agent" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.550344 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3b05429-7096-4766-9d26-93d8b2d0af8e" containerName="ceilometer-central-agent" Jan 05 20:32:30 crc kubenswrapper[4754]: E0105 20:32:30.550352 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3b05429-7096-4766-9d26-93d8b2d0af8e" containerName="sg-core" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.550359 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3b05429-7096-4766-9d26-93d8b2d0af8e" containerName="sg-core" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.550642 4754 
memory_manager.go:354] "RemoveStaleState removing state" podUID="a3b05429-7096-4766-9d26-93d8b2d0af8e" containerName="ceilometer-notification-agent" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.550661 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3b05429-7096-4766-9d26-93d8b2d0af8e" containerName="sg-core" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.550680 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3b05429-7096-4766-9d26-93d8b2d0af8e" containerName="ceilometer-central-agent" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.550691 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3b05429-7096-4766-9d26-93d8b2d0af8e" containerName="proxy-httpd" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.553079 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.561686 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.562007 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.568837 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.585517 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.589486 4754 scope.go:117] "RemoveContainer" containerID="1ca93b3e456962380f07853ce3cbf0c6bfbaabb69bd3827ae3e68631c1bd1572" Jan 05 20:32:30 crc kubenswrapper[4754]: E0105 20:32:30.589856 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.597377 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-2" podUID="79eddb76-2d9c-40cc-97e7-6c186950168c" containerName="rabbitmq" containerID="cri-o://521c46edf8526b075d1653e40b8f1d1cbb08156c2a8dc728ff6dfb358d04e44b" gracePeriod=604795 Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.756487 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68c442e4-0c24-4351-84b7-ccda8b09ea2c-config-data\") pod \"ceilometer-0\" (UID: \"68c442e4-0c24-4351-84b7-ccda8b09ea2c\") " pod="openstack/ceilometer-0" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.756582 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/68c442e4-0c24-4351-84b7-ccda8b09ea2c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"68c442e4-0c24-4351-84b7-ccda8b09ea2c\") " pod="openstack/ceilometer-0" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.756621 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzqkp\" (UniqueName: \"kubernetes.io/projected/68c442e4-0c24-4351-84b7-ccda8b09ea2c-kube-api-access-rzqkp\") pod \"ceilometer-0\" (UID: \"68c442e4-0c24-4351-84b7-ccda8b09ea2c\") " pod="openstack/ceilometer-0" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.756649 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/68c442e4-0c24-4351-84b7-ccda8b09ea2c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"68c442e4-0c24-4351-84b7-ccda8b09ea2c\") " pod="openstack/ceilometer-0" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.756688 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/68c442e4-0c24-4351-84b7-ccda8b09ea2c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"68c442e4-0c24-4351-84b7-ccda8b09ea2c\") " pod="openstack/ceilometer-0" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.756727 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68c442e4-0c24-4351-84b7-ccda8b09ea2c-run-httpd\") pod \"ceilometer-0\" (UID: \"68c442e4-0c24-4351-84b7-ccda8b09ea2c\") " pod="openstack/ceilometer-0" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.756755 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68c442e4-0c24-4351-84b7-ccda8b09ea2c-log-httpd\") pod \"ceilometer-0\" (UID: \"68c442e4-0c24-4351-84b7-ccda8b09ea2c\") " pod="openstack/ceilometer-0" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.756797 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68c442e4-0c24-4351-84b7-ccda8b09ea2c-scripts\") pod \"ceilometer-0\" (UID: \"68c442e4-0c24-4351-84b7-ccda8b09ea2c\") " pod="openstack/ceilometer-0" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.859520 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68c442e4-0c24-4351-84b7-ccda8b09ea2c-scripts\") pod \"ceilometer-0\" (UID: \"68c442e4-0c24-4351-84b7-ccda8b09ea2c\") " pod="openstack/ceilometer-0" Jan 05 
20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.860500 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68c442e4-0c24-4351-84b7-ccda8b09ea2c-config-data\") pod \"ceilometer-0\" (UID: \"68c442e4-0c24-4351-84b7-ccda8b09ea2c\") " pod="openstack/ceilometer-0" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.860687 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/68c442e4-0c24-4351-84b7-ccda8b09ea2c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"68c442e4-0c24-4351-84b7-ccda8b09ea2c\") " pod="openstack/ceilometer-0" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.860808 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzqkp\" (UniqueName: \"kubernetes.io/projected/68c442e4-0c24-4351-84b7-ccda8b09ea2c-kube-api-access-rzqkp\") pod \"ceilometer-0\" (UID: \"68c442e4-0c24-4351-84b7-ccda8b09ea2c\") " pod="openstack/ceilometer-0" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.860914 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68c442e4-0c24-4351-84b7-ccda8b09ea2c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"68c442e4-0c24-4351-84b7-ccda8b09ea2c\") " pod="openstack/ceilometer-0" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.861312 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/68c442e4-0c24-4351-84b7-ccda8b09ea2c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"68c442e4-0c24-4351-84b7-ccda8b09ea2c\") " pod="openstack/ceilometer-0" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.861440 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/68c442e4-0c24-4351-84b7-ccda8b09ea2c-run-httpd\") pod \"ceilometer-0\" (UID: \"68c442e4-0c24-4351-84b7-ccda8b09ea2c\") " pod="openstack/ceilometer-0" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.861543 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68c442e4-0c24-4351-84b7-ccda8b09ea2c-log-httpd\") pod \"ceilometer-0\" (UID: \"68c442e4-0c24-4351-84b7-ccda8b09ea2c\") " pod="openstack/ceilometer-0" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.862330 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68c442e4-0c24-4351-84b7-ccda8b09ea2c-log-httpd\") pod \"ceilometer-0\" (UID: \"68c442e4-0c24-4351-84b7-ccda8b09ea2c\") " pod="openstack/ceilometer-0" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.862705 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68c442e4-0c24-4351-84b7-ccda8b09ea2c-run-httpd\") pod \"ceilometer-0\" (UID: \"68c442e4-0c24-4351-84b7-ccda8b09ea2c\") " pod="openstack/ceilometer-0" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.865191 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/68c442e4-0c24-4351-84b7-ccda8b09ea2c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"68c442e4-0c24-4351-84b7-ccda8b09ea2c\") " pod="openstack/ceilometer-0" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.865401 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68c442e4-0c24-4351-84b7-ccda8b09ea2c-scripts\") pod \"ceilometer-0\" (UID: \"68c442e4-0c24-4351-84b7-ccda8b09ea2c\") " pod="openstack/ceilometer-0" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.865445 4754 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/68c442e4-0c24-4351-84b7-ccda8b09ea2c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"68c442e4-0c24-4351-84b7-ccda8b09ea2c\") " pod="openstack/ceilometer-0" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.866028 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68c442e4-0c24-4351-84b7-ccda8b09ea2c-config-data\") pod \"ceilometer-0\" (UID: \"68c442e4-0c24-4351-84b7-ccda8b09ea2c\") " pod="openstack/ceilometer-0" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.873416 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68c442e4-0c24-4351-84b7-ccda8b09ea2c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"68c442e4-0c24-4351-84b7-ccda8b09ea2c\") " pod="openstack/ceilometer-0" Jan 05 20:32:30 crc kubenswrapper[4754]: I0105 20:32:30.879585 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzqkp\" (UniqueName: \"kubernetes.io/projected/68c442e4-0c24-4351-84b7-ccda8b09ea2c-kube-api-access-rzqkp\") pod \"ceilometer-0\" (UID: \"68c442e4-0c24-4351-84b7-ccda8b09ea2c\") " pod="openstack/ceilometer-0" Jan 05 20:32:31 crc kubenswrapper[4754]: I0105 20:32:31.175487 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 05 20:32:31 crc kubenswrapper[4754]: I0105 20:32:31.607083 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3b05429-7096-4766-9d26-93d8b2d0af8e" path="/var/lib/kubelet/pods/a3b05429-7096-4766-9d26-93d8b2d0af8e/volumes" Jan 05 20:32:31 crc kubenswrapper[4754]: I0105 20:32:31.710331 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 05 20:32:31 crc kubenswrapper[4754]: W0105 20:32:31.735252 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68c442e4_0c24_4351_84b7_ccda8b09ea2c.slice/crio-ad8511284761c73c239b3bd8aae5f774c351b23440397830246244ca36f359cc WatchSource:0}: Error finding container ad8511284761c73c239b3bd8aae5f774c351b23440397830246244ca36f359cc: Status 404 returned error can't find the container with id ad8511284761c73c239b3bd8aae5f774c351b23440397830246244ca36f359cc Jan 05 20:32:32 crc kubenswrapper[4754]: I0105 20:32:32.199669 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68c442e4-0c24-4351-84b7-ccda8b09ea2c","Type":"ContainerStarted","Data":"ad8511284761c73c239b3bd8aae5f774c351b23440397830246244ca36f359cc"} Jan 05 20:32:37 crc kubenswrapper[4754]: I0105 20:32:37.528269 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="79eddb76-2d9c-40cc-97e7-6c186950168c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.128:5671: connect: connection refused" Jan 05 20:32:37 crc kubenswrapper[4754]: I0105 20:32:37.607499 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="dcd9c022-23d9-484d-bc2b-dddf74e5e3f9" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Jan 05 20:32:38 crc kubenswrapper[4754]: I0105 20:32:38.295474 4754 
generic.go:334] "Generic (PLEG): container finished" podID="79eddb76-2d9c-40cc-97e7-6c186950168c" containerID="521c46edf8526b075d1653e40b8f1d1cbb08156c2a8dc728ff6dfb358d04e44b" exitCode=0 Jan 05 20:32:38 crc kubenswrapper[4754]: I0105 20:32:38.295961 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"79eddb76-2d9c-40cc-97e7-6c186950168c","Type":"ContainerDied","Data":"521c46edf8526b075d1653e40b8f1d1cbb08156c2a8dc728ff6dfb358d04e44b"} Jan 05 20:32:38 crc kubenswrapper[4754]: I0105 20:32:38.300955 4754 generic.go:334] "Generic (PLEG): container finished" podID="dcd9c022-23d9-484d-bc2b-dddf74e5e3f9" containerID="3dfe593ba24fbe4eb931706d6c55f3e002f19127a5ceeff542df9626a8d2101f" exitCode=0 Jan 05 20:32:38 crc kubenswrapper[4754]: I0105 20:32:38.301014 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9","Type":"ContainerDied","Data":"3dfe593ba24fbe4eb931706d6c55f3e002f19127a5ceeff542df9626a8d2101f"} Jan 05 20:32:38 crc kubenswrapper[4754]: I0105 20:32:38.606890 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-ld2ss"] Jan 05 20:32:38 crc kubenswrapper[4754]: I0105 20:32:38.609062 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-594cb89c79-ld2ss" Jan 05 20:32:38 crc kubenswrapper[4754]: I0105 20:32:38.611466 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 05 20:32:38 crc kubenswrapper[4754]: I0105 20:32:38.632058 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-ld2ss"] Jan 05 20:32:38 crc kubenswrapper[4754]: I0105 20:32:38.717850 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvpcb\" (UniqueName: \"kubernetes.io/projected/b2099ca9-824d-4e4c-850a-417c6335f7ba-kube-api-access-wvpcb\") pod \"dnsmasq-dns-594cb89c79-ld2ss\" (UID: \"b2099ca9-824d-4e4c-850a-417c6335f7ba\") " pod="openstack/dnsmasq-dns-594cb89c79-ld2ss" Jan 05 20:32:38 crc kubenswrapper[4754]: I0105 20:32:38.717901 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b2099ca9-824d-4e4c-850a-417c6335f7ba-ovsdbserver-nb\") pod \"dnsmasq-dns-594cb89c79-ld2ss\" (UID: \"b2099ca9-824d-4e4c-850a-417c6335f7ba\") " pod="openstack/dnsmasq-dns-594cb89c79-ld2ss" Jan 05 20:32:38 crc kubenswrapper[4754]: I0105 20:32:38.717928 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b2099ca9-824d-4e4c-850a-417c6335f7ba-dns-swift-storage-0\") pod \"dnsmasq-dns-594cb89c79-ld2ss\" (UID: \"b2099ca9-824d-4e4c-850a-417c6335f7ba\") " pod="openstack/dnsmasq-dns-594cb89c79-ld2ss" Jan 05 20:32:38 crc kubenswrapper[4754]: I0105 20:32:38.717971 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b2099ca9-824d-4e4c-850a-417c6335f7ba-ovsdbserver-sb\") pod \"dnsmasq-dns-594cb89c79-ld2ss\" (UID: 
\"b2099ca9-824d-4e4c-850a-417c6335f7ba\") " pod="openstack/dnsmasq-dns-594cb89c79-ld2ss" Jan 05 20:32:38 crc kubenswrapper[4754]: I0105 20:32:38.718061 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2099ca9-824d-4e4c-850a-417c6335f7ba-config\") pod \"dnsmasq-dns-594cb89c79-ld2ss\" (UID: \"b2099ca9-824d-4e4c-850a-417c6335f7ba\") " pod="openstack/dnsmasq-dns-594cb89c79-ld2ss" Jan 05 20:32:38 crc kubenswrapper[4754]: I0105 20:32:38.718106 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b2099ca9-824d-4e4c-850a-417c6335f7ba-openstack-edpm-ipam\") pod \"dnsmasq-dns-594cb89c79-ld2ss\" (UID: \"b2099ca9-824d-4e4c-850a-417c6335f7ba\") " pod="openstack/dnsmasq-dns-594cb89c79-ld2ss" Jan 05 20:32:38 crc kubenswrapper[4754]: I0105 20:32:38.718140 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2099ca9-824d-4e4c-850a-417c6335f7ba-dns-svc\") pod \"dnsmasq-dns-594cb89c79-ld2ss\" (UID: \"b2099ca9-824d-4e4c-850a-417c6335f7ba\") " pod="openstack/dnsmasq-dns-594cb89c79-ld2ss" Jan 05 20:32:38 crc kubenswrapper[4754]: I0105 20:32:38.824414 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvpcb\" (UniqueName: \"kubernetes.io/projected/b2099ca9-824d-4e4c-850a-417c6335f7ba-kube-api-access-wvpcb\") pod \"dnsmasq-dns-594cb89c79-ld2ss\" (UID: \"b2099ca9-824d-4e4c-850a-417c6335f7ba\") " pod="openstack/dnsmasq-dns-594cb89c79-ld2ss" Jan 05 20:32:38 crc kubenswrapper[4754]: I0105 20:32:38.846185 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b2099ca9-824d-4e4c-850a-417c6335f7ba-ovsdbserver-nb\") pod \"dnsmasq-dns-594cb89c79-ld2ss\" (UID: 
\"b2099ca9-824d-4e4c-850a-417c6335f7ba\") " pod="openstack/dnsmasq-dns-594cb89c79-ld2ss" Jan 05 20:32:38 crc kubenswrapper[4754]: I0105 20:32:38.846451 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b2099ca9-824d-4e4c-850a-417c6335f7ba-dns-swift-storage-0\") pod \"dnsmasq-dns-594cb89c79-ld2ss\" (UID: \"b2099ca9-824d-4e4c-850a-417c6335f7ba\") " pod="openstack/dnsmasq-dns-594cb89c79-ld2ss" Jan 05 20:32:38 crc kubenswrapper[4754]: I0105 20:32:38.846598 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b2099ca9-824d-4e4c-850a-417c6335f7ba-ovsdbserver-sb\") pod \"dnsmasq-dns-594cb89c79-ld2ss\" (UID: \"b2099ca9-824d-4e4c-850a-417c6335f7ba\") " pod="openstack/dnsmasq-dns-594cb89c79-ld2ss" Jan 05 20:32:38 crc kubenswrapper[4754]: I0105 20:32:38.846890 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2099ca9-824d-4e4c-850a-417c6335f7ba-config\") pod \"dnsmasq-dns-594cb89c79-ld2ss\" (UID: \"b2099ca9-824d-4e4c-850a-417c6335f7ba\") " pod="openstack/dnsmasq-dns-594cb89c79-ld2ss" Jan 05 20:32:38 crc kubenswrapper[4754]: I0105 20:32:38.846994 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b2099ca9-824d-4e4c-850a-417c6335f7ba-openstack-edpm-ipam\") pod \"dnsmasq-dns-594cb89c79-ld2ss\" (UID: \"b2099ca9-824d-4e4c-850a-417c6335f7ba\") " pod="openstack/dnsmasq-dns-594cb89c79-ld2ss" Jan 05 20:32:38 crc kubenswrapper[4754]: I0105 20:32:38.847069 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2099ca9-824d-4e4c-850a-417c6335f7ba-dns-svc\") pod \"dnsmasq-dns-594cb89c79-ld2ss\" (UID: \"b2099ca9-824d-4e4c-850a-417c6335f7ba\") " 
pod="openstack/dnsmasq-dns-594cb89c79-ld2ss" Jan 05 20:32:38 crc kubenswrapper[4754]: I0105 20:32:38.848533 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2099ca9-824d-4e4c-850a-417c6335f7ba-dns-svc\") pod \"dnsmasq-dns-594cb89c79-ld2ss\" (UID: \"b2099ca9-824d-4e4c-850a-417c6335f7ba\") " pod="openstack/dnsmasq-dns-594cb89c79-ld2ss" Jan 05 20:32:38 crc kubenswrapper[4754]: I0105 20:32:38.848839 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b2099ca9-824d-4e4c-850a-417c6335f7ba-ovsdbserver-sb\") pod \"dnsmasq-dns-594cb89c79-ld2ss\" (UID: \"b2099ca9-824d-4e4c-850a-417c6335f7ba\") " pod="openstack/dnsmasq-dns-594cb89c79-ld2ss" Jan 05 20:32:38 crc kubenswrapper[4754]: I0105 20:32:38.849115 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b2099ca9-824d-4e4c-850a-417c6335f7ba-ovsdbserver-nb\") pod \"dnsmasq-dns-594cb89c79-ld2ss\" (UID: \"b2099ca9-824d-4e4c-850a-417c6335f7ba\") " pod="openstack/dnsmasq-dns-594cb89c79-ld2ss" Jan 05 20:32:38 crc kubenswrapper[4754]: I0105 20:32:38.849148 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2099ca9-824d-4e4c-850a-417c6335f7ba-config\") pod \"dnsmasq-dns-594cb89c79-ld2ss\" (UID: \"b2099ca9-824d-4e4c-850a-417c6335f7ba\") " pod="openstack/dnsmasq-dns-594cb89c79-ld2ss" Jan 05 20:32:38 crc kubenswrapper[4754]: I0105 20:32:38.849879 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b2099ca9-824d-4e4c-850a-417c6335f7ba-openstack-edpm-ipam\") pod \"dnsmasq-dns-594cb89c79-ld2ss\" (UID: \"b2099ca9-824d-4e4c-850a-417c6335f7ba\") " pod="openstack/dnsmasq-dns-594cb89c79-ld2ss" Jan 05 20:32:38 crc kubenswrapper[4754]: I0105 20:32:38.850349 4754 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b2099ca9-824d-4e4c-850a-417c6335f7ba-dns-swift-storage-0\") pod \"dnsmasq-dns-594cb89c79-ld2ss\" (UID: \"b2099ca9-824d-4e4c-850a-417c6335f7ba\") " pod="openstack/dnsmasq-dns-594cb89c79-ld2ss" Jan 05 20:32:38 crc kubenswrapper[4754]: I0105 20:32:38.853301 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvpcb\" (UniqueName: \"kubernetes.io/projected/b2099ca9-824d-4e4c-850a-417c6335f7ba-kube-api-access-wvpcb\") pod \"dnsmasq-dns-594cb89c79-ld2ss\" (UID: \"b2099ca9-824d-4e4c-850a-417c6335f7ba\") " pod="openstack/dnsmasq-dns-594cb89c79-ld2ss" Jan 05 20:32:38 crc kubenswrapper[4754]: I0105 20:32:38.946863 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-594cb89c79-ld2ss" Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.330758 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.336815 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.412431 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-plugins-conf\") pod \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\" (UID: \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\") " Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.412508 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/79eddb76-2d9c-40cc-97e7-6c186950168c-rabbitmq-confd\") pod \"79eddb76-2d9c-40cc-97e7-6c186950168c\" (UID: \"79eddb76-2d9c-40cc-97e7-6c186950168c\") " Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.412568 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-erlang-cookie-secret\") pod \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\" (UID: \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\") " Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.412660 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/79eddb76-2d9c-40cc-97e7-6c186950168c-erlang-cookie-secret\") pod \"79eddb76-2d9c-40cc-97e7-6c186950168c\" (UID: \"79eddb76-2d9c-40cc-97e7-6c186950168c\") " Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.412746 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/79eddb76-2d9c-40cc-97e7-6c186950168c-plugins-conf\") pod \"79eddb76-2d9c-40cc-97e7-6c186950168c\" (UID: \"79eddb76-2d9c-40cc-97e7-6c186950168c\") " Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.412795 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-bx79f\" (UniqueName: \"kubernetes.io/projected/79eddb76-2d9c-40cc-97e7-6c186950168c-kube-api-access-bx79f\") pod \"79eddb76-2d9c-40cc-97e7-6c186950168c\" (UID: \"79eddb76-2d9c-40cc-97e7-6c186950168c\") " Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.412827 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/79eddb76-2d9c-40cc-97e7-6c186950168c-server-conf\") pod \"79eddb76-2d9c-40cc-97e7-6c186950168c\" (UID: \"79eddb76-2d9c-40cc-97e7-6c186950168c\") " Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.412895 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2njdp\" (UniqueName: \"kubernetes.io/projected/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-kube-api-access-2njdp\") pod \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\" (UID: \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\") " Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.412954 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-config-data\") pod \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\" (UID: \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\") " Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.413005 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-rabbitmq-plugins\") pod \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\" (UID: \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\") " Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.413049 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-rabbitmq-tls\") pod \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\" (UID: \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\") " Jan 
05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.413134 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/79eddb76-2d9c-40cc-97e7-6c186950168c-pod-info\") pod \"79eddb76-2d9c-40cc-97e7-6c186950168c\" (UID: \"79eddb76-2d9c-40cc-97e7-6c186950168c\") " Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.413817 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a317a64c-a170-4999-a409-142244ef1018\") pod \"79eddb76-2d9c-40cc-97e7-6c186950168c\" (UID: \"79eddb76-2d9c-40cc-97e7-6c186950168c\") " Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.413899 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-rabbitmq-erlang-cookie\") pod \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\" (UID: \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\") " Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.414218 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-941303c6-1188-432c-b6ab-fa01413459d1\") pod \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\" (UID: \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\") " Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.414283 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-rabbitmq-confd\") pod \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\" (UID: \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\") " Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.415074 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/79eddb76-2d9c-40cc-97e7-6c186950168c-rabbitmq-plugins\") pod \"79eddb76-2d9c-40cc-97e7-6c186950168c\" (UID: \"79eddb76-2d9c-40cc-97e7-6c186950168c\") " Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.415152 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-pod-info\") pod \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\" (UID: \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\") " Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.415195 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-server-conf\") pod \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\" (UID: \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\") " Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.415240 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/79eddb76-2d9c-40cc-97e7-6c186950168c-config-data\") pod \"79eddb76-2d9c-40cc-97e7-6c186950168c\" (UID: \"79eddb76-2d9c-40cc-97e7-6c186950168c\") " Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.415287 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/79eddb76-2d9c-40cc-97e7-6c186950168c-rabbitmq-erlang-cookie\") pod \"79eddb76-2d9c-40cc-97e7-6c186950168c\" (UID: \"79eddb76-2d9c-40cc-97e7-6c186950168c\") " Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.416451 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"79eddb76-2d9c-40cc-97e7-6c186950168c","Type":"ContainerDied","Data":"5a4be78a9f38f0affd5d6693b41466cd4534b932e6f4d08e41c14ef0c26916be"} Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.416579 4754 scope.go:117] "RemoveContainer" 
containerID="521c46edf8526b075d1653e40b8f1d1cbb08156c2a8dc728ff6dfb358d04e44b" Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.416748 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.416899 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/79eddb76-2d9c-40cc-97e7-6c186950168c-rabbitmq-tls\") pod \"79eddb76-2d9c-40cc-97e7-6c186950168c\" (UID: \"79eddb76-2d9c-40cc-97e7-6c186950168c\") " Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.417507 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "dcd9c022-23d9-484d-bc2b-dddf74e5e3f9" (UID: "dcd9c022-23d9-484d-bc2b-dddf74e5e3f9"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.418196 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "dcd9c022-23d9-484d-bc2b-dddf74e5e3f9" (UID: "dcd9c022-23d9-484d-bc2b-dddf74e5e3f9"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.427987 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/79eddb76-2d9c-40cc-97e7-6c186950168c-pod-info" (OuterVolumeSpecName: "pod-info") pod "79eddb76-2d9c-40cc-97e7-6c186950168c" (UID: "79eddb76-2d9c-40cc-97e7-6c186950168c"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.434636 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79eddb76-2d9c-40cc-97e7-6c186950168c-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "79eddb76-2d9c-40cc-97e7-6c186950168c" (UID: "79eddb76-2d9c-40cc-97e7-6c186950168c"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.435523 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "dcd9c022-23d9-484d-bc2b-dddf74e5e3f9" (UID: "dcd9c022-23d9-484d-bc2b-dddf74e5e3f9"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.442894 4754 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.442929 4754 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/79eddb76-2d9c-40cc-97e7-6c186950168c-pod-info\") on node \"crc\" DevicePath \"\"" Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.442944 4754 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.442958 4754 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-plugins-conf\") on 
node \"crc\" DevicePath \"\"" Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.442971 4754 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/79eddb76-2d9c-40cc-97e7-6c186950168c-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.448313 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79eddb76-2d9c-40cc-97e7-6c186950168c-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "79eddb76-2d9c-40cc-97e7-6c186950168c" (UID: "79eddb76-2d9c-40cc-97e7-6c186950168c"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.451696 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79eddb76-2d9c-40cc-97e7-6c186950168c-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "79eddb76-2d9c-40cc-97e7-6c186950168c" (UID: "79eddb76-2d9c-40cc-97e7-6c186950168c"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.452076 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79eddb76-2d9c-40cc-97e7-6c186950168c-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "79eddb76-2d9c-40cc-97e7-6c186950168c" (UID: "79eddb76-2d9c-40cc-97e7-6c186950168c"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.458479 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-pod-info" (OuterVolumeSpecName: "pod-info") pod "dcd9c022-23d9-484d-bc2b-dddf74e5e3f9" (UID: "dcd9c022-23d9-484d-bc2b-dddf74e5e3f9"). 
InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.463594 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "dcd9c022-23d9-484d-bc2b-dddf74e5e3f9" (UID: "dcd9c022-23d9-484d-bc2b-dddf74e5e3f9"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.467979 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9","Type":"ContainerDied","Data":"c0f682775338f75427619b23aca1a7442be4cb48fc973b09076885d051d840a3"} Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.468128 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.474806 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79eddb76-2d9c-40cc-97e7-6c186950168c-kube-api-access-bx79f" (OuterVolumeSpecName: "kube-api-access-bx79f") pod "79eddb76-2d9c-40cc-97e7-6c186950168c" (UID: "79eddb76-2d9c-40cc-97e7-6c186950168c"). InnerVolumeSpecName "kube-api-access-bx79f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.483978 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-kube-api-access-2njdp" (OuterVolumeSpecName: "kube-api-access-2njdp") pod "dcd9c022-23d9-484d-bc2b-dddf74e5e3f9" (UID: "dcd9c022-23d9-484d-bc2b-dddf74e5e3f9"). InnerVolumeSpecName "kube-api-access-2njdp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.542187 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "dcd9c022-23d9-484d-bc2b-dddf74e5e3f9" (UID: "dcd9c022-23d9-484d-bc2b-dddf74e5e3f9"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.543851 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-941303c6-1188-432c-b6ab-fa01413459d1" (OuterVolumeSpecName: "persistence") pod "dcd9c022-23d9-484d-bc2b-dddf74e5e3f9" (UID: "dcd9c022-23d9-484d-bc2b-dddf74e5e3f9"). InnerVolumeSpecName "pvc-941303c6-1188-432c-b6ab-fa01413459d1". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.544003 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79eddb76-2d9c-40cc-97e7-6c186950168c-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "79eddb76-2d9c-40cc-97e7-6c186950168c" (UID: "79eddb76-2d9c-40cc-97e7-6c186950168c"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:32:41 crc kubenswrapper[4754]: E0105 20:32:41.548157 4754 reconciler_common.go:156] "operationExecutor.UnmountVolume failed (controllerAttachDetachEnabled true) for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-941303c6-1188-432c-b6ab-fa01413459d1\") pod \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\" (UID: \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\") : UnmountVolume.NewUnmounter failed for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-941303c6-1188-432c-b6ab-fa01413459d1\") pod \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\" (UID: \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\") : kubernetes.io/csi: unmounter failed to load volume data file [/var/lib/kubelet/pods/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9/volumes/kubernetes.io~csi/pvc-941303c6-1188-432c-b6ab-fa01413459d1/mount]: kubernetes.io/csi: failed to open volume data file [/var/lib/kubelet/pods/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9/volumes/kubernetes.io~csi/pvc-941303c6-1188-432c-b6ab-fa01413459d1/vol_data.json]: open /var/lib/kubelet/pods/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9/volumes/kubernetes.io~csi/pvc-941303c6-1188-432c-b6ab-fa01413459d1/vol_data.json: no such file or directory" err="UnmountVolume.NewUnmounter failed for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-941303c6-1188-432c-b6ab-fa01413459d1\") pod \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\" (UID: \"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9\") : kubernetes.io/csi: unmounter failed to load volume data file [/var/lib/kubelet/pods/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9/volumes/kubernetes.io~csi/pvc-941303c6-1188-432c-b6ab-fa01413459d1/mount]: kubernetes.io/csi: failed to open volume data file [/var/lib/kubelet/pods/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9/volumes/kubernetes.io~csi/pvc-941303c6-1188-432c-b6ab-fa01413459d1/vol_data.json]: open 
/var/lib/kubelet/pods/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9/volumes/kubernetes.io~csi/pvc-941303c6-1188-432c-b6ab-fa01413459d1/vol_data.json: no such file or directory" Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.549801 4754 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.549829 4754 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-941303c6-1188-432c-b6ab-fa01413459d1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-941303c6-1188-432c-b6ab-fa01413459d1\") on node \"crc\" " Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.549841 4754 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/79eddb76-2d9c-40cc-97e7-6c186950168c-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.549852 4754 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-pod-info\") on node \"crc\" DevicePath \"\"" Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.549861 4754 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/79eddb76-2d9c-40cc-97e7-6c186950168c-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.549870 4754 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/79eddb76-2d9c-40cc-97e7-6c186950168c-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.549878 4754 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.549886 4754 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/79eddb76-2d9c-40cc-97e7-6c186950168c-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.549895 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bx79f\" (UniqueName: \"kubernetes.io/projected/79eddb76-2d9c-40cc-97e7-6c186950168c-kube-api-access-bx79f\") on node \"crc\" DevicePath \"\"" Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.549903 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2njdp\" (UniqueName: \"kubernetes.io/projected/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-kube-api-access-2njdp\") on node \"crc\" DevicePath \"\"" Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.563999 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-config-data" (OuterVolumeSpecName: "config-data") pod "dcd9c022-23d9-484d-bc2b-dddf74e5e3f9" (UID: "dcd9c022-23d9-484d-bc2b-dddf74e5e3f9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.576771 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a317a64c-a170-4999-a409-142244ef1018" (OuterVolumeSpecName: "persistence") pod "79eddb76-2d9c-40cc-97e7-6c186950168c" (UID: "79eddb76-2d9c-40cc-97e7-6c186950168c"). InnerVolumeSpecName "pvc-a317a64c-a170-4999-a409-142244ef1018". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.616393 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79eddb76-2d9c-40cc-97e7-6c186950168c-config-data" (OuterVolumeSpecName: "config-data") pod "79eddb76-2d9c-40cc-97e7-6c186950168c" (UID: "79eddb76-2d9c-40cc-97e7-6c186950168c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.624079 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79eddb76-2d9c-40cc-97e7-6c186950168c-server-conf" (OuterVolumeSpecName: "server-conf") pod "79eddb76-2d9c-40cc-97e7-6c186950168c" (UID: "79eddb76-2d9c-40cc-97e7-6c186950168c"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.633090 4754 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.633394 4754 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-941303c6-1188-432c-b6ab-fa01413459d1" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-941303c6-1188-432c-b6ab-fa01413459d1") on node "crc" Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.654689 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.659801 4754 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-a317a64c-a170-4999-a409-142244ef1018\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a317a64c-a170-4999-a409-142244ef1018\") on node \"crc\" " Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.659824 4754 reconciler_common.go:293] "Volume detached for volume \"pvc-941303c6-1188-432c-b6ab-fa01413459d1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-941303c6-1188-432c-b6ab-fa01413459d1\") on node \"crc\" DevicePath \"\"" Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.659840 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/79eddb76-2d9c-40cc-97e7-6c186950168c-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.659850 4754 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/79eddb76-2d9c-40cc-97e7-6c186950168c-server-conf\") on node \"crc\" DevicePath \"\"" Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.686670 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-server-conf" (OuterVolumeSpecName: "server-conf") pod 
"dcd9c022-23d9-484d-bc2b-dddf74e5e3f9" (UID: "dcd9c022-23d9-484d-bc2b-dddf74e5e3f9"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.738085 4754 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.738418 4754 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-a317a64c-a170-4999-a409-142244ef1018" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a317a64c-a170-4999-a409-142244ef1018") on node "crc" Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.753236 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79eddb76-2d9c-40cc-97e7-6c186950168c-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "79eddb76-2d9c-40cc-97e7-6c186950168c" (UID: "79eddb76-2d9c-40cc-97e7-6c186950168c"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.763344 4754 reconciler_common.go:293] "Volume detached for volume \"pvc-a317a64c-a170-4999-a409-142244ef1018\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a317a64c-a170-4999-a409-142244ef1018\") on node \"crc\" DevicePath \"\"" Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.763377 4754 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-server-conf\") on node \"crc\" DevicePath \"\"" Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.763392 4754 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/79eddb76-2d9c-40cc-97e7-6c186950168c-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.784769 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "dcd9c022-23d9-484d-bc2b-dddf74e5e3f9" (UID: "dcd9c022-23d9-484d-bc2b-dddf74e5e3f9"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:32:41 crc kubenswrapper[4754]: I0105 20:32:41.868060 4754 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.078959 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.103831 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.125504 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Jan 05 20:32:42 crc kubenswrapper[4754]: E0105 20:32:42.126204 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79eddb76-2d9c-40cc-97e7-6c186950168c" containerName="rabbitmq" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.126220 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="79eddb76-2d9c-40cc-97e7-6c186950168c" containerName="rabbitmq" Jan 05 20:32:42 crc kubenswrapper[4754]: E0105 20:32:42.126242 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79eddb76-2d9c-40cc-97e7-6c186950168c" containerName="setup-container" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.126249 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="79eddb76-2d9c-40cc-97e7-6c186950168c" containerName="setup-container" Jan 05 20:32:42 crc kubenswrapper[4754]: E0105 20:32:42.126263 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcd9c022-23d9-484d-bc2b-dddf74e5e3f9" containerName="setup-container" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.126269 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcd9c022-23d9-484d-bc2b-dddf74e5e3f9" containerName="setup-container" Jan 05 20:32:42 crc kubenswrapper[4754]: 
E0105 20:32:42.126282 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcd9c022-23d9-484d-bc2b-dddf74e5e3f9" containerName="rabbitmq" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.126454 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcd9c022-23d9-484d-bc2b-dddf74e5e3f9" containerName="rabbitmq" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.127086 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="79eddb76-2d9c-40cc-97e7-6c186950168c" containerName="rabbitmq" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.127106 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcd9c022-23d9-484d-bc2b-dddf74e5e3f9" containerName="rabbitmq" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.129724 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.156542 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.181690 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cca29317-4bf0-40f2-aeff-1cbb68fb9cd2-server-conf\") pod \"rabbitmq-server-2\" (UID: \"cca29317-4bf0-40f2-aeff-1cbb68fb9cd2\") " pod="openstack/rabbitmq-server-2" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.184652 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.185235 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cca29317-4bf0-40f2-aeff-1cbb68fb9cd2-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"cca29317-4bf0-40f2-aeff-1cbb68fb9cd2\") " pod="openstack/rabbitmq-server-2" Jan 05 
20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.185425 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cca29317-4bf0-40f2-aeff-1cbb68fb9cd2-pod-info\") pod \"rabbitmq-server-2\" (UID: \"cca29317-4bf0-40f2-aeff-1cbb68fb9cd2\") " pod="openstack/rabbitmq-server-2" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.185639 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cca29317-4bf0-40f2-aeff-1cbb68fb9cd2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"cca29317-4bf0-40f2-aeff-1cbb68fb9cd2\") " pod="openstack/rabbitmq-server-2" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.185678 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cca29317-4bf0-40f2-aeff-1cbb68fb9cd2-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"cca29317-4bf0-40f2-aeff-1cbb68fb9cd2\") " pod="openstack/rabbitmq-server-2" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.185793 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cca29317-4bf0-40f2-aeff-1cbb68fb9cd2-config-data\") pod \"rabbitmq-server-2\" (UID: \"cca29317-4bf0-40f2-aeff-1cbb68fb9cd2\") " pod="openstack/rabbitmq-server-2" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.185927 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cca29317-4bf0-40f2-aeff-1cbb68fb9cd2-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"cca29317-4bf0-40f2-aeff-1cbb68fb9cd2\") " pod="openstack/rabbitmq-server-2" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.186008 4754 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cca29317-4bf0-40f2-aeff-1cbb68fb9cd2-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"cca29317-4bf0-40f2-aeff-1cbb68fb9cd2\") " pod="openstack/rabbitmq-server-2" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.186049 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a317a64c-a170-4999-a409-142244ef1018\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a317a64c-a170-4999-a409-142244ef1018\") pod \"rabbitmq-server-2\" (UID: \"cca29317-4bf0-40f2-aeff-1cbb68fb9cd2\") " pod="openstack/rabbitmq-server-2" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.186263 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf86h\" (UniqueName: \"kubernetes.io/projected/cca29317-4bf0-40f2-aeff-1cbb68fb9cd2-kube-api-access-cf86h\") pod \"rabbitmq-server-2\" (UID: \"cca29317-4bf0-40f2-aeff-1cbb68fb9cd2\") " pod="openstack/rabbitmq-server-2" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.186329 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cca29317-4bf0-40f2-aeff-1cbb68fb9cd2-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"cca29317-4bf0-40f2-aeff-1cbb68fb9cd2\") " pod="openstack/rabbitmq-server-2" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.211759 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.235914 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.239225 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.241196 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.241360 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.241405 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-2d8nv" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.241503 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.241890 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.242000 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.243223 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.263963 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.288550 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cca29317-4bf0-40f2-aeff-1cbb68fb9cd2-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"cca29317-4bf0-40f2-aeff-1cbb68fb9cd2\") " pod="openstack/rabbitmq-server-2" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.288640 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-941303c6-1188-432c-b6ab-fa01413459d1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-941303c6-1188-432c-b6ab-fa01413459d1\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad41aacf-9d0a-42e2-b3cf-de51001540e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.288663 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cca29317-4bf0-40f2-aeff-1cbb68fb9cd2-pod-info\") pod \"rabbitmq-server-2\" (UID: \"cca29317-4bf0-40f2-aeff-1cbb68fb9cd2\") " pod="openstack/rabbitmq-server-2" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.288715 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ad41aacf-9d0a-42e2-b3cf-de51001540e2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad41aacf-9d0a-42e2-b3cf-de51001540e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.288766 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cca29317-4bf0-40f2-aeff-1cbb68fb9cd2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"cca29317-4bf0-40f2-aeff-1cbb68fb9cd2\") " pod="openstack/rabbitmq-server-2" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.288785 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ad41aacf-9d0a-42e2-b3cf-de51001540e2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad41aacf-9d0a-42e2-b3cf-de51001540e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.288804 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/cca29317-4bf0-40f2-aeff-1cbb68fb9cd2-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"cca29317-4bf0-40f2-aeff-1cbb68fb9cd2\") " pod="openstack/rabbitmq-server-2" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.288822 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cca29317-4bf0-40f2-aeff-1cbb68fb9cd2-config-data\") pod \"rabbitmq-server-2\" (UID: \"cca29317-4bf0-40f2-aeff-1cbb68fb9cd2\") " pod="openstack/rabbitmq-server-2" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.288850 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cca29317-4bf0-40f2-aeff-1cbb68fb9cd2-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"cca29317-4bf0-40f2-aeff-1cbb68fb9cd2\") " pod="openstack/rabbitmq-server-2" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.288868 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ad41aacf-9d0a-42e2-b3cf-de51001540e2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad41aacf-9d0a-42e2-b3cf-de51001540e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.288893 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ad41aacf-9d0a-42e2-b3cf-de51001540e2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad41aacf-9d0a-42e2-b3cf-de51001540e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.288912 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cca29317-4bf0-40f2-aeff-1cbb68fb9cd2-rabbitmq-tls\") 
pod \"rabbitmq-server-2\" (UID: \"cca29317-4bf0-40f2-aeff-1cbb68fb9cd2\") " pod="openstack/rabbitmq-server-2" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.288932 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a317a64c-a170-4999-a409-142244ef1018\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a317a64c-a170-4999-a409-142244ef1018\") pod \"rabbitmq-server-2\" (UID: \"cca29317-4bf0-40f2-aeff-1cbb68fb9cd2\") " pod="openstack/rabbitmq-server-2" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.288957 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ad41aacf-9d0a-42e2-b3cf-de51001540e2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad41aacf-9d0a-42e2-b3cf-de51001540e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.288983 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ad41aacf-9d0a-42e2-b3cf-de51001540e2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad41aacf-9d0a-42e2-b3cf-de51001540e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.289069 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ad41aacf-9d0a-42e2-b3cf-de51001540e2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad41aacf-9d0a-42e2-b3cf-de51001540e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.289101 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf86h\" (UniqueName: \"kubernetes.io/projected/cca29317-4bf0-40f2-aeff-1cbb68fb9cd2-kube-api-access-cf86h\") pod \"rabbitmq-server-2\" 
(UID: \"cca29317-4bf0-40f2-aeff-1cbb68fb9cd2\") " pod="openstack/rabbitmq-server-2" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.289121 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cca29317-4bf0-40f2-aeff-1cbb68fb9cd2-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"cca29317-4bf0-40f2-aeff-1cbb68fb9cd2\") " pod="openstack/rabbitmq-server-2" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.289139 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ad41aacf-9d0a-42e2-b3cf-de51001540e2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad41aacf-9d0a-42e2-b3cf-de51001540e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.289177 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49fs2\" (UniqueName: \"kubernetes.io/projected/ad41aacf-9d0a-42e2-b3cf-de51001540e2-kube-api-access-49fs2\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad41aacf-9d0a-42e2-b3cf-de51001540e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.289202 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ad41aacf-9d0a-42e2-b3cf-de51001540e2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad41aacf-9d0a-42e2-b3cf-de51001540e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.289218 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cca29317-4bf0-40f2-aeff-1cbb68fb9cd2-server-conf\") pod \"rabbitmq-server-2\" (UID: \"cca29317-4bf0-40f2-aeff-1cbb68fb9cd2\") " 
pod="openstack/rabbitmq-server-2" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.290460 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cca29317-4bf0-40f2-aeff-1cbb68fb9cd2-server-conf\") pod \"rabbitmq-server-2\" (UID: \"cca29317-4bf0-40f2-aeff-1cbb68fb9cd2\") " pod="openstack/rabbitmq-server-2" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.296572 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cca29317-4bf0-40f2-aeff-1cbb68fb9cd2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"cca29317-4bf0-40f2-aeff-1cbb68fb9cd2\") " pod="openstack/rabbitmq-server-2" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.296924 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cca29317-4bf0-40f2-aeff-1cbb68fb9cd2-config-data\") pod \"rabbitmq-server-2\" (UID: \"cca29317-4bf0-40f2-aeff-1cbb68fb9cd2\") " pod="openstack/rabbitmq-server-2" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.299200 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cca29317-4bf0-40f2-aeff-1cbb68fb9cd2-pod-info\") pod \"rabbitmq-server-2\" (UID: \"cca29317-4bf0-40f2-aeff-1cbb68fb9cd2\") " pod="openstack/rabbitmq-server-2" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.299661 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cca29317-4bf0-40f2-aeff-1cbb68fb9cd2-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"cca29317-4bf0-40f2-aeff-1cbb68fb9cd2\") " pod="openstack/rabbitmq-server-2" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.299807 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/cca29317-4bf0-40f2-aeff-1cbb68fb9cd2-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"cca29317-4bf0-40f2-aeff-1cbb68fb9cd2\") " pod="openstack/rabbitmq-server-2" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.299879 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cca29317-4bf0-40f2-aeff-1cbb68fb9cd2-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"cca29317-4bf0-40f2-aeff-1cbb68fb9cd2\") " pod="openstack/rabbitmq-server-2" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.304034 4754 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.304221 4754 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a317a64c-a170-4999-a409-142244ef1018\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a317a64c-a170-4999-a409-142244ef1018\") pod \"rabbitmq-server-2\" (UID: \"cca29317-4bf0-40f2-aeff-1cbb68fb9cd2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/bddf136c5148c00231b8778925e81fb3b19f5c884c1d09dc7062258446bf5b2f/globalmount\"" pod="openstack/rabbitmq-server-2" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.304733 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cca29317-4bf0-40f2-aeff-1cbb68fb9cd2-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"cca29317-4bf0-40f2-aeff-1cbb68fb9cd2\") " pod="openstack/rabbitmq-server-2" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.310485 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf86h\" (UniqueName: \"kubernetes.io/projected/cca29317-4bf0-40f2-aeff-1cbb68fb9cd2-kube-api-access-cf86h\") pod \"rabbitmq-server-2\" (UID: 
\"cca29317-4bf0-40f2-aeff-1cbb68fb9cd2\") " pod="openstack/rabbitmq-server-2" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.311076 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cca29317-4bf0-40f2-aeff-1cbb68fb9cd2-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"cca29317-4bf0-40f2-aeff-1cbb68fb9cd2\") " pod="openstack/rabbitmq-server-2" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.392171 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-941303c6-1188-432c-b6ab-fa01413459d1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-941303c6-1188-432c-b6ab-fa01413459d1\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad41aacf-9d0a-42e2-b3cf-de51001540e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.393153 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ad41aacf-9d0a-42e2-b3cf-de51001540e2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad41aacf-9d0a-42e2-b3cf-de51001540e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.393276 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ad41aacf-9d0a-42e2-b3cf-de51001540e2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad41aacf-9d0a-42e2-b3cf-de51001540e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.393386 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ad41aacf-9d0a-42e2-b3cf-de51001540e2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad41aacf-9d0a-42e2-b3cf-de51001540e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 
05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.393487 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ad41aacf-9d0a-42e2-b3cf-de51001540e2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad41aacf-9d0a-42e2-b3cf-de51001540e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.393589 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ad41aacf-9d0a-42e2-b3cf-de51001540e2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad41aacf-9d0a-42e2-b3cf-de51001540e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.393689 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ad41aacf-9d0a-42e2-b3cf-de51001540e2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad41aacf-9d0a-42e2-b3cf-de51001540e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.393805 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ad41aacf-9d0a-42e2-b3cf-de51001540e2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad41aacf-9d0a-42e2-b3cf-de51001540e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.393901 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ad41aacf-9d0a-42e2-b3cf-de51001540e2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad41aacf-9d0a-42e2-b3cf-de51001540e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.394001 4754 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-49fs2\" (UniqueName: \"kubernetes.io/projected/ad41aacf-9d0a-42e2-b3cf-de51001540e2-kube-api-access-49fs2\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad41aacf-9d0a-42e2-b3cf-de51001540e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.394092 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ad41aacf-9d0a-42e2-b3cf-de51001540e2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad41aacf-9d0a-42e2-b3cf-de51001540e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.395278 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ad41aacf-9d0a-42e2-b3cf-de51001540e2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad41aacf-9d0a-42e2-b3cf-de51001540e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.397554 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ad41aacf-9d0a-42e2-b3cf-de51001540e2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad41aacf-9d0a-42e2-b3cf-de51001540e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.398063 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ad41aacf-9d0a-42e2-b3cf-de51001540e2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad41aacf-9d0a-42e2-b3cf-de51001540e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.398792 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/ad41aacf-9d0a-42e2-b3cf-de51001540e2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad41aacf-9d0a-42e2-b3cf-de51001540e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.399372 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ad41aacf-9d0a-42e2-b3cf-de51001540e2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad41aacf-9d0a-42e2-b3cf-de51001540e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.399582 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ad41aacf-9d0a-42e2-b3cf-de51001540e2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad41aacf-9d0a-42e2-b3cf-de51001540e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.400923 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ad41aacf-9d0a-42e2-b3cf-de51001540e2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad41aacf-9d0a-42e2-b3cf-de51001540e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.418884 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49fs2\" (UniqueName: \"kubernetes.io/projected/ad41aacf-9d0a-42e2-b3cf-de51001540e2-kube-api-access-49fs2\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad41aacf-9d0a-42e2-b3cf-de51001540e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.420044 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ad41aacf-9d0a-42e2-b3cf-de51001540e2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"ad41aacf-9d0a-42e2-b3cf-de51001540e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.421906 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ad41aacf-9d0a-42e2-b3cf-de51001540e2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad41aacf-9d0a-42e2-b3cf-de51001540e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.424587 4754 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.424633 4754 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-941303c6-1188-432c-b6ab-fa01413459d1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-941303c6-1188-432c-b6ab-fa01413459d1\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad41aacf-9d0a-42e2-b3cf-de51001540e2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/beb702a1bac8a1e4b135b92121e9b2f89f2a81cf17aea314f0809e88e0e10ffd/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.437499 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a317a64c-a170-4999-a409-142244ef1018\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a317a64c-a170-4999-a409-142244ef1018\") pod \"rabbitmq-server-2\" (UID: \"cca29317-4bf0-40f2-aeff-1cbb68fb9cd2\") " pod="openstack/rabbitmq-server-2" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.464476 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.512182 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-941303c6-1188-432c-b6ab-fa01413459d1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-941303c6-1188-432c-b6ab-fa01413459d1\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad41aacf-9d0a-42e2-b3cf-de51001540e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:32:42 crc kubenswrapper[4754]: I0105 20:32:42.571491 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:32:43 crc kubenswrapper[4754]: I0105 20:32:43.599315 4754 scope.go:117] "RemoveContainer" containerID="1ca93b3e456962380f07853ce3cbf0c6bfbaabb69bd3827ae3e68631c1bd1572" Jan 05 20:32:43 crc kubenswrapper[4754]: E0105 20:32:43.600006 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:32:43 crc kubenswrapper[4754]: I0105 20:32:43.601401 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79eddb76-2d9c-40cc-97e7-6c186950168c" path="/var/lib/kubelet/pods/79eddb76-2d9c-40cc-97e7-6c186950168c/volumes" Jan 05 20:32:43 crc kubenswrapper[4754]: I0105 20:32:43.702926 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd9c022-23d9-484d-bc2b-dddf74e5e3f9" path="/var/lib/kubelet/pods/dcd9c022-23d9-484d-bc2b-dddf74e5e3f9/volumes" Jan 05 20:32:47 crc kubenswrapper[4754]: E0105 20:32:47.811620 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: 
context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Jan 05 20:32:47 crc kubenswrapper[4754]: E0105 20:32:47.811892 4754 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Jan 05 20:32:47 crc kubenswrapper[4754]: E0105 20:32:47.812015 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6r66k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:I
fNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-w78td_openstack(4bc297f2-024c-45f7-97f5-7c360061a2c2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 05 20:32:47 crc kubenswrapper[4754]: E0105 20:32:47.813188 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-w78td" podUID="4bc297f2-024c-45f7-97f5-7c360061a2c2" Jan 05 20:32:48 crc kubenswrapper[4754]: E0105 20:32:48.562471 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-w78td" podUID="4bc297f2-024c-45f7-97f5-7c360061a2c2" Jan 05 20:32:49 crc kubenswrapper[4754]: I0105 20:32:49.478541 4754 scope.go:117] "RemoveContainer" containerID="36fc8028157ef4a2ce0b7b8c867c77cbcc311d587be7596e32628c0d80935d81" Jan 05 20:32:49 crc kubenswrapper[4754]: I0105 20:32:49.917183 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-ld2ss"] Jan 05 20:32:49 crc kubenswrapper[4754]: I0105 20:32:49.977642 4754 scope.go:117] "RemoveContainer" containerID="3dfe593ba24fbe4eb931706d6c55f3e002f19127a5ceeff542df9626a8d2101f" Jan 05 20:32:49 crc 
kubenswrapper[4754]: E0105 20:32:49.979336 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Jan 05 20:32:49 crc kubenswrapper[4754]: E0105 20:32:49.979394 4754 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Jan 05 20:32:49 crc kubenswrapper[4754]: E0105 20:32:49.979597 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5b5h7bhfbh86h575h56h566h696hch97h645hddh5fch5b6h5fdhddhd8h5dh5cbh58bh84hffh669h7fh698h5fbh9fh8chf7hcbhd4h94q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rzqkp,ReadOnly:t
rue,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(68c442e4-0c24-4351-84b7-ccda8b09ea2c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 05 20:32:50 crc kubenswrapper[4754]: I0105 20:32:50.175610 4754 scope.go:117] "RemoveContainer" containerID="233156993cda4d7cb82013b273c7957d5d9b756b648ed616dabcd1e40d5d32e1" Jan 05 20:32:50 crc kubenswrapper[4754]: W0105 20:32:50.452552 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcca29317_4bf0_40f2_aeff_1cbb68fb9cd2.slice/crio-8cfaa14c9bb79584d704f8e2fb6bd79247ec12d38ed757c7acf44e688714d7e7 WatchSource:0}: Error finding container 8cfaa14c9bb79584d704f8e2fb6bd79247ec12d38ed757c7acf44e688714d7e7: Status 404 returned error can't find the container with id 8cfaa14c9bb79584d704f8e2fb6bd79247ec12d38ed757c7acf44e688714d7e7 Jan 05 20:32:50 crc kubenswrapper[4754]: I0105 20:32:50.453712 4754 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 05 20:32:50 crc kubenswrapper[4754]: I0105 20:32:50.560274 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 05 20:32:50 crc kubenswrapper[4754]: I0105 20:32:50.589106 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ad41aacf-9d0a-42e2-b3cf-de51001540e2","Type":"ContainerStarted","Data":"4455e4eb5c3ec16554088d6af7e4662ac4dac8c5c9d60ba93c33e612b4b3bd02"} Jan 05 20:32:50 crc kubenswrapper[4754]: I0105 20:32:50.595805 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"cca29317-4bf0-40f2-aeff-1cbb68fb9cd2","Type":"ContainerStarted","Data":"8cfaa14c9bb79584d704f8e2fb6bd79247ec12d38ed757c7acf44e688714d7e7"} Jan 05 20:32:50 crc kubenswrapper[4754]: I0105 20:32:50.597270 4754 generic.go:334] "Generic (PLEG): container finished" podID="b2099ca9-824d-4e4c-850a-417c6335f7ba" containerID="2a82618040d7d0e501cae03edc8c2b1f652b1a19123f196b57936a222f00dc3e" exitCode=0 Jan 05 20:32:50 crc kubenswrapper[4754]: I0105 20:32:50.597358 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-ld2ss" event={"ID":"b2099ca9-824d-4e4c-850a-417c6335f7ba","Type":"ContainerDied","Data":"2a82618040d7d0e501cae03edc8c2b1f652b1a19123f196b57936a222f00dc3e"} Jan 05 20:32:50 crc kubenswrapper[4754]: I0105 20:32:50.597385 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-ld2ss" event={"ID":"b2099ca9-824d-4e4c-850a-417c6335f7ba","Type":"ContainerStarted","Data":"7d4e6ab50d129d1685766f7daeaea3cacb848ebc708627f565fb759825c6eeb4"} Jan 05 20:32:51 crc kubenswrapper[4754]: I0105 20:32:51.628465 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"68c442e4-0c24-4351-84b7-ccda8b09ea2c","Type":"ContainerStarted","Data":"9b2bf97c0b6fa37ff8f4e00712e29b30f306d1611f53dd0b1b0626db030fe77a"} Jan 05 20:32:51 crc kubenswrapper[4754]: I0105 20:32:51.631118 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-ld2ss" event={"ID":"b2099ca9-824d-4e4c-850a-417c6335f7ba","Type":"ContainerStarted","Data":"2cd42d3d4bca142673cb0c224dc95fd76e5798603717a358a358646421d72408"} Jan 05 20:32:51 crc kubenswrapper[4754]: I0105 20:32:51.631270 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-594cb89c79-ld2ss" Jan 05 20:32:51 crc kubenswrapper[4754]: I0105 20:32:51.664949 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-594cb89c79-ld2ss" podStartSLOduration=13.664927736 podStartE2EDuration="13.664927736s" podCreationTimestamp="2026-01-05 20:32:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:32:51.654839501 +0000 UTC m=+1658.364023375" watchObservedRunningTime="2026-01-05 20:32:51.664927736 +0000 UTC m=+1658.374111610" Jan 05 20:32:52 crc kubenswrapper[4754]: I0105 20:32:52.651471 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68c442e4-0c24-4351-84b7-ccda8b09ea2c","Type":"ContainerStarted","Data":"3677dd87a87ae066bec81e414f1dbacc6f50dc1b3811747dbe629ac9be0f5018"} Jan 05 20:32:53 crc kubenswrapper[4754]: I0105 20:32:53.666905 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"cca29317-4bf0-40f2-aeff-1cbb68fb9cd2","Type":"ContainerStarted","Data":"6480ffb109e42b90275fd22e68076a0cbb4d9d13dd0c58788fe001ef11c04e99"} Jan 05 20:32:53 crc kubenswrapper[4754]: I0105 20:32:53.671658 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"ad41aacf-9d0a-42e2-b3cf-de51001540e2","Type":"ContainerStarted","Data":"aa00312dd587920111982a56c4c2fed1a124682ab598554398fca5def51eb1d0"} Jan 05 20:32:53 crc kubenswrapper[4754]: E0105 20:32:53.897880 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="68c442e4-0c24-4351-84b7-ccda8b09ea2c" Jan 05 20:32:54 crc kubenswrapper[4754]: I0105 20:32:54.689149 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68c442e4-0c24-4351-84b7-ccda8b09ea2c","Type":"ContainerStarted","Data":"5eef515aa2e25356cd507df957c4acd08ed804418409e2016b7f06f064788e0b"} Jan 05 20:32:54 crc kubenswrapper[4754]: I0105 20:32:54.690923 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 05 20:32:54 crc kubenswrapper[4754]: E0105 20:32:54.692563 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="68c442e4-0c24-4351-84b7-ccda8b09ea2c" Jan 05 20:32:55 crc kubenswrapper[4754]: E0105 20:32:55.708030 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="68c442e4-0c24-4351-84b7-ccda8b09ea2c" Jan 05 20:32:58 crc kubenswrapper[4754]: I0105 20:32:58.589612 4754 scope.go:117] "RemoveContainer" containerID="1ca93b3e456962380f07853ce3cbf0c6bfbaabb69bd3827ae3e68631c1bd1572" Jan 05 20:32:58 crc 
kubenswrapper[4754]: E0105 20:32:58.590477 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:32:58 crc kubenswrapper[4754]: I0105 20:32:58.948475 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-594cb89c79-ld2ss" Jan 05 20:32:59 crc kubenswrapper[4754]: I0105 20:32:59.045095 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-ddt4h"] Jan 05 20:32:59 crc kubenswrapper[4754]: I0105 20:32:59.045607 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d99f6bc7f-ddt4h" podUID="31e86022-3de4-4405-9bf6-47646f8368e5" containerName="dnsmasq-dns" containerID="cri-o://b41223680ee5445b5d6098152dd02cdc34889c9390673a3074029a838475e1a6" gracePeriod=10 Jan 05 20:32:59 crc kubenswrapper[4754]: I0105 20:32:59.165896 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5596c69fcc-q9vv9"] Jan 05 20:32:59 crc kubenswrapper[4754]: I0105 20:32:59.170217 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5596c69fcc-q9vv9" Jan 05 20:32:59 crc kubenswrapper[4754]: I0105 20:32:59.218091 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5596c69fcc-q9vv9"] Jan 05 20:32:59 crc kubenswrapper[4754]: I0105 20:32:59.314397 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76d33f28-9ba4-443d-b136-bd3458c56d95-dns-svc\") pod \"dnsmasq-dns-5596c69fcc-q9vv9\" (UID: \"76d33f28-9ba4-443d-b136-bd3458c56d95\") " pod="openstack/dnsmasq-dns-5596c69fcc-q9vv9" Jan 05 20:32:59 crc kubenswrapper[4754]: I0105 20:32:59.314933 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/76d33f28-9ba4-443d-b136-bd3458c56d95-dns-swift-storage-0\") pod \"dnsmasq-dns-5596c69fcc-q9vv9\" (UID: \"76d33f28-9ba4-443d-b136-bd3458c56d95\") " pod="openstack/dnsmasq-dns-5596c69fcc-q9vv9" Jan 05 20:32:59 crc kubenswrapper[4754]: I0105 20:32:59.315018 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/76d33f28-9ba4-443d-b136-bd3458c56d95-ovsdbserver-nb\") pod \"dnsmasq-dns-5596c69fcc-q9vv9\" (UID: \"76d33f28-9ba4-443d-b136-bd3458c56d95\") " pod="openstack/dnsmasq-dns-5596c69fcc-q9vv9" Jan 05 20:32:59 crc kubenswrapper[4754]: I0105 20:32:59.315239 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76d33f28-9ba4-443d-b136-bd3458c56d95-config\") pod \"dnsmasq-dns-5596c69fcc-q9vv9\" (UID: \"76d33f28-9ba4-443d-b136-bd3458c56d95\") " pod="openstack/dnsmasq-dns-5596c69fcc-q9vv9" Jan 05 20:32:59 crc kubenswrapper[4754]: I0105 20:32:59.315442 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/76d33f28-9ba4-443d-b136-bd3458c56d95-openstack-edpm-ipam\") pod \"dnsmasq-dns-5596c69fcc-q9vv9\" (UID: \"76d33f28-9ba4-443d-b136-bd3458c56d95\") " pod="openstack/dnsmasq-dns-5596c69fcc-q9vv9" Jan 05 20:32:59 crc kubenswrapper[4754]: I0105 20:32:59.315565 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss9g8\" (UniqueName: \"kubernetes.io/projected/76d33f28-9ba4-443d-b136-bd3458c56d95-kube-api-access-ss9g8\") pod \"dnsmasq-dns-5596c69fcc-q9vv9\" (UID: \"76d33f28-9ba4-443d-b136-bd3458c56d95\") " pod="openstack/dnsmasq-dns-5596c69fcc-q9vv9" Jan 05 20:32:59 crc kubenswrapper[4754]: I0105 20:32:59.315680 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/76d33f28-9ba4-443d-b136-bd3458c56d95-ovsdbserver-sb\") pod \"dnsmasq-dns-5596c69fcc-q9vv9\" (UID: \"76d33f28-9ba4-443d-b136-bd3458c56d95\") " pod="openstack/dnsmasq-dns-5596c69fcc-q9vv9" Jan 05 20:32:59 crc kubenswrapper[4754]: I0105 20:32:59.418231 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76d33f28-9ba4-443d-b136-bd3458c56d95-config\") pod \"dnsmasq-dns-5596c69fcc-q9vv9\" (UID: \"76d33f28-9ba4-443d-b136-bd3458c56d95\") " pod="openstack/dnsmasq-dns-5596c69fcc-q9vv9" Jan 05 20:32:59 crc kubenswrapper[4754]: I0105 20:32:59.418342 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/76d33f28-9ba4-443d-b136-bd3458c56d95-openstack-edpm-ipam\") pod \"dnsmasq-dns-5596c69fcc-q9vv9\" (UID: \"76d33f28-9ba4-443d-b136-bd3458c56d95\") " pod="openstack/dnsmasq-dns-5596c69fcc-q9vv9" Jan 05 20:32:59 crc kubenswrapper[4754]: I0105 20:32:59.418396 4754 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-ss9g8\" (UniqueName: \"kubernetes.io/projected/76d33f28-9ba4-443d-b136-bd3458c56d95-kube-api-access-ss9g8\") pod \"dnsmasq-dns-5596c69fcc-q9vv9\" (UID: \"76d33f28-9ba4-443d-b136-bd3458c56d95\") " pod="openstack/dnsmasq-dns-5596c69fcc-q9vv9" Jan 05 20:32:59 crc kubenswrapper[4754]: I0105 20:32:59.418446 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/76d33f28-9ba4-443d-b136-bd3458c56d95-ovsdbserver-sb\") pod \"dnsmasq-dns-5596c69fcc-q9vv9\" (UID: \"76d33f28-9ba4-443d-b136-bd3458c56d95\") " pod="openstack/dnsmasq-dns-5596c69fcc-q9vv9" Jan 05 20:32:59 crc kubenswrapper[4754]: I0105 20:32:59.418485 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76d33f28-9ba4-443d-b136-bd3458c56d95-dns-svc\") pod \"dnsmasq-dns-5596c69fcc-q9vv9\" (UID: \"76d33f28-9ba4-443d-b136-bd3458c56d95\") " pod="openstack/dnsmasq-dns-5596c69fcc-q9vv9" Jan 05 20:32:59 crc kubenswrapper[4754]: I0105 20:32:59.418507 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/76d33f28-9ba4-443d-b136-bd3458c56d95-dns-swift-storage-0\") pod \"dnsmasq-dns-5596c69fcc-q9vv9\" (UID: \"76d33f28-9ba4-443d-b136-bd3458c56d95\") " pod="openstack/dnsmasq-dns-5596c69fcc-q9vv9" Jan 05 20:32:59 crc kubenswrapper[4754]: I0105 20:32:59.418527 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/76d33f28-9ba4-443d-b136-bd3458c56d95-ovsdbserver-nb\") pod \"dnsmasq-dns-5596c69fcc-q9vv9\" (UID: \"76d33f28-9ba4-443d-b136-bd3458c56d95\") " pod="openstack/dnsmasq-dns-5596c69fcc-q9vv9" Jan 05 20:32:59 crc kubenswrapper[4754]: I0105 20:32:59.421211 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/76d33f28-9ba4-443d-b136-bd3458c56d95-dns-svc\") pod \"dnsmasq-dns-5596c69fcc-q9vv9\" (UID: \"76d33f28-9ba4-443d-b136-bd3458c56d95\") " pod="openstack/dnsmasq-dns-5596c69fcc-q9vv9" Jan 05 20:32:59 crc kubenswrapper[4754]: I0105 20:32:59.421736 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76d33f28-9ba4-443d-b136-bd3458c56d95-config\") pod \"dnsmasq-dns-5596c69fcc-q9vv9\" (UID: \"76d33f28-9ba4-443d-b136-bd3458c56d95\") " pod="openstack/dnsmasq-dns-5596c69fcc-q9vv9" Jan 05 20:32:59 crc kubenswrapper[4754]: I0105 20:32:59.427601 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/76d33f28-9ba4-443d-b136-bd3458c56d95-ovsdbserver-sb\") pod \"dnsmasq-dns-5596c69fcc-q9vv9\" (UID: \"76d33f28-9ba4-443d-b136-bd3458c56d95\") " pod="openstack/dnsmasq-dns-5596c69fcc-q9vv9" Jan 05 20:32:59 crc kubenswrapper[4754]: I0105 20:32:59.427620 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/76d33f28-9ba4-443d-b136-bd3458c56d95-openstack-edpm-ipam\") pod \"dnsmasq-dns-5596c69fcc-q9vv9\" (UID: \"76d33f28-9ba4-443d-b136-bd3458c56d95\") " pod="openstack/dnsmasq-dns-5596c69fcc-q9vv9" Jan 05 20:32:59 crc kubenswrapper[4754]: I0105 20:32:59.427721 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/76d33f28-9ba4-443d-b136-bd3458c56d95-ovsdbserver-nb\") pod \"dnsmasq-dns-5596c69fcc-q9vv9\" (UID: \"76d33f28-9ba4-443d-b136-bd3458c56d95\") " pod="openstack/dnsmasq-dns-5596c69fcc-q9vv9" Jan 05 20:32:59 crc kubenswrapper[4754]: I0105 20:32:59.428118 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/76d33f28-9ba4-443d-b136-bd3458c56d95-dns-swift-storage-0\") pod 
\"dnsmasq-dns-5596c69fcc-q9vv9\" (UID: \"76d33f28-9ba4-443d-b136-bd3458c56d95\") " pod="openstack/dnsmasq-dns-5596c69fcc-q9vv9" Jan 05 20:32:59 crc kubenswrapper[4754]: I0105 20:32:59.444714 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ss9g8\" (UniqueName: \"kubernetes.io/projected/76d33f28-9ba4-443d-b136-bd3458c56d95-kube-api-access-ss9g8\") pod \"dnsmasq-dns-5596c69fcc-q9vv9\" (UID: \"76d33f28-9ba4-443d-b136-bd3458c56d95\") " pod="openstack/dnsmasq-dns-5596c69fcc-q9vv9" Jan 05 20:32:59 crc kubenswrapper[4754]: I0105 20:32:59.559858 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5596c69fcc-q9vv9" Jan 05 20:32:59 crc kubenswrapper[4754]: I0105 20:32:59.797093 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d99f6bc7f-ddt4h" Jan 05 20:32:59 crc kubenswrapper[4754]: I0105 20:32:59.806380 4754 generic.go:334] "Generic (PLEG): container finished" podID="31e86022-3de4-4405-9bf6-47646f8368e5" containerID="b41223680ee5445b5d6098152dd02cdc34889c9390673a3074029a838475e1a6" exitCode=0 Jan 05 20:32:59 crc kubenswrapper[4754]: I0105 20:32:59.806424 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-ddt4h" event={"ID":"31e86022-3de4-4405-9bf6-47646f8368e5","Type":"ContainerDied","Data":"b41223680ee5445b5d6098152dd02cdc34889c9390673a3074029a838475e1a6"} Jan 05 20:32:59 crc kubenswrapper[4754]: I0105 20:32:59.806452 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-ddt4h" event={"ID":"31e86022-3de4-4405-9bf6-47646f8368e5","Type":"ContainerDied","Data":"afaf7edb386f95deb1953fe8b6de62e1042e1dee669b0c2b0f9c35ff873bd71c"} Jan 05 20:32:59 crc kubenswrapper[4754]: I0105 20:32:59.806482 4754 scope.go:117] "RemoveContainer" containerID="b41223680ee5445b5d6098152dd02cdc34889c9390673a3074029a838475e1a6" Jan 05 20:32:59 crc kubenswrapper[4754]: 
I0105 20:32:59.806636 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d99f6bc7f-ddt4h" Jan 05 20:32:59 crc kubenswrapper[4754]: I0105 20:32:59.859227 4754 scope.go:117] "RemoveContainer" containerID="c68dd6bb4403a7dd5c45eee54b98a2261e90ff4bb0c15f5beb8025d32683df6e" Jan 05 20:32:59 crc kubenswrapper[4754]: I0105 20:32:59.904481 4754 scope.go:117] "RemoveContainer" containerID="b41223680ee5445b5d6098152dd02cdc34889c9390673a3074029a838475e1a6" Jan 05 20:32:59 crc kubenswrapper[4754]: E0105 20:32:59.905224 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b41223680ee5445b5d6098152dd02cdc34889c9390673a3074029a838475e1a6\": container with ID starting with b41223680ee5445b5d6098152dd02cdc34889c9390673a3074029a838475e1a6 not found: ID does not exist" containerID="b41223680ee5445b5d6098152dd02cdc34889c9390673a3074029a838475e1a6" Jan 05 20:32:59 crc kubenswrapper[4754]: I0105 20:32:59.905266 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b41223680ee5445b5d6098152dd02cdc34889c9390673a3074029a838475e1a6"} err="failed to get container status \"b41223680ee5445b5d6098152dd02cdc34889c9390673a3074029a838475e1a6\": rpc error: code = NotFound desc = could not find container \"b41223680ee5445b5d6098152dd02cdc34889c9390673a3074029a838475e1a6\": container with ID starting with b41223680ee5445b5d6098152dd02cdc34889c9390673a3074029a838475e1a6 not found: ID does not exist" Jan 05 20:32:59 crc kubenswrapper[4754]: I0105 20:32:59.905315 4754 scope.go:117] "RemoveContainer" containerID="c68dd6bb4403a7dd5c45eee54b98a2261e90ff4bb0c15f5beb8025d32683df6e" Jan 05 20:32:59 crc kubenswrapper[4754]: E0105 20:32:59.906272 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c68dd6bb4403a7dd5c45eee54b98a2261e90ff4bb0c15f5beb8025d32683df6e\": container with ID starting with c68dd6bb4403a7dd5c45eee54b98a2261e90ff4bb0c15f5beb8025d32683df6e not found: ID does not exist" containerID="c68dd6bb4403a7dd5c45eee54b98a2261e90ff4bb0c15f5beb8025d32683df6e" Jan 05 20:32:59 crc kubenswrapper[4754]: I0105 20:32:59.906306 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c68dd6bb4403a7dd5c45eee54b98a2261e90ff4bb0c15f5beb8025d32683df6e"} err="failed to get container status \"c68dd6bb4403a7dd5c45eee54b98a2261e90ff4bb0c15f5beb8025d32683df6e\": rpc error: code = NotFound desc = could not find container \"c68dd6bb4403a7dd5c45eee54b98a2261e90ff4bb0c15f5beb8025d32683df6e\": container with ID starting with c68dd6bb4403a7dd5c45eee54b98a2261e90ff4bb0c15f5beb8025d32683df6e not found: ID does not exist" Jan 05 20:32:59 crc kubenswrapper[4754]: I0105 20:32:59.940458 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31e86022-3de4-4405-9bf6-47646f8368e5-dns-svc\") pod \"31e86022-3de4-4405-9bf6-47646f8368e5\" (UID: \"31e86022-3de4-4405-9bf6-47646f8368e5\") " Jan 05 20:32:59 crc kubenswrapper[4754]: I0105 20:32:59.940713 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/31e86022-3de4-4405-9bf6-47646f8368e5-dns-swift-storage-0\") pod \"31e86022-3de4-4405-9bf6-47646f8368e5\" (UID: \"31e86022-3de4-4405-9bf6-47646f8368e5\") " Jan 05 20:32:59 crc kubenswrapper[4754]: I0105 20:32:59.941586 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31e86022-3de4-4405-9bf6-47646f8368e5-config\") pod \"31e86022-3de4-4405-9bf6-47646f8368e5\" (UID: \"31e86022-3de4-4405-9bf6-47646f8368e5\") " Jan 05 20:32:59 crc kubenswrapper[4754]: I0105 20:32:59.941952 4754 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/31e86022-3de4-4405-9bf6-47646f8368e5-ovsdbserver-nb\") pod \"31e86022-3de4-4405-9bf6-47646f8368e5\" (UID: \"31e86022-3de4-4405-9bf6-47646f8368e5\") " Jan 05 20:32:59 crc kubenswrapper[4754]: I0105 20:32:59.941999 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/31e86022-3de4-4405-9bf6-47646f8368e5-ovsdbserver-sb\") pod \"31e86022-3de4-4405-9bf6-47646f8368e5\" (UID: \"31e86022-3de4-4405-9bf6-47646f8368e5\") " Jan 05 20:32:59 crc kubenswrapper[4754]: I0105 20:32:59.942085 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bflln\" (UniqueName: \"kubernetes.io/projected/31e86022-3de4-4405-9bf6-47646f8368e5-kube-api-access-bflln\") pod \"31e86022-3de4-4405-9bf6-47646f8368e5\" (UID: \"31e86022-3de4-4405-9bf6-47646f8368e5\") " Jan 05 20:32:59 crc kubenswrapper[4754]: I0105 20:32:59.948512 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31e86022-3de4-4405-9bf6-47646f8368e5-kube-api-access-bflln" (OuterVolumeSpecName: "kube-api-access-bflln") pod "31e86022-3de4-4405-9bf6-47646f8368e5" (UID: "31e86022-3de4-4405-9bf6-47646f8368e5"). InnerVolumeSpecName "kube-api-access-bflln". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:32:59 crc kubenswrapper[4754]: I0105 20:32:59.995654 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31e86022-3de4-4405-9bf6-47646f8368e5-config" (OuterVolumeSpecName: "config") pod "31e86022-3de4-4405-9bf6-47646f8368e5" (UID: "31e86022-3de4-4405-9bf6-47646f8368e5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:32:59 crc kubenswrapper[4754]: I0105 20:32:59.995753 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31e86022-3de4-4405-9bf6-47646f8368e5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "31e86022-3de4-4405-9bf6-47646f8368e5" (UID: "31e86022-3de4-4405-9bf6-47646f8368e5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:33:00 crc kubenswrapper[4754]: I0105 20:33:00.004221 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31e86022-3de4-4405-9bf6-47646f8368e5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "31e86022-3de4-4405-9bf6-47646f8368e5" (UID: "31e86022-3de4-4405-9bf6-47646f8368e5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:33:00 crc kubenswrapper[4754]: I0105 20:33:00.004935 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31e86022-3de4-4405-9bf6-47646f8368e5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "31e86022-3de4-4405-9bf6-47646f8368e5" (UID: "31e86022-3de4-4405-9bf6-47646f8368e5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:33:00 crc kubenswrapper[4754]: I0105 20:33:00.012972 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31e86022-3de4-4405-9bf6-47646f8368e5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "31e86022-3de4-4405-9bf6-47646f8368e5" (UID: "31e86022-3de4-4405-9bf6-47646f8368e5"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:33:00 crc kubenswrapper[4754]: I0105 20:33:00.045168 4754 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/31e86022-3de4-4405-9bf6-47646f8368e5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 05 20:33:00 crc kubenswrapper[4754]: I0105 20:33:00.045203 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31e86022-3de4-4405-9bf6-47646f8368e5-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:33:00 crc kubenswrapper[4754]: I0105 20:33:00.045212 4754 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/31e86022-3de4-4405-9bf6-47646f8368e5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 05 20:33:00 crc kubenswrapper[4754]: I0105 20:33:00.045221 4754 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/31e86022-3de4-4405-9bf6-47646f8368e5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 05 20:33:00 crc kubenswrapper[4754]: I0105 20:33:00.045230 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bflln\" (UniqueName: \"kubernetes.io/projected/31e86022-3de4-4405-9bf6-47646f8368e5-kube-api-access-bflln\") on node \"crc\" DevicePath \"\"" Jan 05 20:33:00 crc kubenswrapper[4754]: I0105 20:33:00.045239 4754 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31e86022-3de4-4405-9bf6-47646f8368e5-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 20:33:00 crc kubenswrapper[4754]: I0105 20:33:00.151027 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-ddt4h"] Jan 05 20:33:00 crc kubenswrapper[4754]: I0105 20:33:00.184232 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-ddt4h"] Jan 05 
20:33:00 crc kubenswrapper[4754]: I0105 20:33:00.197050 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5596c69fcc-q9vv9"] Jan 05 20:33:00 crc kubenswrapper[4754]: I0105 20:33:00.818755 4754 generic.go:334] "Generic (PLEG): container finished" podID="76d33f28-9ba4-443d-b136-bd3458c56d95" containerID="41d920b64f8db0614a2f5f8a2f3e7a00c53e0687cc1e709b397442cbe55c1ab8" exitCode=0 Jan 05 20:33:00 crc kubenswrapper[4754]: I0105 20:33:00.818811 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5596c69fcc-q9vv9" event={"ID":"76d33f28-9ba4-443d-b136-bd3458c56d95","Type":"ContainerDied","Data":"41d920b64f8db0614a2f5f8a2f3e7a00c53e0687cc1e709b397442cbe55c1ab8"} Jan 05 20:33:00 crc kubenswrapper[4754]: I0105 20:33:00.819056 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5596c69fcc-q9vv9" event={"ID":"76d33f28-9ba4-443d-b136-bd3458c56d95","Type":"ContainerStarted","Data":"8a2ad13b3dde1966507f363563a5fe36c44adae504dc3a2321637c42464ec364"} Jan 05 20:33:01 crc kubenswrapper[4754]: I0105 20:33:01.607379 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31e86022-3de4-4405-9bf6-47646f8368e5" path="/var/lib/kubelet/pods/31e86022-3de4-4405-9bf6-47646f8368e5/volumes" Jan 05 20:33:01 crc kubenswrapper[4754]: I0105 20:33:01.841606 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5596c69fcc-q9vv9" event={"ID":"76d33f28-9ba4-443d-b136-bd3458c56d95","Type":"ContainerStarted","Data":"58f42dfcc0f1db8db227511dd25e64d06ab786cb0c1f520da35647e5b29d712b"} Jan 05 20:33:01 crc kubenswrapper[4754]: I0105 20:33:01.844486 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5596c69fcc-q9vv9" Jan 05 20:33:01 crc kubenswrapper[4754]: I0105 20:33:01.870865 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5596c69fcc-q9vv9" podStartSLOduration=2.870849506 
podStartE2EDuration="2.870849506s" podCreationTimestamp="2026-01-05 20:32:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:33:01.866606215 +0000 UTC m=+1668.575790099" watchObservedRunningTime="2026-01-05 20:33:01.870849506 +0000 UTC m=+1668.580033380" Jan 05 20:33:03 crc kubenswrapper[4754]: I0105 20:33:03.886035 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-w78td" event={"ID":"4bc297f2-024c-45f7-97f5-7c360061a2c2","Type":"ContainerStarted","Data":"856a77bc69e2a0060d199de59976b99551fdf13ac1799133226c91ff4851cd00"} Jan 05 20:33:03 crc kubenswrapper[4754]: I0105 20:33:03.916765 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-w78td" podStartSLOduration=3.047716695 podStartE2EDuration="41.916732215s" podCreationTimestamp="2026-01-05 20:32:22 +0000 UTC" firstStartedPulling="2026-01-05 20:32:23.921870513 +0000 UTC m=+1630.631054407" lastFinishedPulling="2026-01-05 20:33:02.790886043 +0000 UTC m=+1669.500069927" observedRunningTime="2026-01-05 20:33:03.91232948 +0000 UTC m=+1670.621513364" watchObservedRunningTime="2026-01-05 20:33:03.916732215 +0000 UTC m=+1670.625916099" Jan 05 20:33:05 crc kubenswrapper[4754]: I0105 20:33:05.315672 4754 scope.go:117] "RemoveContainer" containerID="e3b601f5b9c17bf008e26967bd4ce50d75528faf76c4af07b43a2fabc61bc22c" Jan 05 20:33:05 crc kubenswrapper[4754]: I0105 20:33:05.348421 4754 scope.go:117] "RemoveContainer" containerID="c0595bbd15750527785046a814e9616ce55865a4b6d1ef0785ff18150e721542" Jan 05 20:33:05 crc kubenswrapper[4754]: I0105 20:33:05.452502 4754 scope.go:117] "RemoveContainer" containerID="390f00b23e9921442a154e36532f1cc7a6b71f4d7e46a76dc4eb1cd5d9673bb8" Jan 05 20:33:05 crc kubenswrapper[4754]: I0105 20:33:05.927784 4754 generic.go:334] "Generic (PLEG): container finished" podID="4bc297f2-024c-45f7-97f5-7c360061a2c2" 
containerID="856a77bc69e2a0060d199de59976b99551fdf13ac1799133226c91ff4851cd00" exitCode=0 Jan 05 20:33:05 crc kubenswrapper[4754]: I0105 20:33:05.927858 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-w78td" event={"ID":"4bc297f2-024c-45f7-97f5-7c360061a2c2","Type":"ContainerDied","Data":"856a77bc69e2a0060d199de59976b99551fdf13ac1799133226c91ff4851cd00"} Jan 05 20:33:06 crc kubenswrapper[4754]: I0105 20:33:06.616688 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 05 20:33:07 crc kubenswrapper[4754]: I0105 20:33:07.437757 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-w78td" Jan 05 20:33:07 crc kubenswrapper[4754]: I0105 20:33:07.555542 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bc297f2-024c-45f7-97f5-7c360061a2c2-config-data\") pod \"4bc297f2-024c-45f7-97f5-7c360061a2c2\" (UID: \"4bc297f2-024c-45f7-97f5-7c360061a2c2\") " Jan 05 20:33:07 crc kubenswrapper[4754]: I0105 20:33:07.555666 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6r66k\" (UniqueName: \"kubernetes.io/projected/4bc297f2-024c-45f7-97f5-7c360061a2c2-kube-api-access-6r66k\") pod \"4bc297f2-024c-45f7-97f5-7c360061a2c2\" (UID: \"4bc297f2-024c-45f7-97f5-7c360061a2c2\") " Jan 05 20:33:07 crc kubenswrapper[4754]: I0105 20:33:07.555764 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bc297f2-024c-45f7-97f5-7c360061a2c2-combined-ca-bundle\") pod \"4bc297f2-024c-45f7-97f5-7c360061a2c2\" (UID: \"4bc297f2-024c-45f7-97f5-7c360061a2c2\") " Jan 05 20:33:07 crc kubenswrapper[4754]: I0105 20:33:07.562725 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/4bc297f2-024c-45f7-97f5-7c360061a2c2-kube-api-access-6r66k" (OuterVolumeSpecName: "kube-api-access-6r66k") pod "4bc297f2-024c-45f7-97f5-7c360061a2c2" (UID: "4bc297f2-024c-45f7-97f5-7c360061a2c2"). InnerVolumeSpecName "kube-api-access-6r66k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:33:07 crc kubenswrapper[4754]: I0105 20:33:07.591040 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bc297f2-024c-45f7-97f5-7c360061a2c2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4bc297f2-024c-45f7-97f5-7c360061a2c2" (UID: "4bc297f2-024c-45f7-97f5-7c360061a2c2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:33:07 crc kubenswrapper[4754]: I0105 20:33:07.659221 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6r66k\" (UniqueName: \"kubernetes.io/projected/4bc297f2-024c-45f7-97f5-7c360061a2c2-kube-api-access-6r66k\") on node \"crc\" DevicePath \"\"" Jan 05 20:33:07 crc kubenswrapper[4754]: I0105 20:33:07.659255 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bc297f2-024c-45f7-97f5-7c360061a2c2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:33:07 crc kubenswrapper[4754]: I0105 20:33:07.670978 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bc297f2-024c-45f7-97f5-7c360061a2c2-config-data" (OuterVolumeSpecName: "config-data") pod "4bc297f2-024c-45f7-97f5-7c360061a2c2" (UID: "4bc297f2-024c-45f7-97f5-7c360061a2c2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:33:07 crc kubenswrapper[4754]: I0105 20:33:07.765042 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bc297f2-024c-45f7-97f5-7c360061a2c2-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 20:33:07 crc kubenswrapper[4754]: I0105 20:33:07.966573 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68c442e4-0c24-4351-84b7-ccda8b09ea2c","Type":"ContainerStarted","Data":"0a547e9ff025f817135f2b3637e5774233ea414d82ed97c7e8144c6754b93c55"} Jan 05 20:33:07 crc kubenswrapper[4754]: I0105 20:33:07.972972 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-w78td" event={"ID":"4bc297f2-024c-45f7-97f5-7c360061a2c2","Type":"ContainerDied","Data":"995ca5b5c50f85bb6ec6df958b4da8ab409ae81c3104c47a8377657597e79be3"} Jan 05 20:33:07 crc kubenswrapper[4754]: I0105 20:33:07.973057 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="995ca5b5c50f85bb6ec6df958b4da8ab409ae81c3104c47a8377657597e79be3" Jan 05 20:33:07 crc kubenswrapper[4754]: I0105 20:33:07.973157 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-w78td" Jan 05 20:33:08 crc kubenswrapper[4754]: I0105 20:33:08.010689 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.746670231 podStartE2EDuration="38.010670434s" podCreationTimestamp="2026-01-05 20:32:30 +0000 UTC" firstStartedPulling="2026-01-05 20:32:31.756722737 +0000 UTC m=+1638.465906611" lastFinishedPulling="2026-01-05 20:33:07.02072294 +0000 UTC m=+1673.729906814" observedRunningTime="2026-01-05 20:33:08.006233247 +0000 UTC m=+1674.715417121" watchObservedRunningTime="2026-01-05 20:33:08.010670434 +0000 UTC m=+1674.719854308" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.000333 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-7c9b6fddd9-fmxb7"] Jan 05 20:33:09 crc kubenswrapper[4754]: E0105 20:33:09.001465 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31e86022-3de4-4405-9bf6-47646f8368e5" containerName="init" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.001485 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="31e86022-3de4-4405-9bf6-47646f8368e5" containerName="init" Jan 05 20:33:09 crc kubenswrapper[4754]: E0105 20:33:09.001514 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bc297f2-024c-45f7-97f5-7c360061a2c2" containerName="heat-db-sync" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.001523 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bc297f2-024c-45f7-97f5-7c360061a2c2" containerName="heat-db-sync" Jan 05 20:33:09 crc kubenswrapper[4754]: E0105 20:33:09.001545 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31e86022-3de4-4405-9bf6-47646f8368e5" containerName="dnsmasq-dns" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.001553 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="31e86022-3de4-4405-9bf6-47646f8368e5" containerName="dnsmasq-dns" Jan 05 
20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.001887 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="31e86022-3de4-4405-9bf6-47646f8368e5" containerName="dnsmasq-dns" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.001925 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bc297f2-024c-45f7-97f5-7c360061a2c2" containerName="heat-db-sync" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.003115 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7c9b6fddd9-fmxb7" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.020162 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7c9b6fddd9-fmxb7"] Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.071383 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-757fc694bb-c2pvk"] Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.073510 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-757fc694bb-c2pvk" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.093756 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-757fc694bb-c2pvk"] Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.107565 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b7b8c52-175b-4a45-8c1a-4dc90f73be4e-config-data-custom\") pod \"heat-engine-7c9b6fddd9-fmxb7\" (UID: \"9b7b8c52-175b-4a45-8c1a-4dc90f73be4e\") " pod="openstack/heat-engine-7c9b6fddd9-fmxb7" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.107979 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss4bf\" (UniqueName: \"kubernetes.io/projected/9b7b8c52-175b-4a45-8c1a-4dc90f73be4e-kube-api-access-ss4bf\") pod \"heat-engine-7c9b6fddd9-fmxb7\" (UID: \"9b7b8c52-175b-4a45-8c1a-4dc90f73be4e\") " pod="openstack/heat-engine-7c9b6fddd9-fmxb7" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.108016 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b7b8c52-175b-4a45-8c1a-4dc90f73be4e-combined-ca-bundle\") pod \"heat-engine-7c9b6fddd9-fmxb7\" (UID: \"9b7b8c52-175b-4a45-8c1a-4dc90f73be4e\") " pod="openstack/heat-engine-7c9b6fddd9-fmxb7" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.108061 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b7b8c52-175b-4a45-8c1a-4dc90f73be4e-config-data\") pod \"heat-engine-7c9b6fddd9-fmxb7\" (UID: \"9b7b8c52-175b-4a45-8c1a-4dc90f73be4e\") " pod="openstack/heat-engine-7c9b6fddd9-fmxb7" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.109729 4754 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/heat-cfnapi-59c56bf-mxnq6"] Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.112913 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-59c56bf-mxnq6" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.137077 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-59c56bf-mxnq6"] Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.210809 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2a77aee6-6114-4660-88f7-d4e86ea88421-config-data-custom\") pod \"heat-api-757fc694bb-c2pvk\" (UID: \"2a77aee6-6114-4660-88f7-d4e86ea88421\") " pod="openstack/heat-api-757fc694bb-c2pvk" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.211061 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b7b8c52-175b-4a45-8c1a-4dc90f73be4e-config-data-custom\") pod \"heat-engine-7c9b6fddd9-fmxb7\" (UID: \"9b7b8c52-175b-4a45-8c1a-4dc90f73be4e\") " pod="openstack/heat-engine-7c9b6fddd9-fmxb7" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.211979 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a77aee6-6114-4660-88f7-d4e86ea88421-internal-tls-certs\") pod \"heat-api-757fc694bb-c2pvk\" (UID: \"2a77aee6-6114-4660-88f7-d4e86ea88421\") " pod="openstack/heat-api-757fc694bb-c2pvk" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.212109 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a77aee6-6114-4660-88f7-d4e86ea88421-combined-ca-bundle\") pod \"heat-api-757fc694bb-c2pvk\" (UID: \"2a77aee6-6114-4660-88f7-d4e86ea88421\") " 
pod="openstack/heat-api-757fc694bb-c2pvk" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.212200 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab7a9fe0-e239-4353-8cc8-90199777fb33-config-data-custom\") pod \"heat-cfnapi-59c56bf-mxnq6\" (UID: \"ab7a9fe0-e239-4353-8cc8-90199777fb33\") " pod="openstack/heat-cfnapi-59c56bf-mxnq6" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.212284 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab7a9fe0-e239-4353-8cc8-90199777fb33-public-tls-certs\") pod \"heat-cfnapi-59c56bf-mxnq6\" (UID: \"ab7a9fe0-e239-4353-8cc8-90199777fb33\") " pod="openstack/heat-cfnapi-59c56bf-mxnq6" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.212400 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ss4bf\" (UniqueName: \"kubernetes.io/projected/9b7b8c52-175b-4a45-8c1a-4dc90f73be4e-kube-api-access-ss4bf\") pod \"heat-engine-7c9b6fddd9-fmxb7\" (UID: \"9b7b8c52-175b-4a45-8c1a-4dc90f73be4e\") " pod="openstack/heat-engine-7c9b6fddd9-fmxb7" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.212524 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b7b8c52-175b-4a45-8c1a-4dc90f73be4e-combined-ca-bundle\") pod \"heat-engine-7c9b6fddd9-fmxb7\" (UID: \"9b7b8c52-175b-4a45-8c1a-4dc90f73be4e\") " pod="openstack/heat-engine-7c9b6fddd9-fmxb7" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.212651 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b7b8c52-175b-4a45-8c1a-4dc90f73be4e-config-data\") pod \"heat-engine-7c9b6fddd9-fmxb7\" (UID: \"9b7b8c52-175b-4a45-8c1a-4dc90f73be4e\") " 
pod="openstack/heat-engine-7c9b6fddd9-fmxb7" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.212746 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab7a9fe0-e239-4353-8cc8-90199777fb33-internal-tls-certs\") pod \"heat-cfnapi-59c56bf-mxnq6\" (UID: \"ab7a9fe0-e239-4353-8cc8-90199777fb33\") " pod="openstack/heat-cfnapi-59c56bf-mxnq6" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.212877 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh476\" (UniqueName: \"kubernetes.io/projected/2a77aee6-6114-4660-88f7-d4e86ea88421-kube-api-access-kh476\") pod \"heat-api-757fc694bb-c2pvk\" (UID: \"2a77aee6-6114-4660-88f7-d4e86ea88421\") " pod="openstack/heat-api-757fc694bb-c2pvk" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.212971 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab7a9fe0-e239-4353-8cc8-90199777fb33-config-data\") pod \"heat-cfnapi-59c56bf-mxnq6\" (UID: \"ab7a9fe0-e239-4353-8cc8-90199777fb33\") " pod="openstack/heat-cfnapi-59c56bf-mxnq6" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.213049 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab7a9fe0-e239-4353-8cc8-90199777fb33-combined-ca-bundle\") pod \"heat-cfnapi-59c56bf-mxnq6\" (UID: \"ab7a9fe0-e239-4353-8cc8-90199777fb33\") " pod="openstack/heat-cfnapi-59c56bf-mxnq6" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.213129 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b4qq\" (UniqueName: \"kubernetes.io/projected/ab7a9fe0-e239-4353-8cc8-90199777fb33-kube-api-access-6b4qq\") pod \"heat-cfnapi-59c56bf-mxnq6\" (UID: 
\"ab7a9fe0-e239-4353-8cc8-90199777fb33\") " pod="openstack/heat-cfnapi-59c56bf-mxnq6" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.213215 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a77aee6-6114-4660-88f7-d4e86ea88421-public-tls-certs\") pod \"heat-api-757fc694bb-c2pvk\" (UID: \"2a77aee6-6114-4660-88f7-d4e86ea88421\") " pod="openstack/heat-api-757fc694bb-c2pvk" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.213288 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a77aee6-6114-4660-88f7-d4e86ea88421-config-data\") pod \"heat-api-757fc694bb-c2pvk\" (UID: \"2a77aee6-6114-4660-88f7-d4e86ea88421\") " pod="openstack/heat-api-757fc694bb-c2pvk" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.217965 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b7b8c52-175b-4a45-8c1a-4dc90f73be4e-combined-ca-bundle\") pod \"heat-engine-7c9b6fddd9-fmxb7\" (UID: \"9b7b8c52-175b-4a45-8c1a-4dc90f73be4e\") " pod="openstack/heat-engine-7c9b6fddd9-fmxb7" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.218408 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b7b8c52-175b-4a45-8c1a-4dc90f73be4e-config-data-custom\") pod \"heat-engine-7c9b6fddd9-fmxb7\" (UID: \"9b7b8c52-175b-4a45-8c1a-4dc90f73be4e\") " pod="openstack/heat-engine-7c9b6fddd9-fmxb7" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.233181 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ss4bf\" (UniqueName: \"kubernetes.io/projected/9b7b8c52-175b-4a45-8c1a-4dc90f73be4e-kube-api-access-ss4bf\") pod \"heat-engine-7c9b6fddd9-fmxb7\" (UID: 
\"9b7b8c52-175b-4a45-8c1a-4dc90f73be4e\") " pod="openstack/heat-engine-7c9b6fddd9-fmxb7" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.242806 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b7b8c52-175b-4a45-8c1a-4dc90f73be4e-config-data\") pod \"heat-engine-7c9b6fddd9-fmxb7\" (UID: \"9b7b8c52-175b-4a45-8c1a-4dc90f73be4e\") " pod="openstack/heat-engine-7c9b6fddd9-fmxb7" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.315422 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2a77aee6-6114-4660-88f7-d4e86ea88421-config-data-custom\") pod \"heat-api-757fc694bb-c2pvk\" (UID: \"2a77aee6-6114-4660-88f7-d4e86ea88421\") " pod="openstack/heat-api-757fc694bb-c2pvk" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.315498 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a77aee6-6114-4660-88f7-d4e86ea88421-internal-tls-certs\") pod \"heat-api-757fc694bb-c2pvk\" (UID: \"2a77aee6-6114-4660-88f7-d4e86ea88421\") " pod="openstack/heat-api-757fc694bb-c2pvk" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.315540 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a77aee6-6114-4660-88f7-d4e86ea88421-combined-ca-bundle\") pod \"heat-api-757fc694bb-c2pvk\" (UID: \"2a77aee6-6114-4660-88f7-d4e86ea88421\") " pod="openstack/heat-api-757fc694bb-c2pvk" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.315570 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab7a9fe0-e239-4353-8cc8-90199777fb33-config-data-custom\") pod \"heat-cfnapi-59c56bf-mxnq6\" (UID: \"ab7a9fe0-e239-4353-8cc8-90199777fb33\") " 
pod="openstack/heat-cfnapi-59c56bf-mxnq6" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.315586 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab7a9fe0-e239-4353-8cc8-90199777fb33-public-tls-certs\") pod \"heat-cfnapi-59c56bf-mxnq6\" (UID: \"ab7a9fe0-e239-4353-8cc8-90199777fb33\") " pod="openstack/heat-cfnapi-59c56bf-mxnq6" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.315650 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab7a9fe0-e239-4353-8cc8-90199777fb33-internal-tls-certs\") pod \"heat-cfnapi-59c56bf-mxnq6\" (UID: \"ab7a9fe0-e239-4353-8cc8-90199777fb33\") " pod="openstack/heat-cfnapi-59c56bf-mxnq6" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.315695 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kh476\" (UniqueName: \"kubernetes.io/projected/2a77aee6-6114-4660-88f7-d4e86ea88421-kube-api-access-kh476\") pod \"heat-api-757fc694bb-c2pvk\" (UID: \"2a77aee6-6114-4660-88f7-d4e86ea88421\") " pod="openstack/heat-api-757fc694bb-c2pvk" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.315722 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab7a9fe0-e239-4353-8cc8-90199777fb33-config-data\") pod \"heat-cfnapi-59c56bf-mxnq6\" (UID: \"ab7a9fe0-e239-4353-8cc8-90199777fb33\") " pod="openstack/heat-cfnapi-59c56bf-mxnq6" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.315738 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab7a9fe0-e239-4353-8cc8-90199777fb33-combined-ca-bundle\") pod \"heat-cfnapi-59c56bf-mxnq6\" (UID: \"ab7a9fe0-e239-4353-8cc8-90199777fb33\") " pod="openstack/heat-cfnapi-59c56bf-mxnq6" Jan 05 20:33:09 crc 
kubenswrapper[4754]: I0105 20:33:09.315757 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6b4qq\" (UniqueName: \"kubernetes.io/projected/ab7a9fe0-e239-4353-8cc8-90199777fb33-kube-api-access-6b4qq\") pod \"heat-cfnapi-59c56bf-mxnq6\" (UID: \"ab7a9fe0-e239-4353-8cc8-90199777fb33\") " pod="openstack/heat-cfnapi-59c56bf-mxnq6" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.315773 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a77aee6-6114-4660-88f7-d4e86ea88421-public-tls-certs\") pod \"heat-api-757fc694bb-c2pvk\" (UID: \"2a77aee6-6114-4660-88f7-d4e86ea88421\") " pod="openstack/heat-api-757fc694bb-c2pvk" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.315789 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a77aee6-6114-4660-88f7-d4e86ea88421-config-data\") pod \"heat-api-757fc694bb-c2pvk\" (UID: \"2a77aee6-6114-4660-88f7-d4e86ea88421\") " pod="openstack/heat-api-757fc694bb-c2pvk" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.326380 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab7a9fe0-e239-4353-8cc8-90199777fb33-public-tls-certs\") pod \"heat-cfnapi-59c56bf-mxnq6\" (UID: \"ab7a9fe0-e239-4353-8cc8-90199777fb33\") " pod="openstack/heat-cfnapi-59c56bf-mxnq6" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.329965 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-7c9b6fddd9-fmxb7" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.333159 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab7a9fe0-e239-4353-8cc8-90199777fb33-config-data-custom\") pod \"heat-cfnapi-59c56bf-mxnq6\" (UID: \"ab7a9fe0-e239-4353-8cc8-90199777fb33\") " pod="openstack/heat-cfnapi-59c56bf-mxnq6" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.333528 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a77aee6-6114-4660-88f7-d4e86ea88421-config-data\") pod \"heat-api-757fc694bb-c2pvk\" (UID: \"2a77aee6-6114-4660-88f7-d4e86ea88421\") " pod="openstack/heat-api-757fc694bb-c2pvk" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.333843 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2a77aee6-6114-4660-88f7-d4e86ea88421-config-data-custom\") pod \"heat-api-757fc694bb-c2pvk\" (UID: \"2a77aee6-6114-4660-88f7-d4e86ea88421\") " pod="openstack/heat-api-757fc694bb-c2pvk" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.335978 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kh476\" (UniqueName: \"kubernetes.io/projected/2a77aee6-6114-4660-88f7-d4e86ea88421-kube-api-access-kh476\") pod \"heat-api-757fc694bb-c2pvk\" (UID: \"2a77aee6-6114-4660-88f7-d4e86ea88421\") " pod="openstack/heat-api-757fc694bb-c2pvk" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.336690 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab7a9fe0-e239-4353-8cc8-90199777fb33-internal-tls-certs\") pod \"heat-cfnapi-59c56bf-mxnq6\" (UID: \"ab7a9fe0-e239-4353-8cc8-90199777fb33\") " pod="openstack/heat-cfnapi-59c56bf-mxnq6" Jan 05 20:33:09 crc 
kubenswrapper[4754]: I0105 20:33:09.339085 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a77aee6-6114-4660-88f7-d4e86ea88421-combined-ca-bundle\") pod \"heat-api-757fc694bb-c2pvk\" (UID: \"2a77aee6-6114-4660-88f7-d4e86ea88421\") " pod="openstack/heat-api-757fc694bb-c2pvk" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.339243 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab7a9fe0-e239-4353-8cc8-90199777fb33-combined-ca-bundle\") pod \"heat-cfnapi-59c56bf-mxnq6\" (UID: \"ab7a9fe0-e239-4353-8cc8-90199777fb33\") " pod="openstack/heat-cfnapi-59c56bf-mxnq6" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.341117 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a77aee6-6114-4660-88f7-d4e86ea88421-internal-tls-certs\") pod \"heat-api-757fc694bb-c2pvk\" (UID: \"2a77aee6-6114-4660-88f7-d4e86ea88421\") " pod="openstack/heat-api-757fc694bb-c2pvk" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.343147 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab7a9fe0-e239-4353-8cc8-90199777fb33-config-data\") pod \"heat-cfnapi-59c56bf-mxnq6\" (UID: \"ab7a9fe0-e239-4353-8cc8-90199777fb33\") " pod="openstack/heat-cfnapi-59c56bf-mxnq6" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.343941 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a77aee6-6114-4660-88f7-d4e86ea88421-public-tls-certs\") pod \"heat-api-757fc694bb-c2pvk\" (UID: \"2a77aee6-6114-4660-88f7-d4e86ea88421\") " pod="openstack/heat-api-757fc694bb-c2pvk" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.346050 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6b4qq\" (UniqueName: \"kubernetes.io/projected/ab7a9fe0-e239-4353-8cc8-90199777fb33-kube-api-access-6b4qq\") pod \"heat-cfnapi-59c56bf-mxnq6\" (UID: \"ab7a9fe0-e239-4353-8cc8-90199777fb33\") " pod="openstack/heat-cfnapi-59c56bf-mxnq6" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.398170 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-757fc694bb-c2pvk" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.472756 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-59c56bf-mxnq6" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.563497 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5596c69fcc-q9vv9" Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.682233 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-ld2ss"] Jan 05 20:33:09 crc kubenswrapper[4754]: I0105 20:33:09.682990 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-594cb89c79-ld2ss" podUID="b2099ca9-824d-4e4c-850a-417c6335f7ba" containerName="dnsmasq-dns" containerID="cri-o://2cd42d3d4bca142673cb0c224dc95fd76e5798603717a358a358646421d72408" gracePeriod=10 Jan 05 20:33:10 crc kubenswrapper[4754]: I0105 20:33:10.012809 4754 generic.go:334] "Generic (PLEG): container finished" podID="b2099ca9-824d-4e4c-850a-417c6335f7ba" containerID="2cd42d3d4bca142673cb0c224dc95fd76e5798603717a358a358646421d72408" exitCode=0 Jan 05 20:33:10 crc kubenswrapper[4754]: I0105 20:33:10.013265 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-ld2ss" event={"ID":"b2099ca9-824d-4e4c-850a-417c6335f7ba","Type":"ContainerDied","Data":"2cd42d3d4bca142673cb0c224dc95fd76e5798603717a358a358646421d72408"} Jan 05 20:33:10 crc kubenswrapper[4754]: W0105 20:33:10.034003 4754 manager.go:1169] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a77aee6_6114_4660_88f7_d4e86ea88421.slice/crio-0c2d571c6d3ab0cc5e2c80c1fe6ba46c8e2e2c7792dcb193f9490579b5473a0c WatchSource:0}: Error finding container 0c2d571c6d3ab0cc5e2c80c1fe6ba46c8e2e2c7792dcb193f9490579b5473a0c: Status 404 returned error can't find the container with id 0c2d571c6d3ab0cc5e2c80c1fe6ba46c8e2e2c7792dcb193f9490579b5473a0c Jan 05 20:33:10 crc kubenswrapper[4754]: I0105 20:33:10.034808 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-757fc694bb-c2pvk"] Jan 05 20:33:10 crc kubenswrapper[4754]: I0105 20:33:10.057370 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7c9b6fddd9-fmxb7"] Jan 05 20:33:10 crc kubenswrapper[4754]: I0105 20:33:10.195616 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-59c56bf-mxnq6"] Jan 05 20:33:10 crc kubenswrapper[4754]: I0105 20:33:10.428869 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-594cb89c79-ld2ss" Jan 05 20:33:10 crc kubenswrapper[4754]: I0105 20:33:10.554147 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2099ca9-824d-4e4c-850a-417c6335f7ba-dns-svc\") pod \"b2099ca9-824d-4e4c-850a-417c6335f7ba\" (UID: \"b2099ca9-824d-4e4c-850a-417c6335f7ba\") " Jan 05 20:33:10 crc kubenswrapper[4754]: I0105 20:33:10.554219 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b2099ca9-824d-4e4c-850a-417c6335f7ba-dns-swift-storage-0\") pod \"b2099ca9-824d-4e4c-850a-417c6335f7ba\" (UID: \"b2099ca9-824d-4e4c-850a-417c6335f7ba\") " Jan 05 20:33:10 crc kubenswrapper[4754]: I0105 20:33:10.554335 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2099ca9-824d-4e4c-850a-417c6335f7ba-config\") pod \"b2099ca9-824d-4e4c-850a-417c6335f7ba\" (UID: \"b2099ca9-824d-4e4c-850a-417c6335f7ba\") " Jan 05 20:33:10 crc kubenswrapper[4754]: I0105 20:33:10.554435 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b2099ca9-824d-4e4c-850a-417c6335f7ba-openstack-edpm-ipam\") pod \"b2099ca9-824d-4e4c-850a-417c6335f7ba\" (UID: \"b2099ca9-824d-4e4c-850a-417c6335f7ba\") " Jan 05 20:33:10 crc kubenswrapper[4754]: I0105 20:33:10.554517 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b2099ca9-824d-4e4c-850a-417c6335f7ba-ovsdbserver-nb\") pod \"b2099ca9-824d-4e4c-850a-417c6335f7ba\" (UID: \"b2099ca9-824d-4e4c-850a-417c6335f7ba\") " Jan 05 20:33:10 crc kubenswrapper[4754]: I0105 20:33:10.554572 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/b2099ca9-824d-4e4c-850a-417c6335f7ba-ovsdbserver-sb\") pod \"b2099ca9-824d-4e4c-850a-417c6335f7ba\" (UID: \"b2099ca9-824d-4e4c-850a-417c6335f7ba\") " Jan 05 20:33:10 crc kubenswrapper[4754]: I0105 20:33:10.554599 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvpcb\" (UniqueName: \"kubernetes.io/projected/b2099ca9-824d-4e4c-850a-417c6335f7ba-kube-api-access-wvpcb\") pod \"b2099ca9-824d-4e4c-850a-417c6335f7ba\" (UID: \"b2099ca9-824d-4e4c-850a-417c6335f7ba\") " Jan 05 20:33:10 crc kubenswrapper[4754]: I0105 20:33:10.572252 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2099ca9-824d-4e4c-850a-417c6335f7ba-kube-api-access-wvpcb" (OuterVolumeSpecName: "kube-api-access-wvpcb") pod "b2099ca9-824d-4e4c-850a-417c6335f7ba" (UID: "b2099ca9-824d-4e4c-850a-417c6335f7ba"). InnerVolumeSpecName "kube-api-access-wvpcb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:33:10 crc kubenswrapper[4754]: I0105 20:33:10.663102 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvpcb\" (UniqueName: \"kubernetes.io/projected/b2099ca9-824d-4e4c-850a-417c6335f7ba-kube-api-access-wvpcb\") on node \"crc\" DevicePath \"\"" Jan 05 20:33:10 crc kubenswrapper[4754]: I0105 20:33:10.667139 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2099ca9-824d-4e4c-850a-417c6335f7ba-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b2099ca9-824d-4e4c-850a-417c6335f7ba" (UID: "b2099ca9-824d-4e4c-850a-417c6335f7ba"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:33:10 crc kubenswrapper[4754]: I0105 20:33:10.677215 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2099ca9-824d-4e4c-850a-417c6335f7ba-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b2099ca9-824d-4e4c-850a-417c6335f7ba" (UID: "b2099ca9-824d-4e4c-850a-417c6335f7ba"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:33:10 crc kubenswrapper[4754]: I0105 20:33:10.691422 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2099ca9-824d-4e4c-850a-417c6335f7ba-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b2099ca9-824d-4e4c-850a-417c6335f7ba" (UID: "b2099ca9-824d-4e4c-850a-417c6335f7ba"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:33:10 crc kubenswrapper[4754]: I0105 20:33:10.692684 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2099ca9-824d-4e4c-850a-417c6335f7ba-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "b2099ca9-824d-4e4c-850a-417c6335f7ba" (UID: "b2099ca9-824d-4e4c-850a-417c6335f7ba"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:33:10 crc kubenswrapper[4754]: I0105 20:33:10.719105 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2099ca9-824d-4e4c-850a-417c6335f7ba-config" (OuterVolumeSpecName: "config") pod "b2099ca9-824d-4e4c-850a-417c6335f7ba" (UID: "b2099ca9-824d-4e4c-850a-417c6335f7ba"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:33:10 crc kubenswrapper[4754]: I0105 20:33:10.766595 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2099ca9-824d-4e4c-850a-417c6335f7ba-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b2099ca9-824d-4e4c-850a-417c6335f7ba" (UID: "b2099ca9-824d-4e4c-850a-417c6335f7ba"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:33:10 crc kubenswrapper[4754]: I0105 20:33:10.768037 4754 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b2099ca9-824d-4e4c-850a-417c6335f7ba-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 05 20:33:10 crc kubenswrapper[4754]: I0105 20:33:10.768083 4754 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b2099ca9-824d-4e4c-850a-417c6335f7ba-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 05 20:33:10 crc kubenswrapper[4754]: I0105 20:33:10.768099 4754 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2099ca9-824d-4e4c-850a-417c6335f7ba-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 20:33:10 crc kubenswrapper[4754]: I0105 20:33:10.768111 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2099ca9-824d-4e4c-850a-417c6335f7ba-config\") on node \"crc\" DevicePath \"\"" Jan 05 20:33:10 crc kubenswrapper[4754]: I0105 20:33:10.768141 4754 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b2099ca9-824d-4e4c-850a-417c6335f7ba-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 05 20:33:10 crc kubenswrapper[4754]: I0105 20:33:10.871482 4754 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/b2099ca9-824d-4e4c-850a-417c6335f7ba-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 05 20:33:11 crc kubenswrapper[4754]: I0105 20:33:11.084308 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-ld2ss" event={"ID":"b2099ca9-824d-4e4c-850a-417c6335f7ba","Type":"ContainerDied","Data":"7d4e6ab50d129d1685766f7daeaea3cacb848ebc708627f565fb759825c6eeb4"} Jan 05 20:33:11 crc kubenswrapper[4754]: I0105 20:33:11.084365 4754 scope.go:117] "RemoveContainer" containerID="2cd42d3d4bca142673cb0c224dc95fd76e5798603717a358a358646421d72408" Jan 05 20:33:11 crc kubenswrapper[4754]: I0105 20:33:11.084510 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-594cb89c79-ld2ss" Jan 05 20:33:11 crc kubenswrapper[4754]: I0105 20:33:11.092665 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-757fc694bb-c2pvk" event={"ID":"2a77aee6-6114-4660-88f7-d4e86ea88421","Type":"ContainerStarted","Data":"0c2d571c6d3ab0cc5e2c80c1fe6ba46c8e2e2c7792dcb193f9490579b5473a0c"} Jan 05 20:33:11 crc kubenswrapper[4754]: I0105 20:33:11.096070 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-59c56bf-mxnq6" event={"ID":"ab7a9fe0-e239-4353-8cc8-90199777fb33","Type":"ContainerStarted","Data":"4cf3950dd465fc18954b36d98caa2e2ff4054ff9fb4ad9de324520f74409c7fa"} Jan 05 20:33:11 crc kubenswrapper[4754]: I0105 20:33:11.098181 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7c9b6fddd9-fmxb7" event={"ID":"9b7b8c52-175b-4a45-8c1a-4dc90f73be4e","Type":"ContainerStarted","Data":"807d4bcbe12d6afe5effdf8f7e562c549ac927c338b5806fa2db9ab89d1bf7a9"} Jan 05 20:33:11 crc kubenswrapper[4754]: I0105 20:33:11.098239 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7c9b6fddd9-fmxb7" 
event={"ID":"9b7b8c52-175b-4a45-8c1a-4dc90f73be4e","Type":"ContainerStarted","Data":"b5dbdee8f25e3ac9058ace22a8cb23daa54594a649f6d8f138571ff303074a19"} Jan 05 20:33:11 crc kubenswrapper[4754]: I0105 20:33:11.125199 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-7c9b6fddd9-fmxb7" podStartSLOduration=3.125180712 podStartE2EDuration="3.125180712s" podCreationTimestamp="2026-01-05 20:33:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:33:11.122734778 +0000 UTC m=+1677.831918652" watchObservedRunningTime="2026-01-05 20:33:11.125180712 +0000 UTC m=+1677.834364586" Jan 05 20:33:11 crc kubenswrapper[4754]: I0105 20:33:11.149283 4754 scope.go:117] "RemoveContainer" containerID="2a82618040d7d0e501cae03edc8c2b1f652b1a19123f196b57936a222f00dc3e" Jan 05 20:33:11 crc kubenswrapper[4754]: I0105 20:33:11.174856 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-ld2ss"] Jan 05 20:33:11 crc kubenswrapper[4754]: I0105 20:33:11.187453 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-ld2ss"] Jan 05 20:33:11 crc kubenswrapper[4754]: I0105 20:33:11.605484 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2099ca9-824d-4e4c-850a-417c6335f7ba" path="/var/lib/kubelet/pods/b2099ca9-824d-4e4c-850a-417c6335f7ba/volumes" Jan 05 20:33:12 crc kubenswrapper[4754]: I0105 20:33:12.111834 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-7c9b6fddd9-fmxb7" Jan 05 20:33:12 crc kubenswrapper[4754]: I0105 20:33:12.589066 4754 scope.go:117] "RemoveContainer" containerID="1ca93b3e456962380f07853ce3cbf0c6bfbaabb69bd3827ae3e68631c1bd1572" Jan 05 20:33:12 crc kubenswrapper[4754]: E0105 20:33:12.590054 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:33:13 crc kubenswrapper[4754]: I0105 20:33:13.131504 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-757fc694bb-c2pvk" event={"ID":"2a77aee6-6114-4660-88f7-d4e86ea88421","Type":"ContainerStarted","Data":"c6bec45dfe06290b07294a136b161a6c9a0d8e6721266e9f0be6d3c543e90c46"} Jan 05 20:33:13 crc kubenswrapper[4754]: I0105 20:33:13.131863 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-757fc694bb-c2pvk" Jan 05 20:33:13 crc kubenswrapper[4754]: I0105 20:33:13.183722 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-59c56bf-mxnq6" event={"ID":"ab7a9fe0-e239-4353-8cc8-90199777fb33","Type":"ContainerStarted","Data":"cfeb98e742f3d3a91f267773f44e477db36a08cd81de4f1204b385648d8652bd"} Jan 05 20:33:13 crc kubenswrapper[4754]: I0105 20:33:13.183774 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-59c56bf-mxnq6" Jan 05 20:33:13 crc kubenswrapper[4754]: I0105 20:33:13.185427 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-757fc694bb-c2pvk" podStartSLOduration=1.836476551 podStartE2EDuration="4.185411109s" podCreationTimestamp="2026-01-05 20:33:09 +0000 UTC" firstStartedPulling="2026-01-05 20:33:10.037690717 +0000 UTC m=+1676.746874591" lastFinishedPulling="2026-01-05 20:33:12.386625275 +0000 UTC m=+1679.095809149" observedRunningTime="2026-01-05 20:33:13.18160534 +0000 UTC m=+1679.890789214" watchObservedRunningTime="2026-01-05 20:33:13.185411109 +0000 UTC m=+1679.894594983" Jan 05 20:33:15 crc kubenswrapper[4754]: I0105 20:33:15.944666 4754 
prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-gkzx4" podUID="c584577b-8f80-4506-9fa5-3f8e9df40f02" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 20:33:16 crc kubenswrapper[4754]: I0105 20:33:16.073150 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-5bddd4b946-b62cb" podUID="2f15c30e-3828-471c-8e71-3573735397a1" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.95:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 20:33:16 crc kubenswrapper[4754]: I0105 20:33:16.073464 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-5bddd4b946-b62cb" podUID="2f15c30e-3828-471c-8e71-3573735397a1" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.95:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 20:33:20 crc kubenswrapper[4754]: I0105 20:33:20.677902 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-757fc694bb-c2pvk" Jan 05 20:33:20 crc kubenswrapper[4754]: I0105 20:33:20.694417 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-59c56bf-mxnq6" podStartSLOduration=9.509367141 podStartE2EDuration="11.694398255s" podCreationTimestamp="2026-01-05 20:33:09 +0000 UTC" firstStartedPulling="2026-01-05 20:33:10.209732585 +0000 UTC m=+1676.918916459" lastFinishedPulling="2026-01-05 20:33:12.394763699 +0000 UTC m=+1679.103947573" observedRunningTime="2026-01-05 20:33:13.21093778 +0000 UTC m=+1679.920121654" watchObservedRunningTime="2026-01-05 20:33:20.694398255 +0000 UTC m=+1687.403582129" Jan 05 20:33:20 crc kubenswrapper[4754]: I0105 20:33:20.700228 4754 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fmncl"] Jan 05 20:33:20 crc kubenswrapper[4754]: E0105 20:33:20.701221 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2099ca9-824d-4e4c-850a-417c6335f7ba" containerName="init" Jan 05 20:33:20 crc kubenswrapper[4754]: I0105 20:33:20.701244 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2099ca9-824d-4e4c-850a-417c6335f7ba" containerName="init" Jan 05 20:33:20 crc kubenswrapper[4754]: E0105 20:33:20.701263 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2099ca9-824d-4e4c-850a-417c6335f7ba" containerName="dnsmasq-dns" Jan 05 20:33:20 crc kubenswrapper[4754]: I0105 20:33:20.701271 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2099ca9-824d-4e4c-850a-417c6335f7ba" containerName="dnsmasq-dns" Jan 05 20:33:20 crc kubenswrapper[4754]: I0105 20:33:20.701572 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2099ca9-824d-4e4c-850a-417c6335f7ba" containerName="dnsmasq-dns" Jan 05 20:33:20 crc kubenswrapper[4754]: I0105 20:33:20.702798 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fmncl" Jan 05 20:33:20 crc kubenswrapper[4754]: I0105 20:33:20.714555 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 05 20:33:20 crc kubenswrapper[4754]: I0105 20:33:20.714674 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xdt6h" Jan 05 20:33:20 crc kubenswrapper[4754]: I0105 20:33:20.714778 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 05 20:33:20 crc kubenswrapper[4754]: I0105 20:33:20.719477 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 05 20:33:20 crc kubenswrapper[4754]: I0105 20:33:20.742794 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fmncl"] Jan 05 20:33:20 crc kubenswrapper[4754]: I0105 20:33:20.808872 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6c8f698c95-f22ns"] Jan 05 20:33:20 crc kubenswrapper[4754]: I0105 20:33:20.809124 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-6c8f698c95-f22ns" podUID="2febfb57-3d29-4415-bcae-1295fbf07b70" containerName="heat-api" containerID="cri-o://e32141db2e6f2b58809030fc2a6c6df284d83e88fe4957205ddaa2f2989569f6" gracePeriod=60 Jan 05 20:33:20 crc kubenswrapper[4754]: I0105 20:33:20.836798 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fffef548-67ef-4afd-a644-9aaad154e735-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fmncl\" (UID: \"fffef548-67ef-4afd-a644-9aaad154e735\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fmncl" Jan 05 20:33:20 crc kubenswrapper[4754]: 
I0105 20:33:20.836919 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fffef548-67ef-4afd-a644-9aaad154e735-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fmncl\" (UID: \"fffef548-67ef-4afd-a644-9aaad154e735\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fmncl" Jan 05 20:33:20 crc kubenswrapper[4754]: I0105 20:33:20.837011 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fffef548-67ef-4afd-a644-9aaad154e735-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fmncl\" (UID: \"fffef548-67ef-4afd-a644-9aaad154e735\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fmncl" Jan 05 20:33:20 crc kubenswrapper[4754]: I0105 20:33:20.837084 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfscj\" (UniqueName: \"kubernetes.io/projected/fffef548-67ef-4afd-a644-9aaad154e735-kube-api-access-hfscj\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fmncl\" (UID: \"fffef548-67ef-4afd-a644-9aaad154e735\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fmncl" Jan 05 20:33:20 crc kubenswrapper[4754]: I0105 20:33:20.939320 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfscj\" (UniqueName: \"kubernetes.io/projected/fffef548-67ef-4afd-a644-9aaad154e735-kube-api-access-hfscj\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fmncl\" (UID: \"fffef548-67ef-4afd-a644-9aaad154e735\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fmncl" Jan 05 20:33:20 crc kubenswrapper[4754]: I0105 20:33:20.939415 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/fffef548-67ef-4afd-a644-9aaad154e735-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fmncl\" (UID: \"fffef548-67ef-4afd-a644-9aaad154e735\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fmncl" Jan 05 20:33:20 crc kubenswrapper[4754]: I0105 20:33:20.939514 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fffef548-67ef-4afd-a644-9aaad154e735-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fmncl\" (UID: \"fffef548-67ef-4afd-a644-9aaad154e735\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fmncl" Jan 05 20:33:20 crc kubenswrapper[4754]: I0105 20:33:20.939589 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fffef548-67ef-4afd-a644-9aaad154e735-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fmncl\" (UID: \"fffef548-67ef-4afd-a644-9aaad154e735\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fmncl" Jan 05 20:33:20 crc kubenswrapper[4754]: I0105 20:33:20.957000 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fffef548-67ef-4afd-a644-9aaad154e735-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fmncl\" (UID: \"fffef548-67ef-4afd-a644-9aaad154e735\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fmncl" Jan 05 20:33:20 crc kubenswrapper[4754]: I0105 20:33:20.962406 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fffef548-67ef-4afd-a644-9aaad154e735-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fmncl\" (UID: \"fffef548-67ef-4afd-a644-9aaad154e735\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fmncl" Jan 05 20:33:20 crc kubenswrapper[4754]: I0105 20:33:20.967623 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fffef548-67ef-4afd-a644-9aaad154e735-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fmncl\" (UID: \"fffef548-67ef-4afd-a644-9aaad154e735\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fmncl" Jan 05 20:33:20 crc kubenswrapper[4754]: I0105 20:33:20.989690 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfscj\" (UniqueName: \"kubernetes.io/projected/fffef548-67ef-4afd-a644-9aaad154e735-kube-api-access-hfscj\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fmncl\" (UID: \"fffef548-67ef-4afd-a644-9aaad154e735\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fmncl" Jan 05 20:33:21 crc kubenswrapper[4754]: I0105 20:33:21.023519 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fmncl" Jan 05 20:33:21 crc kubenswrapper[4754]: I0105 20:33:21.024928 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-59c56bf-mxnq6" Jan 05 20:33:21 crc kubenswrapper[4754]: I0105 20:33:21.110442 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-675955f8c4-92lhz"] Jan 05 20:33:21 crc kubenswrapper[4754]: I0105 20:33:21.110741 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-675955f8c4-92lhz" podUID="f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b" containerName="heat-cfnapi" containerID="cri-o://c921cb29d6ebb86cc0838144f569b5e2360c742f4671c3f11829371dd785c682" gracePeriod=60 Jan 05 20:33:21 crc kubenswrapper[4754]: I0105 20:33:21.771168 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fmncl"] Jan 05 20:33:22 crc kubenswrapper[4754]: I0105 20:33:22.332180 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fmncl" event={"ID":"fffef548-67ef-4afd-a644-9aaad154e735","Type":"ContainerStarted","Data":"53acb91f3585d44f387ce9d728254719653b2f09763513d2c640243d24284b15"} Jan 05 20:33:24 crc kubenswrapper[4754]: I0105 20:33:24.371476 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-675955f8c4-92lhz" podUID="f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.0.225:8000/healthcheck\": dial tcp 10.217.0.225:8000: connect: connection refused" Jan 05 20:33:24 crc kubenswrapper[4754]: I0105 20:33:24.372665 4754 generic.go:334] "Generic (PLEG): container finished" podID="f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b" containerID="c921cb29d6ebb86cc0838144f569b5e2360c742f4671c3f11829371dd785c682" exitCode=0 Jan 05 20:33:24 crc kubenswrapper[4754]: I0105 
20:33:24.372746 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-675955f8c4-92lhz" event={"ID":"f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b","Type":"ContainerDied","Data":"c921cb29d6ebb86cc0838144f569b5e2360c742f4671c3f11829371dd785c682"} Jan 05 20:33:24 crc kubenswrapper[4754]: I0105 20:33:24.375461 4754 generic.go:334] "Generic (PLEG): container finished" podID="2febfb57-3d29-4415-bcae-1295fbf07b70" containerID="e32141db2e6f2b58809030fc2a6c6df284d83e88fe4957205ddaa2f2989569f6" exitCode=0 Jan 05 20:33:24 crc kubenswrapper[4754]: I0105 20:33:24.375500 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6c8f698c95-f22ns" event={"ID":"2febfb57-3d29-4415-bcae-1295fbf07b70","Type":"ContainerDied","Data":"e32141db2e6f2b58809030fc2a6c6df284d83e88fe4957205ddaa2f2989569f6"} Jan 05 20:33:24 crc kubenswrapper[4754]: I0105 20:33:24.592615 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6c8f698c95-f22ns" Jan 05 20:33:24 crc kubenswrapper[4754]: I0105 20:33:24.734166 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2febfb57-3d29-4415-bcae-1295fbf07b70-internal-tls-certs\") pod \"2febfb57-3d29-4415-bcae-1295fbf07b70\" (UID: \"2febfb57-3d29-4415-bcae-1295fbf07b70\") " Jan 05 20:33:24 crc kubenswrapper[4754]: I0105 20:33:24.734510 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2febfb57-3d29-4415-bcae-1295fbf07b70-public-tls-certs\") pod \"2febfb57-3d29-4415-bcae-1295fbf07b70\" (UID: \"2febfb57-3d29-4415-bcae-1295fbf07b70\") " Jan 05 20:33:24 crc kubenswrapper[4754]: I0105 20:33:24.734616 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2febfb57-3d29-4415-bcae-1295fbf07b70-config-data\") pod 
\"2febfb57-3d29-4415-bcae-1295fbf07b70\" (UID: \"2febfb57-3d29-4415-bcae-1295fbf07b70\") " Jan 05 20:33:24 crc kubenswrapper[4754]: I0105 20:33:24.734684 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2febfb57-3d29-4415-bcae-1295fbf07b70-combined-ca-bundle\") pod \"2febfb57-3d29-4415-bcae-1295fbf07b70\" (UID: \"2febfb57-3d29-4415-bcae-1295fbf07b70\") " Jan 05 20:33:24 crc kubenswrapper[4754]: I0105 20:33:24.734897 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2febfb57-3d29-4415-bcae-1295fbf07b70-config-data-custom\") pod \"2febfb57-3d29-4415-bcae-1295fbf07b70\" (UID: \"2febfb57-3d29-4415-bcae-1295fbf07b70\") " Jan 05 20:33:24 crc kubenswrapper[4754]: I0105 20:33:24.734960 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thst4\" (UniqueName: \"kubernetes.io/projected/2febfb57-3d29-4415-bcae-1295fbf07b70-kube-api-access-thst4\") pod \"2febfb57-3d29-4415-bcae-1295fbf07b70\" (UID: \"2febfb57-3d29-4415-bcae-1295fbf07b70\") " Jan 05 20:33:24 crc kubenswrapper[4754]: I0105 20:33:24.748211 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2febfb57-3d29-4415-bcae-1295fbf07b70-kube-api-access-thst4" (OuterVolumeSpecName: "kube-api-access-thst4") pod "2febfb57-3d29-4415-bcae-1295fbf07b70" (UID: "2febfb57-3d29-4415-bcae-1295fbf07b70"). InnerVolumeSpecName "kube-api-access-thst4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:33:24 crc kubenswrapper[4754]: I0105 20:33:24.759999 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2febfb57-3d29-4415-bcae-1295fbf07b70-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2febfb57-3d29-4415-bcae-1295fbf07b70" (UID: "2febfb57-3d29-4415-bcae-1295fbf07b70"). 
InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:33:24 crc kubenswrapper[4754]: I0105 20:33:24.805698 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2febfb57-3d29-4415-bcae-1295fbf07b70-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2febfb57-3d29-4415-bcae-1295fbf07b70" (UID: "2febfb57-3d29-4415-bcae-1295fbf07b70"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:33:24 crc kubenswrapper[4754]: I0105 20:33:24.821439 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-675955f8c4-92lhz" Jan 05 20:33:24 crc kubenswrapper[4754]: I0105 20:33:24.829706 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2febfb57-3d29-4415-bcae-1295fbf07b70-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2febfb57-3d29-4415-bcae-1295fbf07b70" (UID: "2febfb57-3d29-4415-bcae-1295fbf07b70"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:33:24 crc kubenswrapper[4754]: I0105 20:33:24.831878 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2febfb57-3d29-4415-bcae-1295fbf07b70-config-data" (OuterVolumeSpecName: "config-data") pod "2febfb57-3d29-4415-bcae-1295fbf07b70" (UID: "2febfb57-3d29-4415-bcae-1295fbf07b70"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:33:24 crc kubenswrapper[4754]: I0105 20:33:24.841813 4754 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2febfb57-3d29-4415-bcae-1295fbf07b70-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 05 20:33:24 crc kubenswrapper[4754]: I0105 20:33:24.841849 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thst4\" (UniqueName: \"kubernetes.io/projected/2febfb57-3d29-4415-bcae-1295fbf07b70-kube-api-access-thst4\") on node \"crc\" DevicePath \"\"" Jan 05 20:33:24 crc kubenswrapper[4754]: I0105 20:33:24.841862 4754 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2febfb57-3d29-4415-bcae-1295fbf07b70-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 20:33:24 crc kubenswrapper[4754]: I0105 20:33:24.841874 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2febfb57-3d29-4415-bcae-1295fbf07b70-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 20:33:24 crc kubenswrapper[4754]: I0105 20:33:24.841890 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2febfb57-3d29-4415-bcae-1295fbf07b70-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:33:24 crc kubenswrapper[4754]: I0105 20:33:24.884472 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2febfb57-3d29-4415-bcae-1295fbf07b70-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2febfb57-3d29-4415-bcae-1295fbf07b70" (UID: "2febfb57-3d29-4415-bcae-1295fbf07b70"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:33:24 crc kubenswrapper[4754]: I0105 20:33:24.943258 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b-public-tls-certs\") pod \"f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b\" (UID: \"f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b\") " Jan 05 20:33:24 crc kubenswrapper[4754]: I0105 20:33:24.943540 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b-config-data\") pod \"f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b\" (UID: \"f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b\") " Jan 05 20:33:24 crc kubenswrapper[4754]: I0105 20:33:24.943677 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b-internal-tls-certs\") pod \"f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b\" (UID: \"f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b\") " Jan 05 20:33:24 crc kubenswrapper[4754]: I0105 20:33:24.943903 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b-config-data-custom\") pod \"f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b\" (UID: \"f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b\") " Jan 05 20:33:24 crc kubenswrapper[4754]: I0105 20:33:24.944617 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8sdkw\" (UniqueName: \"kubernetes.io/projected/f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b-kube-api-access-8sdkw\") pod \"f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b\" (UID: \"f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b\") " Jan 05 20:33:24 crc kubenswrapper[4754]: I0105 20:33:24.944939 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b-combined-ca-bundle\") pod \"f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b\" (UID: \"f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b\") " Jan 05 20:33:24 crc kubenswrapper[4754]: I0105 20:33:24.946001 4754 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2febfb57-3d29-4415-bcae-1295fbf07b70-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 20:33:24 crc kubenswrapper[4754]: I0105 20:33:24.948690 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b" (UID: "f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:33:24 crc kubenswrapper[4754]: I0105 20:33:24.949239 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b-kube-api-access-8sdkw" (OuterVolumeSpecName: "kube-api-access-8sdkw") pod "f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b" (UID: "f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b"). InnerVolumeSpecName "kube-api-access-8sdkw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:33:24 crc kubenswrapper[4754]: I0105 20:33:24.996844 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b" (UID: "f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:33:25 crc kubenswrapper[4754]: I0105 20:33:25.000883 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b" (UID: "f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:33:25 crc kubenswrapper[4754]: I0105 20:33:25.012888 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b-config-data" (OuterVolumeSpecName: "config-data") pod "f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b" (UID: "f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:33:25 crc kubenswrapper[4754]: I0105 20:33:25.019545 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b" (UID: "f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:33:25 crc kubenswrapper[4754]: I0105 20:33:25.048718 4754 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 05 20:33:25 crc kubenswrapper[4754]: I0105 20:33:25.048769 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8sdkw\" (UniqueName: \"kubernetes.io/projected/f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b-kube-api-access-8sdkw\") on node \"crc\" DevicePath \"\"" Jan 05 20:33:25 crc kubenswrapper[4754]: I0105 20:33:25.048785 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:33:25 crc kubenswrapper[4754]: I0105 20:33:25.048802 4754 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 20:33:25 crc kubenswrapper[4754]: I0105 20:33:25.048818 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 20:33:25 crc kubenswrapper[4754]: I0105 20:33:25.048832 4754 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 20:33:25 crc kubenswrapper[4754]: I0105 20:33:25.392016 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6c8f698c95-f22ns" event={"ID":"2febfb57-3d29-4415-bcae-1295fbf07b70","Type":"ContainerDied","Data":"46db3f5816ddd8d47f16c656a6296c5a30e9683f46f8e14f690e56148d7eed95"} Jan 05 
20:33:25 crc kubenswrapper[4754]: I0105 20:33:25.392320 4754 scope.go:117] "RemoveContainer" containerID="e32141db2e6f2b58809030fc2a6c6df284d83e88fe4957205ddaa2f2989569f6" Jan 05 20:33:25 crc kubenswrapper[4754]: I0105 20:33:25.392315 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6c8f698c95-f22ns" Jan 05 20:33:25 crc kubenswrapper[4754]: I0105 20:33:25.394695 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-675955f8c4-92lhz" event={"ID":"f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b","Type":"ContainerDied","Data":"cd566cba60b758090e7e96a99e2a28334bfd205da7a0c0a740a8606e0d36f58b"} Jan 05 20:33:25 crc kubenswrapper[4754]: I0105 20:33:25.394807 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-675955f8c4-92lhz" Jan 05 20:33:25 crc kubenswrapper[4754]: I0105 20:33:25.445074 4754 scope.go:117] "RemoveContainer" containerID="c921cb29d6ebb86cc0838144f569b5e2360c742f4671c3f11829371dd785c682" Jan 05 20:33:25 crc kubenswrapper[4754]: I0105 20:33:25.449319 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-675955f8c4-92lhz"] Jan 05 20:33:25 crc kubenswrapper[4754]: I0105 20:33:25.464502 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-675955f8c4-92lhz"] Jan 05 20:33:25 crc kubenswrapper[4754]: I0105 20:33:25.478219 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6c8f698c95-f22ns"] Jan 05 20:33:25 crc kubenswrapper[4754]: I0105 20:33:25.496999 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-6c8f698c95-f22ns"] Jan 05 20:33:25 crc kubenswrapper[4754]: I0105 20:33:25.589076 4754 scope.go:117] "RemoveContainer" containerID="1ca93b3e456962380f07853ce3cbf0c6bfbaabb69bd3827ae3e68631c1bd1572" Jan 05 20:33:25 crc kubenswrapper[4754]: E0105 20:33:25.589367 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:33:25 crc kubenswrapper[4754]: I0105 20:33:25.600934 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2febfb57-3d29-4415-bcae-1295fbf07b70" path="/var/lib/kubelet/pods/2febfb57-3d29-4415-bcae-1295fbf07b70/volumes" Jan 05 20:33:25 crc kubenswrapper[4754]: I0105 20:33:25.602377 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b" path="/var/lib/kubelet/pods/f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b/volumes" Jan 05 20:33:26 crc kubenswrapper[4754]: I0105 20:33:26.417598 4754 generic.go:334] "Generic (PLEG): container finished" podID="ad41aacf-9d0a-42e2-b3cf-de51001540e2" containerID="aa00312dd587920111982a56c4c2fed1a124682ab598554398fca5def51eb1d0" exitCode=0 Jan 05 20:33:26 crc kubenswrapper[4754]: I0105 20:33:26.417753 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ad41aacf-9d0a-42e2-b3cf-de51001540e2","Type":"ContainerDied","Data":"aa00312dd587920111982a56c4c2fed1a124682ab598554398fca5def51eb1d0"} Jan 05 20:33:26 crc kubenswrapper[4754]: I0105 20:33:26.422767 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"cca29317-4bf0-40f2-aeff-1cbb68fb9cd2","Type":"ContainerDied","Data":"6480ffb109e42b90275fd22e68076a0cbb4d9d13dd0c58788fe001ef11c04e99"} Jan 05 20:33:26 crc kubenswrapper[4754]: I0105 20:33:26.422709 4754 generic.go:334] "Generic (PLEG): container finished" podID="cca29317-4bf0-40f2-aeff-1cbb68fb9cd2" containerID="6480ffb109e42b90275fd22e68076a0cbb4d9d13dd0c58788fe001ef11c04e99" exitCode=0 Jan 05 20:33:29 
crc kubenswrapper[4754]: I0105 20:33:29.368827 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-7c9b6fddd9-fmxb7" Jan 05 20:33:29 crc kubenswrapper[4754]: I0105 20:33:29.424009 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-78955f8dfd-95rvt"] Jan 05 20:33:29 crc kubenswrapper[4754]: I0105 20:33:29.424238 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-78955f8dfd-95rvt" podUID="ad58f356-3e52-422b-89ba-0db520cce910" containerName="heat-engine" containerID="cri-o://21001b28c026f444607e4f59e0e7c08a834424ca2203745ef337c1cf4bf48d56" gracePeriod=60 Jan 05 20:33:32 crc kubenswrapper[4754]: I0105 20:33:32.523811 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fmncl" event={"ID":"fffef548-67ef-4afd-a644-9aaad154e735","Type":"ContainerStarted","Data":"a10770f1996008538f87e8ee2f8a0397d7aa05d4812404e774223834830a9e8b"} Jan 05 20:33:32 crc kubenswrapper[4754]: I0105 20:33:32.528381 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ad41aacf-9d0a-42e2-b3cf-de51001540e2","Type":"ContainerStarted","Data":"1637abbfb6df5d4fabc74a95c772941db96295fe705c40a261ee52c7d69435e6"} Jan 05 20:33:32 crc kubenswrapper[4754]: I0105 20:33:32.528696 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 05 20:33:32 crc kubenswrapper[4754]: I0105 20:33:32.531669 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"cca29317-4bf0-40f2-aeff-1cbb68fb9cd2","Type":"ContainerStarted","Data":"65adacc52cf562177acb8a4da79cb78695e6edd19d0b3f1a2f6588bb46684a86"} Jan 05 20:33:32 crc kubenswrapper[4754]: I0105 20:33:32.533166 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Jan 05 20:33:32 crc 
kubenswrapper[4754]: I0105 20:33:32.541489 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fmncl" podStartSLOduration=2.513541381 podStartE2EDuration="12.541449608s" podCreationTimestamp="2026-01-05 20:33:20 +0000 UTC" firstStartedPulling="2026-01-05 20:33:21.765932661 +0000 UTC m=+1688.475116535" lastFinishedPulling="2026-01-05 20:33:31.793840888 +0000 UTC m=+1698.503024762" observedRunningTime="2026-01-05 20:33:32.540466023 +0000 UTC m=+1699.249649927" watchObservedRunningTime="2026-01-05 20:33:32.541449608 +0000 UTC m=+1699.250633522" Jan 05 20:33:32 crc kubenswrapper[4754]: I0105 20:33:32.605172 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=50.605144301 podStartE2EDuration="50.605144301s" podCreationTimestamp="2026-01-05 20:32:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:33:32.580626357 +0000 UTC m=+1699.289810231" watchObservedRunningTime="2026-01-05 20:33:32.605144301 +0000 UTC m=+1699.314328205" Jan 05 20:33:33 crc kubenswrapper[4754]: I0105 20:33:33.041372 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=51.041351875 podStartE2EDuration="51.041351875s" podCreationTimestamp="2026-01-05 20:32:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:33:32.62072584 +0000 UTC m=+1699.329909714" watchObservedRunningTime="2026-01-05 20:33:33.041351875 +0000 UTC m=+1699.750535759" Jan 05 20:33:33 crc kubenswrapper[4754]: I0105 20:33:33.053379 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-z2gzv"] Jan 05 20:33:33 crc kubenswrapper[4754]: I0105 20:33:33.066372 4754 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-z2gzv"] Jan 05 20:33:33 crc kubenswrapper[4754]: I0105 20:33:33.163448 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-48pbv"] Jan 05 20:33:33 crc kubenswrapper[4754]: E0105 20:33:33.164362 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b" containerName="heat-cfnapi" Jan 05 20:33:33 crc kubenswrapper[4754]: I0105 20:33:33.164400 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b" containerName="heat-cfnapi" Jan 05 20:33:33 crc kubenswrapper[4754]: E0105 20:33:33.164440 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2febfb57-3d29-4415-bcae-1295fbf07b70" containerName="heat-api" Jan 05 20:33:33 crc kubenswrapper[4754]: I0105 20:33:33.164453 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="2febfb57-3d29-4415-bcae-1295fbf07b70" containerName="heat-api" Jan 05 20:33:33 crc kubenswrapper[4754]: I0105 20:33:33.165003 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="f344e5f3-1ddc-4bad-b3b7-f5d4792dee8b" containerName="heat-cfnapi" Jan 05 20:33:33 crc kubenswrapper[4754]: I0105 20:33:33.165045 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="2febfb57-3d29-4415-bcae-1295fbf07b70" containerName="heat-api" Jan 05 20:33:33 crc kubenswrapper[4754]: I0105 20:33:33.166504 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-48pbv" Jan 05 20:33:33 crc kubenswrapper[4754]: I0105 20:33:33.168914 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 05 20:33:33 crc kubenswrapper[4754]: I0105 20:33:33.180326 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-48pbv"] Jan 05 20:33:33 crc kubenswrapper[4754]: I0105 20:33:33.314787 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b32de6d-d71d-4c68-9078-32e6fbdb6f37-scripts\") pod \"aodh-db-sync-48pbv\" (UID: \"2b32de6d-d71d-4c68-9078-32e6fbdb6f37\") " pod="openstack/aodh-db-sync-48pbv" Jan 05 20:33:33 crc kubenswrapper[4754]: I0105 20:33:33.315207 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b32de6d-d71d-4c68-9078-32e6fbdb6f37-config-data\") pod \"aodh-db-sync-48pbv\" (UID: \"2b32de6d-d71d-4c68-9078-32e6fbdb6f37\") " pod="openstack/aodh-db-sync-48pbv" Jan 05 20:33:33 crc kubenswrapper[4754]: I0105 20:33:33.315483 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh6qn\" (UniqueName: \"kubernetes.io/projected/2b32de6d-d71d-4c68-9078-32e6fbdb6f37-kube-api-access-wh6qn\") pod \"aodh-db-sync-48pbv\" (UID: \"2b32de6d-d71d-4c68-9078-32e6fbdb6f37\") " pod="openstack/aodh-db-sync-48pbv" Jan 05 20:33:33 crc kubenswrapper[4754]: I0105 20:33:33.316181 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b32de6d-d71d-4c68-9078-32e6fbdb6f37-combined-ca-bundle\") pod \"aodh-db-sync-48pbv\" (UID: \"2b32de6d-d71d-4c68-9078-32e6fbdb6f37\") " pod="openstack/aodh-db-sync-48pbv" Jan 05 20:33:33 crc kubenswrapper[4754]: I0105 20:33:33.418090 4754 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh6qn\" (UniqueName: \"kubernetes.io/projected/2b32de6d-d71d-4c68-9078-32e6fbdb6f37-kube-api-access-wh6qn\") pod \"aodh-db-sync-48pbv\" (UID: \"2b32de6d-d71d-4c68-9078-32e6fbdb6f37\") " pod="openstack/aodh-db-sync-48pbv" Jan 05 20:33:33 crc kubenswrapper[4754]: I0105 20:33:33.418428 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b32de6d-d71d-4c68-9078-32e6fbdb6f37-combined-ca-bundle\") pod \"aodh-db-sync-48pbv\" (UID: \"2b32de6d-d71d-4c68-9078-32e6fbdb6f37\") " pod="openstack/aodh-db-sync-48pbv" Jan 05 20:33:33 crc kubenswrapper[4754]: I0105 20:33:33.418784 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b32de6d-d71d-4c68-9078-32e6fbdb6f37-scripts\") pod \"aodh-db-sync-48pbv\" (UID: \"2b32de6d-d71d-4c68-9078-32e6fbdb6f37\") " pod="openstack/aodh-db-sync-48pbv" Jan 05 20:33:33 crc kubenswrapper[4754]: I0105 20:33:33.419686 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b32de6d-d71d-4c68-9078-32e6fbdb6f37-config-data\") pod \"aodh-db-sync-48pbv\" (UID: \"2b32de6d-d71d-4c68-9078-32e6fbdb6f37\") " pod="openstack/aodh-db-sync-48pbv" Jan 05 20:33:33 crc kubenswrapper[4754]: I0105 20:33:33.423818 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b32de6d-d71d-4c68-9078-32e6fbdb6f37-config-data\") pod \"aodh-db-sync-48pbv\" (UID: \"2b32de6d-d71d-4c68-9078-32e6fbdb6f37\") " pod="openstack/aodh-db-sync-48pbv" Jan 05 20:33:33 crc kubenswrapper[4754]: I0105 20:33:33.424894 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2b32de6d-d71d-4c68-9078-32e6fbdb6f37-combined-ca-bundle\") pod \"aodh-db-sync-48pbv\" (UID: \"2b32de6d-d71d-4c68-9078-32e6fbdb6f37\") " pod="openstack/aodh-db-sync-48pbv" Jan 05 20:33:33 crc kubenswrapper[4754]: I0105 20:33:33.425639 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b32de6d-d71d-4c68-9078-32e6fbdb6f37-scripts\") pod \"aodh-db-sync-48pbv\" (UID: \"2b32de6d-d71d-4c68-9078-32e6fbdb6f37\") " pod="openstack/aodh-db-sync-48pbv" Jan 05 20:33:33 crc kubenswrapper[4754]: I0105 20:33:33.443242 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh6qn\" (UniqueName: \"kubernetes.io/projected/2b32de6d-d71d-4c68-9078-32e6fbdb6f37-kube-api-access-wh6qn\") pod \"aodh-db-sync-48pbv\" (UID: \"2b32de6d-d71d-4c68-9078-32e6fbdb6f37\") " pod="openstack/aodh-db-sync-48pbv" Jan 05 20:33:33 crc kubenswrapper[4754]: I0105 20:33:33.497356 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-48pbv" Jan 05 20:33:33 crc kubenswrapper[4754]: I0105 20:33:33.601537 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25d741bd-ab72-4eff-9d37-8ecf50a50698" path="/var/lib/kubelet/pods/25d741bd-ab72-4eff-9d37-8ecf50a50698/volumes" Jan 05 20:33:34 crc kubenswrapper[4754]: I0105 20:33:34.050492 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-48pbv"] Jan 05 20:33:34 crc kubenswrapper[4754]: I0105 20:33:34.280138 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-6c8f698c95-f22ns" podUID="2febfb57-3d29-4415-bcae-1295fbf07b70" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.224:8004/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 20:33:34 crc kubenswrapper[4754]: I0105 20:33:34.554087 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-48pbv" event={"ID":"2b32de6d-d71d-4c68-9078-32e6fbdb6f37","Type":"ContainerStarted","Data":"d76aabf08ba246578cbb56739bb4814a91cbf634248516d5b3288d59fbd0d191"} Jan 05 20:33:37 crc kubenswrapper[4754]: E0105 20:33:37.190453 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="21001b28c026f444607e4f59e0e7c08a834424ca2203745ef337c1cf4bf48d56" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 05 20:33:37 crc kubenswrapper[4754]: E0105 20:33:37.193388 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="21001b28c026f444607e4f59e0e7c08a834424ca2203745ef337c1cf4bf48d56" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 05 20:33:37 crc 
kubenswrapper[4754]: E0105 20:33:37.196155 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="21001b28c026f444607e4f59e0e7c08a834424ca2203745ef337c1cf4bf48d56" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Jan 05 20:33:37 crc kubenswrapper[4754]: E0105 20:33:37.196255 4754 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-78955f8dfd-95rvt" podUID="ad58f356-3e52-422b-89ba-0db520cce910" containerName="heat-engine"
Jan 05 20:33:38 crc kubenswrapper[4754]: I0105 20:33:38.588767 4754 scope.go:117] "RemoveContainer" containerID="1ca93b3e456962380f07853ce3cbf0c6bfbaabb69bd3827ae3e68631c1bd1572"
Jan 05 20:33:38 crc kubenswrapper[4754]: E0105 20:33:38.589428 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280"
Jan 05 20:33:41 crc kubenswrapper[4754]: I0105 20:33:41.654911 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-48pbv" event={"ID":"2b32de6d-d71d-4c68-9078-32e6fbdb6f37","Type":"ContainerStarted","Data":"65b730c4aa36d59540ec344235231163e28f5f2bacefed511ced75dd9f9d3131"}
Jan 05 20:33:41 crc kubenswrapper[4754]: I0105 20:33:41.679482 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-48pbv" podStartSLOduration=1.724218083 podStartE2EDuration="8.679460399s" podCreationTimestamp="2026-01-05 20:33:33 +0000 UTC" firstStartedPulling="2026-01-05 20:33:34.05533829 +0000 UTC m=+1700.764522164" lastFinishedPulling="2026-01-05 20:33:41.010580606 +0000 UTC m=+1707.719764480" observedRunningTime="2026-01-05 20:33:41.671977663 +0000 UTC m=+1708.381161537" watchObservedRunningTime="2026-01-05 20:33:41.679460399 +0000 UTC m=+1708.388644273"
Jan 05 20:33:42 crc kubenswrapper[4754]: I0105 20:33:42.470238 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="cca29317-4bf0-40f2-aeff-1cbb68fb9cd2" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.14:5671: connect: connection refused"
Jan 05 20:33:42 crc kubenswrapper[4754]: I0105 20:33:42.580293 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="ad41aacf-9d0a-42e2-b3cf-de51001540e2" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.15:5671: connect: connection refused"
Jan 05 20:33:43 crc kubenswrapper[4754]: E0105 20:33:43.045172 4754 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad58f356_3e52_422b_89ba_0db520cce910.slice/crio-21001b28c026f444607e4f59e0e7c08a834424ca2203745ef337c1cf4bf48d56.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad58f356_3e52_422b_89ba_0db520cce910.slice/crio-conmon-21001b28c026f444607e4f59e0e7c08a834424ca2203745ef337c1cf4bf48d56.scope\": RecentStats: unable to find data in memory cache]"
Jan 05 20:33:43 crc kubenswrapper[4754]: I0105 20:33:43.347552 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-78955f8dfd-95rvt"
Jan 05 20:33:43 crc kubenswrapper[4754]: I0105 20:33:43.481234 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ad58f356-3e52-422b-89ba-0db520cce910-config-data-custom\") pod \"ad58f356-3e52-422b-89ba-0db520cce910\" (UID: \"ad58f356-3e52-422b-89ba-0db520cce910\") "
Jan 05 20:33:43 crc kubenswrapper[4754]: I0105 20:33:43.481331 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsfhz\" (UniqueName: \"kubernetes.io/projected/ad58f356-3e52-422b-89ba-0db520cce910-kube-api-access-vsfhz\") pod \"ad58f356-3e52-422b-89ba-0db520cce910\" (UID: \"ad58f356-3e52-422b-89ba-0db520cce910\") "
Jan 05 20:33:43 crc kubenswrapper[4754]: I0105 20:33:43.481411 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad58f356-3e52-422b-89ba-0db520cce910-combined-ca-bundle\") pod \"ad58f356-3e52-422b-89ba-0db520cce910\" (UID: \"ad58f356-3e52-422b-89ba-0db520cce910\") "
Jan 05 20:33:43 crc kubenswrapper[4754]: I0105 20:33:43.481555 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad58f356-3e52-422b-89ba-0db520cce910-config-data\") pod \"ad58f356-3e52-422b-89ba-0db520cce910\" (UID: \"ad58f356-3e52-422b-89ba-0db520cce910\") "
Jan 05 20:33:43 crc kubenswrapper[4754]: I0105 20:33:43.487385 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad58f356-3e52-422b-89ba-0db520cce910-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ad58f356-3e52-422b-89ba-0db520cce910" (UID: "ad58f356-3e52-422b-89ba-0db520cce910"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 20:33:43 crc kubenswrapper[4754]: I0105 20:33:43.491763 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad58f356-3e52-422b-89ba-0db520cce910-kube-api-access-vsfhz" (OuterVolumeSpecName: "kube-api-access-vsfhz") pod "ad58f356-3e52-422b-89ba-0db520cce910" (UID: "ad58f356-3e52-422b-89ba-0db520cce910"). InnerVolumeSpecName "kube-api-access-vsfhz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 20:33:43 crc kubenswrapper[4754]: I0105 20:33:43.520710 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad58f356-3e52-422b-89ba-0db520cce910-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ad58f356-3e52-422b-89ba-0db520cce910" (UID: "ad58f356-3e52-422b-89ba-0db520cce910"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 20:33:43 crc kubenswrapper[4754]: I0105 20:33:43.556531 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad58f356-3e52-422b-89ba-0db520cce910-config-data" (OuterVolumeSpecName: "config-data") pod "ad58f356-3e52-422b-89ba-0db520cce910" (UID: "ad58f356-3e52-422b-89ba-0db520cce910"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 20:33:43 crc kubenswrapper[4754]: I0105 20:33:43.584123 4754 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ad58f356-3e52-422b-89ba-0db520cce910-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 05 20:33:43 crc kubenswrapper[4754]: I0105 20:33:43.584182 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsfhz\" (UniqueName: \"kubernetes.io/projected/ad58f356-3e52-422b-89ba-0db520cce910-kube-api-access-vsfhz\") on node \"crc\" DevicePath \"\""
Jan 05 20:33:43 crc kubenswrapper[4754]: I0105 20:33:43.584195 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad58f356-3e52-422b-89ba-0db520cce910-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 05 20:33:43 crc kubenswrapper[4754]: I0105 20:33:43.584204 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad58f356-3e52-422b-89ba-0db520cce910-config-data\") on node \"crc\" DevicePath \"\""
Jan 05 20:33:43 crc kubenswrapper[4754]: I0105 20:33:43.676132 4754 generic.go:334] "Generic (PLEG): container finished" podID="ad58f356-3e52-422b-89ba-0db520cce910" containerID="21001b28c026f444607e4f59e0e7c08a834424ca2203745ef337c1cf4bf48d56" exitCode=0
Jan 05 20:33:43 crc kubenswrapper[4754]: I0105 20:33:43.676175 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-78955f8dfd-95rvt" event={"ID":"ad58f356-3e52-422b-89ba-0db520cce910","Type":"ContainerDied","Data":"21001b28c026f444607e4f59e0e7c08a834424ca2203745ef337c1cf4bf48d56"}
Jan 05 20:33:43 crc kubenswrapper[4754]: I0105 20:33:43.676202 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-78955f8dfd-95rvt" event={"ID":"ad58f356-3e52-422b-89ba-0db520cce910","Type":"ContainerDied","Data":"36ac7aa52ad0383e4573f0ab9a81a6392ba5001700a411faa71dd7f5b085a2fe"}
Jan 05 20:33:43 crc kubenswrapper[4754]: I0105 20:33:43.676218 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-78955f8dfd-95rvt"
Jan 05 20:33:43 crc kubenswrapper[4754]: I0105 20:33:43.676224 4754 scope.go:117] "RemoveContainer" containerID="21001b28c026f444607e4f59e0e7c08a834424ca2203745ef337c1cf4bf48d56"
Jan 05 20:33:43 crc kubenswrapper[4754]: I0105 20:33:43.707754 4754 scope.go:117] "RemoveContainer" containerID="21001b28c026f444607e4f59e0e7c08a834424ca2203745ef337c1cf4bf48d56"
Jan 05 20:33:43 crc kubenswrapper[4754]: E0105 20:33:43.708462 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21001b28c026f444607e4f59e0e7c08a834424ca2203745ef337c1cf4bf48d56\": container with ID starting with 21001b28c026f444607e4f59e0e7c08a834424ca2203745ef337c1cf4bf48d56 not found: ID does not exist" containerID="21001b28c026f444607e4f59e0e7c08a834424ca2203745ef337c1cf4bf48d56"
Jan 05 20:33:43 crc kubenswrapper[4754]: I0105 20:33:43.708495 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21001b28c026f444607e4f59e0e7c08a834424ca2203745ef337c1cf4bf48d56"} err="failed to get container status \"21001b28c026f444607e4f59e0e7c08a834424ca2203745ef337c1cf4bf48d56\": rpc error: code = NotFound desc = could not find container \"21001b28c026f444607e4f59e0e7c08a834424ca2203745ef337c1cf4bf48d56\": container with ID starting with 21001b28c026f444607e4f59e0e7c08a834424ca2203745ef337c1cf4bf48d56 not found: ID does not exist"
Jan 05 20:33:43 crc kubenswrapper[4754]: I0105 20:33:43.717145 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-78955f8dfd-95rvt"]
Jan 05 20:33:43 crc kubenswrapper[4754]: I0105 20:33:43.734801 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-78955f8dfd-95rvt"]
Jan 05 20:33:44 crc kubenswrapper[4754]: I0105 20:33:44.690029 4754 generic.go:334] "Generic (PLEG): container finished" podID="2b32de6d-d71d-4c68-9078-32e6fbdb6f37" containerID="65b730c4aa36d59540ec344235231163e28f5f2bacefed511ced75dd9f9d3131" exitCode=0
Jan 05 20:33:44 crc kubenswrapper[4754]: I0105 20:33:44.690143 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-48pbv" event={"ID":"2b32de6d-d71d-4c68-9078-32e6fbdb6f37","Type":"ContainerDied","Data":"65b730c4aa36d59540ec344235231163e28f5f2bacefed511ced75dd9f9d3131"}
Jan 05 20:33:45 crc kubenswrapper[4754]: I0105 20:33:45.663756 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad58f356-3e52-422b-89ba-0db520cce910" path="/var/lib/kubelet/pods/ad58f356-3e52-422b-89ba-0db520cce910/volumes"
Jan 05 20:33:45 crc kubenswrapper[4754]: I0105 20:33:45.708427 4754 generic.go:334] "Generic (PLEG): container finished" podID="fffef548-67ef-4afd-a644-9aaad154e735" containerID="a10770f1996008538f87e8ee2f8a0397d7aa05d4812404e774223834830a9e8b" exitCode=0
Jan 05 20:33:45 crc kubenswrapper[4754]: I0105 20:33:45.708678 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fmncl" event={"ID":"fffef548-67ef-4afd-a644-9aaad154e735","Type":"ContainerDied","Data":"a10770f1996008538f87e8ee2f8a0397d7aa05d4812404e774223834830a9e8b"}
Jan 05 20:33:46 crc kubenswrapper[4754]: I0105 20:33:46.188202 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-48pbv"
Jan 05 20:33:46 crc kubenswrapper[4754]: I0105 20:33:46.269411 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b32de6d-d71d-4c68-9078-32e6fbdb6f37-config-data\") pod \"2b32de6d-d71d-4c68-9078-32e6fbdb6f37\" (UID: \"2b32de6d-d71d-4c68-9078-32e6fbdb6f37\") "
Jan 05 20:33:46 crc kubenswrapper[4754]: I0105 20:33:46.269470 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b32de6d-d71d-4c68-9078-32e6fbdb6f37-combined-ca-bundle\") pod \"2b32de6d-d71d-4c68-9078-32e6fbdb6f37\" (UID: \"2b32de6d-d71d-4c68-9078-32e6fbdb6f37\") "
Jan 05 20:33:46 crc kubenswrapper[4754]: I0105 20:33:46.269649 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wh6qn\" (UniqueName: \"kubernetes.io/projected/2b32de6d-d71d-4c68-9078-32e6fbdb6f37-kube-api-access-wh6qn\") pod \"2b32de6d-d71d-4c68-9078-32e6fbdb6f37\" (UID: \"2b32de6d-d71d-4c68-9078-32e6fbdb6f37\") "
Jan 05 20:33:46 crc kubenswrapper[4754]: I0105 20:33:46.269712 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b32de6d-d71d-4c68-9078-32e6fbdb6f37-scripts\") pod \"2b32de6d-d71d-4c68-9078-32e6fbdb6f37\" (UID: \"2b32de6d-d71d-4c68-9078-32e6fbdb6f37\") "
Jan 05 20:33:46 crc kubenswrapper[4754]: I0105 20:33:46.275766 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b32de6d-d71d-4c68-9078-32e6fbdb6f37-kube-api-access-wh6qn" (OuterVolumeSpecName: "kube-api-access-wh6qn") pod "2b32de6d-d71d-4c68-9078-32e6fbdb6f37" (UID: "2b32de6d-d71d-4c68-9078-32e6fbdb6f37"). InnerVolumeSpecName "kube-api-access-wh6qn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 20:33:46 crc kubenswrapper[4754]: I0105 20:33:46.290867 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b32de6d-d71d-4c68-9078-32e6fbdb6f37-scripts" (OuterVolumeSpecName: "scripts") pod "2b32de6d-d71d-4c68-9078-32e6fbdb6f37" (UID: "2b32de6d-d71d-4c68-9078-32e6fbdb6f37"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 20:33:46 crc kubenswrapper[4754]: I0105 20:33:46.310474 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b32de6d-d71d-4c68-9078-32e6fbdb6f37-config-data" (OuterVolumeSpecName: "config-data") pod "2b32de6d-d71d-4c68-9078-32e6fbdb6f37" (UID: "2b32de6d-d71d-4c68-9078-32e6fbdb6f37"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 20:33:46 crc kubenswrapper[4754]: I0105 20:33:46.315169 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b32de6d-d71d-4c68-9078-32e6fbdb6f37-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b32de6d-d71d-4c68-9078-32e6fbdb6f37" (UID: "2b32de6d-d71d-4c68-9078-32e6fbdb6f37"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 20:33:46 crc kubenswrapper[4754]: I0105 20:33:46.373015 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b32de6d-d71d-4c68-9078-32e6fbdb6f37-config-data\") on node \"crc\" DevicePath \"\""
Jan 05 20:33:46 crc kubenswrapper[4754]: I0105 20:33:46.373047 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b32de6d-d71d-4c68-9078-32e6fbdb6f37-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 05 20:33:46 crc kubenswrapper[4754]: I0105 20:33:46.373059 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wh6qn\" (UniqueName: \"kubernetes.io/projected/2b32de6d-d71d-4c68-9078-32e6fbdb6f37-kube-api-access-wh6qn\") on node \"crc\" DevicePath \"\""
Jan 05 20:33:46 crc kubenswrapper[4754]: I0105 20:33:46.373067 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b32de6d-d71d-4c68-9078-32e6fbdb6f37-scripts\") on node \"crc\" DevicePath \"\""
Jan 05 20:33:46 crc kubenswrapper[4754]: I0105 20:33:46.723691 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-48pbv" event={"ID":"2b32de6d-d71d-4c68-9078-32e6fbdb6f37","Type":"ContainerDied","Data":"d76aabf08ba246578cbb56739bb4814a91cbf634248516d5b3288d59fbd0d191"}
Jan 05 20:33:46 crc kubenswrapper[4754]: I0105 20:33:46.723747 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d76aabf08ba246578cbb56739bb4814a91cbf634248516d5b3288d59fbd0d191"
Jan 05 20:33:46 crc kubenswrapper[4754]: I0105 20:33:46.723822 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-48pbv"
Jan 05 20:33:47 crc kubenswrapper[4754]: I0105 20:33:47.308185 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fmncl"
Jan 05 20:33:47 crc kubenswrapper[4754]: I0105 20:33:47.402367 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fffef548-67ef-4afd-a644-9aaad154e735-ssh-key\") pod \"fffef548-67ef-4afd-a644-9aaad154e735\" (UID: \"fffef548-67ef-4afd-a644-9aaad154e735\") "
Jan 05 20:33:47 crc kubenswrapper[4754]: I0105 20:33:47.402568 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fffef548-67ef-4afd-a644-9aaad154e735-inventory\") pod \"fffef548-67ef-4afd-a644-9aaad154e735\" (UID: \"fffef548-67ef-4afd-a644-9aaad154e735\") "
Jan 05 20:33:47 crc kubenswrapper[4754]: I0105 20:33:47.402609 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfscj\" (UniqueName: \"kubernetes.io/projected/fffef548-67ef-4afd-a644-9aaad154e735-kube-api-access-hfscj\") pod \"fffef548-67ef-4afd-a644-9aaad154e735\" (UID: \"fffef548-67ef-4afd-a644-9aaad154e735\") "
Jan 05 20:33:47 crc kubenswrapper[4754]: I0105 20:33:47.402826 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fffef548-67ef-4afd-a644-9aaad154e735-repo-setup-combined-ca-bundle\") pod \"fffef548-67ef-4afd-a644-9aaad154e735\" (UID: \"fffef548-67ef-4afd-a644-9aaad154e735\") "
Jan 05 20:33:47 crc kubenswrapper[4754]: I0105 20:33:47.407968 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fffef548-67ef-4afd-a644-9aaad154e735-kube-api-access-hfscj" (OuterVolumeSpecName: "kube-api-access-hfscj") pod "fffef548-67ef-4afd-a644-9aaad154e735" (UID: "fffef548-67ef-4afd-a644-9aaad154e735"). InnerVolumeSpecName "kube-api-access-hfscj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 20:33:47 crc kubenswrapper[4754]: I0105 20:33:47.411008 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fffef548-67ef-4afd-a644-9aaad154e735-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "fffef548-67ef-4afd-a644-9aaad154e735" (UID: "fffef548-67ef-4afd-a644-9aaad154e735"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 20:33:47 crc kubenswrapper[4754]: I0105 20:33:47.438172 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fffef548-67ef-4afd-a644-9aaad154e735-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "fffef548-67ef-4afd-a644-9aaad154e735" (UID: "fffef548-67ef-4afd-a644-9aaad154e735"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 20:33:47 crc kubenswrapper[4754]: I0105 20:33:47.446434 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fffef548-67ef-4afd-a644-9aaad154e735-inventory" (OuterVolumeSpecName: "inventory") pod "fffef548-67ef-4afd-a644-9aaad154e735" (UID: "fffef548-67ef-4afd-a644-9aaad154e735"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 20:33:47 crc kubenswrapper[4754]: I0105 20:33:47.506150 4754 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fffef548-67ef-4afd-a644-9aaad154e735-ssh-key\") on node \"crc\" DevicePath \"\""
Jan 05 20:33:47 crc kubenswrapper[4754]: I0105 20:33:47.506183 4754 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fffef548-67ef-4afd-a644-9aaad154e735-inventory\") on node \"crc\" DevicePath \"\""
Jan 05 20:33:47 crc kubenswrapper[4754]: I0105 20:33:47.506192 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfscj\" (UniqueName: \"kubernetes.io/projected/fffef548-67ef-4afd-a644-9aaad154e735-kube-api-access-hfscj\") on node \"crc\" DevicePath \"\""
Jan 05 20:33:47 crc kubenswrapper[4754]: I0105 20:33:47.506204 4754 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fffef548-67ef-4afd-a644-9aaad154e735-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 05 20:33:47 crc kubenswrapper[4754]: I0105 20:33:47.740770 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fmncl" event={"ID":"fffef548-67ef-4afd-a644-9aaad154e735","Type":"ContainerDied","Data":"53acb91f3585d44f387ce9d728254719653b2f09763513d2c640243d24284b15"}
Jan 05 20:33:47 crc kubenswrapper[4754]: I0105 20:33:47.740815 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53acb91f3585d44f387ce9d728254719653b2f09763513d2c640243d24284b15"
Jan 05 20:33:47 crc kubenswrapper[4754]: I0105 20:33:47.740891 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fmncl"
Jan 05 20:33:47 crc kubenswrapper[4754]: I0105 20:33:47.846905 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-xq9xn"]
Jan 05 20:33:47 crc kubenswrapper[4754]: E0105 20:33:47.847724 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b32de6d-d71d-4c68-9078-32e6fbdb6f37" containerName="aodh-db-sync"
Jan 05 20:33:47 crc kubenswrapper[4754]: I0105 20:33:47.847754 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b32de6d-d71d-4c68-9078-32e6fbdb6f37" containerName="aodh-db-sync"
Jan 05 20:33:47 crc kubenswrapper[4754]: E0105 20:33:47.847773 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad58f356-3e52-422b-89ba-0db520cce910" containerName="heat-engine"
Jan 05 20:33:47 crc kubenswrapper[4754]: I0105 20:33:47.847785 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad58f356-3e52-422b-89ba-0db520cce910" containerName="heat-engine"
Jan 05 20:33:47 crc kubenswrapper[4754]: E0105 20:33:47.847809 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fffef548-67ef-4afd-a644-9aaad154e735" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 05 20:33:47 crc kubenswrapper[4754]: I0105 20:33:47.847824 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="fffef548-67ef-4afd-a644-9aaad154e735" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 05 20:33:47 crc kubenswrapper[4754]: I0105 20:33:47.848226 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad58f356-3e52-422b-89ba-0db520cce910" containerName="heat-engine"
Jan 05 20:33:47 crc kubenswrapper[4754]: I0105 20:33:47.848275 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="fffef548-67ef-4afd-a644-9aaad154e735" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 05 20:33:47 crc kubenswrapper[4754]: I0105 20:33:47.848329 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b32de6d-d71d-4c68-9078-32e6fbdb6f37" containerName="aodh-db-sync"
Jan 05 20:33:47 crc kubenswrapper[4754]: I0105 20:33:47.849662 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xq9xn"
Jan 05 20:33:47 crc kubenswrapper[4754]: I0105 20:33:47.856521 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 05 20:33:47 crc kubenswrapper[4754]: I0105 20:33:47.856959 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 05 20:33:47 crc kubenswrapper[4754]: I0105 20:33:47.857222 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xdt6h"
Jan 05 20:33:47 crc kubenswrapper[4754]: I0105 20:33:47.857818 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 05 20:33:47 crc kubenswrapper[4754]: I0105 20:33:47.879402 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-xq9xn"]
Jan 05 20:33:47 crc kubenswrapper[4754]: I0105 20:33:47.916339 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f49e079e-8128-4d5c-843d-54c6d12df620-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xq9xn\" (UID: \"f49e079e-8128-4d5c-843d-54c6d12df620\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xq9xn"
Jan 05 20:33:47 crc kubenswrapper[4754]: I0105 20:33:47.916459 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f49e079e-8128-4d5c-843d-54c6d12df620-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xq9xn\" (UID: \"f49e079e-8128-4d5c-843d-54c6d12df620\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xq9xn"
Jan 05 20:33:47 crc kubenswrapper[4754]: I0105 20:33:47.916556 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfpts\" (UniqueName: \"kubernetes.io/projected/f49e079e-8128-4d5c-843d-54c6d12df620-kube-api-access-zfpts\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xq9xn\" (UID: \"f49e079e-8128-4d5c-843d-54c6d12df620\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xq9xn"
Jan 05 20:33:48 crc kubenswrapper[4754]: I0105 20:33:48.019037 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f49e079e-8128-4d5c-843d-54c6d12df620-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xq9xn\" (UID: \"f49e079e-8128-4d5c-843d-54c6d12df620\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xq9xn"
Jan 05 20:33:48 crc kubenswrapper[4754]: I0105 20:33:48.019201 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfpts\" (UniqueName: \"kubernetes.io/projected/f49e079e-8128-4d5c-843d-54c6d12df620-kube-api-access-zfpts\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xq9xn\" (UID: \"f49e079e-8128-4d5c-843d-54c6d12df620\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xq9xn"
Jan 05 20:33:48 crc kubenswrapper[4754]: I0105 20:33:48.019410 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f49e079e-8128-4d5c-843d-54c6d12df620-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xq9xn\" (UID: \"f49e079e-8128-4d5c-843d-54c6d12df620\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xq9xn"
Jan 05 20:33:48 crc kubenswrapper[4754]: I0105 20:33:48.032519 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f49e079e-8128-4d5c-843d-54c6d12df620-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xq9xn\" (UID: \"f49e079e-8128-4d5c-843d-54c6d12df620\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xq9xn"
Jan 05 20:33:48 crc kubenswrapper[4754]: I0105 20:33:48.036853 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f49e079e-8128-4d5c-843d-54c6d12df620-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xq9xn\" (UID: \"f49e079e-8128-4d5c-843d-54c6d12df620\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xq9xn"
Jan 05 20:33:48 crc kubenswrapper[4754]: I0105 20:33:48.043757 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfpts\" (UniqueName: \"kubernetes.io/projected/f49e079e-8128-4d5c-843d-54c6d12df620-kube-api-access-zfpts\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xq9xn\" (UID: \"f49e079e-8128-4d5c-843d-54c6d12df620\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xq9xn"
Jan 05 20:33:48 crc kubenswrapper[4754]: I0105 20:33:48.159733 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"]
Jan 05 20:33:48 crc kubenswrapper[4754]: I0105 20:33:48.160257 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="4bc52a8d-ecb8-4b19-8275-b739d23f7c43" containerName="aodh-api" containerID="cri-o://688b67dfc8e2f261e0619603fb90dcd07166c19636141c334541ae6fe35066e9" gracePeriod=30
Jan 05 20:33:48 crc kubenswrapper[4754]: I0105 20:33:48.160329 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="4bc52a8d-ecb8-4b19-8275-b739d23f7c43" containerName="aodh-notifier" containerID="cri-o://8dde50274cf5ef44f7eac7c1cab4e66e5a8dfb765a5c072f69370e255c686c5f" gracePeriod=30
Jan 05 20:33:48 crc kubenswrapper[4754]: I0105 20:33:48.160433 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="4bc52a8d-ecb8-4b19-8275-b739d23f7c43" containerName="aodh-evaluator" containerID="cri-o://930f4358c981d704b636d8541e90ab1ed3311be36fdbdada64858c06452062f7" gracePeriod=30
Jan 05 20:33:48 crc kubenswrapper[4754]: I0105 20:33:48.160285 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="4bc52a8d-ecb8-4b19-8275-b739d23f7c43" containerName="aodh-listener" containerID="cri-o://49d92a3a7d6f14b29736eabee5285e9f50951e020dd080b1f525c989c147d5bc" gracePeriod=30
Jan 05 20:33:48 crc kubenswrapper[4754]: I0105 20:33:48.177904 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xq9xn"
Jan 05 20:33:48 crc kubenswrapper[4754]: I0105 20:33:48.764402 4754 generic.go:334] "Generic (PLEG): container finished" podID="4bc52a8d-ecb8-4b19-8275-b739d23f7c43" containerID="930f4358c981d704b636d8541e90ab1ed3311be36fdbdada64858c06452062f7" exitCode=0
Jan 05 20:33:48 crc kubenswrapper[4754]: I0105 20:33:48.764448 4754 generic.go:334] "Generic (PLEG): container finished" podID="4bc52a8d-ecb8-4b19-8275-b739d23f7c43" containerID="688b67dfc8e2f261e0619603fb90dcd07166c19636141c334541ae6fe35066e9" exitCode=0
Jan 05 20:33:48 crc kubenswrapper[4754]: I0105 20:33:48.764470 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"4bc52a8d-ecb8-4b19-8275-b739d23f7c43","Type":"ContainerDied","Data":"930f4358c981d704b636d8541e90ab1ed3311be36fdbdada64858c06452062f7"}
Jan 05 20:33:48 crc kubenswrapper[4754]: I0105 20:33:48.764510 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"4bc52a8d-ecb8-4b19-8275-b739d23f7c43","Type":"ContainerDied","Data":"688b67dfc8e2f261e0619603fb90dcd07166c19636141c334541ae6fe35066e9"}
Jan 05 20:33:48 crc kubenswrapper[4754]: W0105 20:33:48.947424 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf49e079e_8128_4d5c_843d_54c6d12df620.slice/crio-24cfc94620818148624282ded536e479b988408bb7d2d9438c5ad36c4fd885b0 WatchSource:0}: Error finding container 24cfc94620818148624282ded536e479b988408bb7d2d9438c5ad36c4fd885b0: Status 404 returned error can't find the container with id 24cfc94620818148624282ded536e479b988408bb7d2d9438c5ad36c4fd885b0
Jan 05 20:33:48 crc kubenswrapper[4754]: I0105 20:33:48.952109 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-xq9xn"]
Jan 05 20:33:49 crc kubenswrapper[4754]: I0105 20:33:49.780375 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xq9xn" event={"ID":"f49e079e-8128-4d5c-843d-54c6d12df620","Type":"ContainerStarted","Data":"24cfc94620818148624282ded536e479b988408bb7d2d9438c5ad36c4fd885b0"}
Jan 05 20:33:50 crc kubenswrapper[4754]: I0105 20:33:50.794314 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xq9xn" event={"ID":"f49e079e-8128-4d5c-843d-54c6d12df620","Type":"ContainerStarted","Data":"abd5aff86f4977418d06dc8dc04256ed0bba274b133991c76a20936ce7e3d22e"}
Jan 05 20:33:50 crc kubenswrapper[4754]: I0105 20:33:50.817034 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xq9xn" podStartSLOduration=3.037601354 podStartE2EDuration="3.817008078s" podCreationTimestamp="2026-01-05 20:33:47 +0000 UTC" firstStartedPulling="2026-01-05 20:33:48.95167499 +0000 UTC m=+1715.660858864" lastFinishedPulling="2026-01-05 20:33:49.731081714 +0000 UTC m=+1716.440265588" observedRunningTime="2026-01-05 20:33:50.814515163 +0000 UTC m=+1717.523699057" watchObservedRunningTime="2026-01-05 20:33:50.817008078 +0000 UTC m=+1717.526191972"
Jan 05 20:33:51 crc kubenswrapper[4754]: I0105 20:33:51.588696 4754 scope.go:117] "RemoveContainer" containerID="1ca93b3e456962380f07853ce3cbf0c6bfbaabb69bd3827ae3e68631c1bd1572"
Jan 05 20:33:51 crc kubenswrapper[4754]: E0105 20:33:51.589186 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280"
Jan 05 20:33:52 crc kubenswrapper[4754]: I0105 20:33:52.467615 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2"
Jan 05 20:33:52 crc kubenswrapper[4754]: I0105 20:33:52.550959 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"]
Jan 05 20:33:52 crc kubenswrapper[4754]: I0105 20:33:52.581581 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Jan 05 20:33:52 crc kubenswrapper[4754]: I0105 20:33:52.819424 4754 generic.go:334] "Generic (PLEG): container finished" podID="f49e079e-8128-4d5c-843d-54c6d12df620" containerID="abd5aff86f4977418d06dc8dc04256ed0bba274b133991c76a20936ce7e3d22e" exitCode=0
Jan 05 20:33:52 crc kubenswrapper[4754]: I0105 20:33:52.819498 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xq9xn" event={"ID":"f49e079e-8128-4d5c-843d-54c6d12df620","Type":"ContainerDied","Data":"abd5aff86f4977418d06dc8dc04256ed0bba274b133991c76a20936ce7e3d22e"}
Jan 05 20:33:52 crc kubenswrapper[4754]: I0105 20:33:52.823568 4754 generic.go:334] "Generic (PLEG): container finished" podID="4bc52a8d-ecb8-4b19-8275-b739d23f7c43" containerID="49d92a3a7d6f14b29736eabee5285e9f50951e020dd080b1f525c989c147d5bc" exitCode=0
Jan 05 20:33:52 crc kubenswrapper[4754]: I0105 20:33:52.823609 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"4bc52a8d-ecb8-4b19-8275-b739d23f7c43","Type":"ContainerDied","Data":"49d92a3a7d6f14b29736eabee5285e9f50951e020dd080b1f525c989c147d5bc"}
Jan 05 20:33:53 crc kubenswrapper[4754]: I0105 20:33:53.300360 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Jan 05 20:33:53 crc kubenswrapper[4754]: I0105 20:33:53.481238 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bc52a8d-ecb8-4b19-8275-b739d23f7c43-public-tls-certs\") pod \"4bc52a8d-ecb8-4b19-8275-b739d23f7c43\" (UID: \"4bc52a8d-ecb8-4b19-8275-b739d23f7c43\") "
Jan 05 20:33:53 crc kubenswrapper[4754]: I0105 20:33:53.481304 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88qlq\" (UniqueName: \"kubernetes.io/projected/4bc52a8d-ecb8-4b19-8275-b739d23f7c43-kube-api-access-88qlq\") pod \"4bc52a8d-ecb8-4b19-8275-b739d23f7c43\" (UID: \"4bc52a8d-ecb8-4b19-8275-b739d23f7c43\") "
Jan 05 20:33:53 crc kubenswrapper[4754]: I0105 20:33:53.481357 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bc52a8d-ecb8-4b19-8275-b739d23f7c43-config-data\") pod \"4bc52a8d-ecb8-4b19-8275-b739d23f7c43\" (UID: \"4bc52a8d-ecb8-4b19-8275-b739d23f7c43\") "
Jan 05 20:33:53 crc kubenswrapper[4754]: I0105 20:33:53.481602 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bc52a8d-ecb8-4b19-8275-b739d23f7c43-scripts\") pod \"4bc52a8d-ecb8-4b19-8275-b739d23f7c43\" (UID: \"4bc52a8d-ecb8-4b19-8275-b739d23f7c43\") "
Jan 05 20:33:53 crc kubenswrapper[4754]: I0105
20:33:53.481964 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bc52a8d-ecb8-4b19-8275-b739d23f7c43-combined-ca-bundle\") pod \"4bc52a8d-ecb8-4b19-8275-b739d23f7c43\" (UID: \"4bc52a8d-ecb8-4b19-8275-b739d23f7c43\") " Jan 05 20:33:53 crc kubenswrapper[4754]: I0105 20:33:53.482053 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bc52a8d-ecb8-4b19-8275-b739d23f7c43-internal-tls-certs\") pod \"4bc52a8d-ecb8-4b19-8275-b739d23f7c43\" (UID: \"4bc52a8d-ecb8-4b19-8275-b739d23f7c43\") " Jan 05 20:33:53 crc kubenswrapper[4754]: I0105 20:33:53.500575 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bc52a8d-ecb8-4b19-8275-b739d23f7c43-scripts" (OuterVolumeSpecName: "scripts") pod "4bc52a8d-ecb8-4b19-8275-b739d23f7c43" (UID: "4bc52a8d-ecb8-4b19-8275-b739d23f7c43"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:33:53 crc kubenswrapper[4754]: I0105 20:33:53.505563 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bc52a8d-ecb8-4b19-8275-b739d23f7c43-kube-api-access-88qlq" (OuterVolumeSpecName: "kube-api-access-88qlq") pod "4bc52a8d-ecb8-4b19-8275-b739d23f7c43" (UID: "4bc52a8d-ecb8-4b19-8275-b739d23f7c43"). InnerVolumeSpecName "kube-api-access-88qlq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:33:53 crc kubenswrapper[4754]: I0105 20:33:53.573433 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bc52a8d-ecb8-4b19-8275-b739d23f7c43-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4bc52a8d-ecb8-4b19-8275-b739d23f7c43" (UID: "4bc52a8d-ecb8-4b19-8275-b739d23f7c43"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:33:53 crc kubenswrapper[4754]: I0105 20:33:53.585727 4754 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bc52a8d-ecb8-4b19-8275-b739d23f7c43-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 20:33:53 crc kubenswrapper[4754]: I0105 20:33:53.585769 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88qlq\" (UniqueName: \"kubernetes.io/projected/4bc52a8d-ecb8-4b19-8275-b739d23f7c43-kube-api-access-88qlq\") on node \"crc\" DevicePath \"\"" Jan 05 20:33:53 crc kubenswrapper[4754]: I0105 20:33:53.585788 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bc52a8d-ecb8-4b19-8275-b739d23f7c43-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 20:33:53 crc kubenswrapper[4754]: I0105 20:33:53.602913 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bc52a8d-ecb8-4b19-8275-b739d23f7c43-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4bc52a8d-ecb8-4b19-8275-b739d23f7c43" (UID: "4bc52a8d-ecb8-4b19-8275-b739d23f7c43"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:33:53 crc kubenswrapper[4754]: I0105 20:33:53.625535 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bc52a8d-ecb8-4b19-8275-b739d23f7c43-config-data" (OuterVolumeSpecName: "config-data") pod "4bc52a8d-ecb8-4b19-8275-b739d23f7c43" (UID: "4bc52a8d-ecb8-4b19-8275-b739d23f7c43"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:33:53 crc kubenswrapper[4754]: I0105 20:33:53.637363 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bc52a8d-ecb8-4b19-8275-b739d23f7c43-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4bc52a8d-ecb8-4b19-8275-b739d23f7c43" (UID: "4bc52a8d-ecb8-4b19-8275-b739d23f7c43"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:33:53 crc kubenswrapper[4754]: I0105 20:33:53.688787 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bc52a8d-ecb8-4b19-8275-b739d23f7c43-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:33:53 crc kubenswrapper[4754]: I0105 20:33:53.688826 4754 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bc52a8d-ecb8-4b19-8275-b739d23f7c43-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 20:33:53 crc kubenswrapper[4754]: I0105 20:33:53.688838 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bc52a8d-ecb8-4b19-8275-b739d23f7c43-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 20:33:53 crc kubenswrapper[4754]: I0105 20:33:53.837548 4754 generic.go:334] "Generic (PLEG): container finished" podID="4bc52a8d-ecb8-4b19-8275-b739d23f7c43" containerID="8dde50274cf5ef44f7eac7c1cab4e66e5a8dfb765a5c072f69370e255c686c5f" exitCode=0 Jan 05 20:33:53 crc kubenswrapper[4754]: I0105 20:33:53.837635 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 05 20:33:53 crc kubenswrapper[4754]: I0105 20:33:53.837634 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"4bc52a8d-ecb8-4b19-8275-b739d23f7c43","Type":"ContainerDied","Data":"8dde50274cf5ef44f7eac7c1cab4e66e5a8dfb765a5c072f69370e255c686c5f"} Jan 05 20:33:53 crc kubenswrapper[4754]: I0105 20:33:53.837690 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"4bc52a8d-ecb8-4b19-8275-b739d23f7c43","Type":"ContainerDied","Data":"2408eefbe1c039863d9d322fe18d70cdeca0569bcbd35af944c8bd27588b4182"} Jan 05 20:33:53 crc kubenswrapper[4754]: I0105 20:33:53.837712 4754 scope.go:117] "RemoveContainer" containerID="49d92a3a7d6f14b29736eabee5285e9f50951e020dd080b1f525c989c147d5bc" Jan 05 20:33:53 crc kubenswrapper[4754]: I0105 20:33:53.878654 4754 scope.go:117] "RemoveContainer" containerID="8dde50274cf5ef44f7eac7c1cab4e66e5a8dfb765a5c072f69370e255c686c5f" Jan 05 20:33:53 crc kubenswrapper[4754]: I0105 20:33:53.885021 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 05 20:33:53 crc kubenswrapper[4754]: I0105 20:33:53.910026 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Jan 05 20:33:53 crc kubenswrapper[4754]: I0105 20:33:53.924048 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 05 20:33:53 crc kubenswrapper[4754]: E0105 20:33:53.924542 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bc52a8d-ecb8-4b19-8275-b739d23f7c43" containerName="aodh-listener" Jan 05 20:33:53 crc kubenswrapper[4754]: I0105 20:33:53.924554 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bc52a8d-ecb8-4b19-8275-b739d23f7c43" containerName="aodh-listener" Jan 05 20:33:53 crc kubenswrapper[4754]: E0105 20:33:53.924567 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bc52a8d-ecb8-4b19-8275-b739d23f7c43" 
containerName="aodh-notifier" Jan 05 20:33:53 crc kubenswrapper[4754]: I0105 20:33:53.924573 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bc52a8d-ecb8-4b19-8275-b739d23f7c43" containerName="aodh-notifier" Jan 05 20:33:53 crc kubenswrapper[4754]: E0105 20:33:53.924613 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bc52a8d-ecb8-4b19-8275-b739d23f7c43" containerName="aodh-api" Jan 05 20:33:53 crc kubenswrapper[4754]: I0105 20:33:53.924619 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bc52a8d-ecb8-4b19-8275-b739d23f7c43" containerName="aodh-api" Jan 05 20:33:53 crc kubenswrapper[4754]: E0105 20:33:53.924626 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bc52a8d-ecb8-4b19-8275-b739d23f7c43" containerName="aodh-evaluator" Jan 05 20:33:53 crc kubenswrapper[4754]: I0105 20:33:53.924632 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bc52a8d-ecb8-4b19-8275-b739d23f7c43" containerName="aodh-evaluator" Jan 05 20:33:53 crc kubenswrapper[4754]: I0105 20:33:53.924855 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bc52a8d-ecb8-4b19-8275-b739d23f7c43" containerName="aodh-listener" Jan 05 20:33:53 crc kubenswrapper[4754]: I0105 20:33:53.924864 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bc52a8d-ecb8-4b19-8275-b739d23f7c43" containerName="aodh-api" Jan 05 20:33:53 crc kubenswrapper[4754]: I0105 20:33:53.924876 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bc52a8d-ecb8-4b19-8275-b739d23f7c43" containerName="aodh-notifier" Jan 05 20:33:53 crc kubenswrapper[4754]: I0105 20:33:53.924897 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bc52a8d-ecb8-4b19-8275-b739d23f7c43" containerName="aodh-evaluator" Jan 05 20:33:53 crc kubenswrapper[4754]: I0105 20:33:53.927109 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 05 20:33:53 crc kubenswrapper[4754]: I0105 20:33:53.935149 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 05 20:33:53 crc kubenswrapper[4754]: I0105 20:33:53.960973 4754 scope.go:117] "RemoveContainer" containerID="930f4358c981d704b636d8541e90ab1ed3311be36fdbdada64858c06452062f7" Jan 05 20:33:53 crc kubenswrapper[4754]: I0105 20:33:53.965497 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Jan 05 20:33:53 crc kubenswrapper[4754]: I0105 20:33:53.965780 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Jan 05 20:33:53 crc kubenswrapper[4754]: I0105 20:33:53.966019 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 05 20:33:53 crc kubenswrapper[4754]: I0105 20:33:53.966173 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 05 20:33:53 crc kubenswrapper[4754]: I0105 20:33:53.966541 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-kq9pz" Jan 05 20:33:53 crc kubenswrapper[4754]: I0105 20:33:53.997553 4754 scope.go:117] "RemoveContainer" containerID="688b67dfc8e2f261e0619603fb90dcd07166c19636141c334541ae6fe35066e9" Jan 05 20:33:54 crc kubenswrapper[4754]: I0105 20:33:54.050214 4754 scope.go:117] "RemoveContainer" containerID="49d92a3a7d6f14b29736eabee5285e9f50951e020dd080b1f525c989c147d5bc" Jan 05 20:33:54 crc kubenswrapper[4754]: E0105 20:33:54.051027 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49d92a3a7d6f14b29736eabee5285e9f50951e020dd080b1f525c989c147d5bc\": container with ID starting with 49d92a3a7d6f14b29736eabee5285e9f50951e020dd080b1f525c989c147d5bc not found: ID does not exist" 
containerID="49d92a3a7d6f14b29736eabee5285e9f50951e020dd080b1f525c989c147d5bc" Jan 05 20:33:54 crc kubenswrapper[4754]: I0105 20:33:54.051058 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49d92a3a7d6f14b29736eabee5285e9f50951e020dd080b1f525c989c147d5bc"} err="failed to get container status \"49d92a3a7d6f14b29736eabee5285e9f50951e020dd080b1f525c989c147d5bc\": rpc error: code = NotFound desc = could not find container \"49d92a3a7d6f14b29736eabee5285e9f50951e020dd080b1f525c989c147d5bc\": container with ID starting with 49d92a3a7d6f14b29736eabee5285e9f50951e020dd080b1f525c989c147d5bc not found: ID does not exist" Jan 05 20:33:54 crc kubenswrapper[4754]: I0105 20:33:54.051086 4754 scope.go:117] "RemoveContainer" containerID="8dde50274cf5ef44f7eac7c1cab4e66e5a8dfb765a5c072f69370e255c686c5f" Jan 05 20:33:54 crc kubenswrapper[4754]: E0105 20:33:54.051397 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8dde50274cf5ef44f7eac7c1cab4e66e5a8dfb765a5c072f69370e255c686c5f\": container with ID starting with 8dde50274cf5ef44f7eac7c1cab4e66e5a8dfb765a5c072f69370e255c686c5f not found: ID does not exist" containerID="8dde50274cf5ef44f7eac7c1cab4e66e5a8dfb765a5c072f69370e255c686c5f" Jan 05 20:33:54 crc kubenswrapper[4754]: I0105 20:33:54.051419 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8dde50274cf5ef44f7eac7c1cab4e66e5a8dfb765a5c072f69370e255c686c5f"} err="failed to get container status \"8dde50274cf5ef44f7eac7c1cab4e66e5a8dfb765a5c072f69370e255c686c5f\": rpc error: code = NotFound desc = could not find container \"8dde50274cf5ef44f7eac7c1cab4e66e5a8dfb765a5c072f69370e255c686c5f\": container with ID starting with 8dde50274cf5ef44f7eac7c1cab4e66e5a8dfb765a5c072f69370e255c686c5f not found: ID does not exist" Jan 05 20:33:54 crc kubenswrapper[4754]: I0105 20:33:54.051437 4754 scope.go:117] 
"RemoveContainer" containerID="930f4358c981d704b636d8541e90ab1ed3311be36fdbdada64858c06452062f7" Jan 05 20:33:54 crc kubenswrapper[4754]: E0105 20:33:54.051889 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"930f4358c981d704b636d8541e90ab1ed3311be36fdbdada64858c06452062f7\": container with ID starting with 930f4358c981d704b636d8541e90ab1ed3311be36fdbdada64858c06452062f7 not found: ID does not exist" containerID="930f4358c981d704b636d8541e90ab1ed3311be36fdbdada64858c06452062f7" Jan 05 20:33:54 crc kubenswrapper[4754]: I0105 20:33:54.051912 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"930f4358c981d704b636d8541e90ab1ed3311be36fdbdada64858c06452062f7"} err="failed to get container status \"930f4358c981d704b636d8541e90ab1ed3311be36fdbdada64858c06452062f7\": rpc error: code = NotFound desc = could not find container \"930f4358c981d704b636d8541e90ab1ed3311be36fdbdada64858c06452062f7\": container with ID starting with 930f4358c981d704b636d8541e90ab1ed3311be36fdbdada64858c06452062f7 not found: ID does not exist" Jan 05 20:33:54 crc kubenswrapper[4754]: I0105 20:33:54.051927 4754 scope.go:117] "RemoveContainer" containerID="688b67dfc8e2f261e0619603fb90dcd07166c19636141c334541ae6fe35066e9" Jan 05 20:33:54 crc kubenswrapper[4754]: E0105 20:33:54.052143 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"688b67dfc8e2f261e0619603fb90dcd07166c19636141c334541ae6fe35066e9\": container with ID starting with 688b67dfc8e2f261e0619603fb90dcd07166c19636141c334541ae6fe35066e9 not found: ID does not exist" containerID="688b67dfc8e2f261e0619603fb90dcd07166c19636141c334541ae6fe35066e9" Jan 05 20:33:54 crc kubenswrapper[4754]: I0105 20:33:54.052161 4754 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"688b67dfc8e2f261e0619603fb90dcd07166c19636141c334541ae6fe35066e9"} err="failed to get container status \"688b67dfc8e2f261e0619603fb90dcd07166c19636141c334541ae6fe35066e9\": rpc error: code = NotFound desc = could not find container \"688b67dfc8e2f261e0619603fb90dcd07166c19636141c334541ae6fe35066e9\": container with ID starting with 688b67dfc8e2f261e0619603fb90dcd07166c19636141c334541ae6fe35066e9 not found: ID does not exist" Jan 05 20:33:54 crc kubenswrapper[4754]: I0105 20:33:54.098907 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/959170c4-f107-47da-baf8-7e3e49084424-public-tls-certs\") pod \"aodh-0\" (UID: \"959170c4-f107-47da-baf8-7e3e49084424\") " pod="openstack/aodh-0" Jan 05 20:33:54 crc kubenswrapper[4754]: I0105 20:33:54.098969 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/959170c4-f107-47da-baf8-7e3e49084424-scripts\") pod \"aodh-0\" (UID: \"959170c4-f107-47da-baf8-7e3e49084424\") " pod="openstack/aodh-0" Jan 05 20:33:54 crc kubenswrapper[4754]: I0105 20:33:54.099033 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/959170c4-f107-47da-baf8-7e3e49084424-internal-tls-certs\") pod \"aodh-0\" (UID: \"959170c4-f107-47da-baf8-7e3e49084424\") " pod="openstack/aodh-0" Jan 05 20:33:54 crc kubenswrapper[4754]: I0105 20:33:54.099088 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/959170c4-f107-47da-baf8-7e3e49084424-combined-ca-bundle\") pod \"aodh-0\" (UID: \"959170c4-f107-47da-baf8-7e3e49084424\") " pod="openstack/aodh-0" Jan 05 20:33:54 crc kubenswrapper[4754]: I0105 20:33:54.099118 4754 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ctlw\" (UniqueName: \"kubernetes.io/projected/959170c4-f107-47da-baf8-7e3e49084424-kube-api-access-4ctlw\") pod \"aodh-0\" (UID: \"959170c4-f107-47da-baf8-7e3e49084424\") " pod="openstack/aodh-0" Jan 05 20:33:54 crc kubenswrapper[4754]: I0105 20:33:54.099161 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/959170c4-f107-47da-baf8-7e3e49084424-config-data\") pod \"aodh-0\" (UID: \"959170c4-f107-47da-baf8-7e3e49084424\") " pod="openstack/aodh-0" Jan 05 20:33:54 crc kubenswrapper[4754]: I0105 20:33:54.208046 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/959170c4-f107-47da-baf8-7e3e49084424-scripts\") pod \"aodh-0\" (UID: \"959170c4-f107-47da-baf8-7e3e49084424\") " pod="openstack/aodh-0" Jan 05 20:33:54 crc kubenswrapper[4754]: I0105 20:33:54.208142 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/959170c4-f107-47da-baf8-7e3e49084424-internal-tls-certs\") pod \"aodh-0\" (UID: \"959170c4-f107-47da-baf8-7e3e49084424\") " pod="openstack/aodh-0" Jan 05 20:33:54 crc kubenswrapper[4754]: I0105 20:33:54.208203 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/959170c4-f107-47da-baf8-7e3e49084424-combined-ca-bundle\") pod \"aodh-0\" (UID: \"959170c4-f107-47da-baf8-7e3e49084424\") " pod="openstack/aodh-0" Jan 05 20:33:54 crc kubenswrapper[4754]: I0105 20:33:54.208233 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ctlw\" (UniqueName: \"kubernetes.io/projected/959170c4-f107-47da-baf8-7e3e49084424-kube-api-access-4ctlw\") pod \"aodh-0\" (UID: 
\"959170c4-f107-47da-baf8-7e3e49084424\") " pod="openstack/aodh-0" Jan 05 20:33:54 crc kubenswrapper[4754]: I0105 20:33:54.208277 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/959170c4-f107-47da-baf8-7e3e49084424-config-data\") pod \"aodh-0\" (UID: \"959170c4-f107-47da-baf8-7e3e49084424\") " pod="openstack/aodh-0" Jan 05 20:33:54 crc kubenswrapper[4754]: I0105 20:33:54.208357 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/959170c4-f107-47da-baf8-7e3e49084424-public-tls-certs\") pod \"aodh-0\" (UID: \"959170c4-f107-47da-baf8-7e3e49084424\") " pod="openstack/aodh-0" Jan 05 20:33:54 crc kubenswrapper[4754]: I0105 20:33:54.216490 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/959170c4-f107-47da-baf8-7e3e49084424-config-data\") pod \"aodh-0\" (UID: \"959170c4-f107-47da-baf8-7e3e49084424\") " pod="openstack/aodh-0" Jan 05 20:33:54 crc kubenswrapper[4754]: I0105 20:33:54.222145 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/959170c4-f107-47da-baf8-7e3e49084424-internal-tls-certs\") pod \"aodh-0\" (UID: \"959170c4-f107-47da-baf8-7e3e49084424\") " pod="openstack/aodh-0" Jan 05 20:33:54 crc kubenswrapper[4754]: I0105 20:33:54.229720 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/959170c4-f107-47da-baf8-7e3e49084424-scripts\") pod \"aodh-0\" (UID: \"959170c4-f107-47da-baf8-7e3e49084424\") " pod="openstack/aodh-0" Jan 05 20:33:54 crc kubenswrapper[4754]: I0105 20:33:54.233810 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/959170c4-f107-47da-baf8-7e3e49084424-public-tls-certs\") pod 
\"aodh-0\" (UID: \"959170c4-f107-47da-baf8-7e3e49084424\") " pod="openstack/aodh-0" Jan 05 20:33:54 crc kubenswrapper[4754]: I0105 20:33:54.235218 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/959170c4-f107-47da-baf8-7e3e49084424-combined-ca-bundle\") pod \"aodh-0\" (UID: \"959170c4-f107-47da-baf8-7e3e49084424\") " pod="openstack/aodh-0" Jan 05 20:33:54 crc kubenswrapper[4754]: I0105 20:33:54.269671 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ctlw\" (UniqueName: \"kubernetes.io/projected/959170c4-f107-47da-baf8-7e3e49084424-kube-api-access-4ctlw\") pod \"aodh-0\" (UID: \"959170c4-f107-47da-baf8-7e3e49084424\") " pod="openstack/aodh-0" Jan 05 20:33:54 crc kubenswrapper[4754]: I0105 20:33:54.287669 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 05 20:33:54 crc kubenswrapper[4754]: I0105 20:33:54.484987 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xq9xn" Jan 05 20:33:54 crc kubenswrapper[4754]: I0105 20:33:54.620936 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfpts\" (UniqueName: \"kubernetes.io/projected/f49e079e-8128-4d5c-843d-54c6d12df620-kube-api-access-zfpts\") pod \"f49e079e-8128-4d5c-843d-54c6d12df620\" (UID: \"f49e079e-8128-4d5c-843d-54c6d12df620\") " Jan 05 20:33:54 crc kubenswrapper[4754]: I0105 20:33:54.621086 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f49e079e-8128-4d5c-843d-54c6d12df620-ssh-key\") pod \"f49e079e-8128-4d5c-843d-54c6d12df620\" (UID: \"f49e079e-8128-4d5c-843d-54c6d12df620\") " Jan 05 20:33:54 crc kubenswrapper[4754]: I0105 20:33:54.621651 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f49e079e-8128-4d5c-843d-54c6d12df620-inventory\") pod \"f49e079e-8128-4d5c-843d-54c6d12df620\" (UID: \"f49e079e-8128-4d5c-843d-54c6d12df620\") " Jan 05 20:33:54 crc kubenswrapper[4754]: I0105 20:33:54.636575 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f49e079e-8128-4d5c-843d-54c6d12df620-kube-api-access-zfpts" (OuterVolumeSpecName: "kube-api-access-zfpts") pod "f49e079e-8128-4d5c-843d-54c6d12df620" (UID: "f49e079e-8128-4d5c-843d-54c6d12df620"). InnerVolumeSpecName "kube-api-access-zfpts". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:33:54 crc kubenswrapper[4754]: I0105 20:33:54.670320 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f49e079e-8128-4d5c-843d-54c6d12df620-inventory" (OuterVolumeSpecName: "inventory") pod "f49e079e-8128-4d5c-843d-54c6d12df620" (UID: "f49e079e-8128-4d5c-843d-54c6d12df620"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:33:54 crc kubenswrapper[4754]: I0105 20:33:54.675444 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f49e079e-8128-4d5c-843d-54c6d12df620-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f49e079e-8128-4d5c-843d-54c6d12df620" (UID: "f49e079e-8128-4d5c-843d-54c6d12df620"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:33:54 crc kubenswrapper[4754]: I0105 20:33:54.725810 4754 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f49e079e-8128-4d5c-843d-54c6d12df620-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 05 20:33:54 crc kubenswrapper[4754]: I0105 20:33:54.726086 4754 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f49e079e-8128-4d5c-843d-54c6d12df620-inventory\") on node \"crc\" DevicePath \"\"" Jan 05 20:33:54 crc kubenswrapper[4754]: I0105 20:33:54.726284 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfpts\" (UniqueName: \"kubernetes.io/projected/f49e079e-8128-4d5c-843d-54c6d12df620-kube-api-access-zfpts\") on node \"crc\" DevicePath \"\"" Jan 05 20:33:54 crc kubenswrapper[4754]: W0105 20:33:54.842782 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod959170c4_f107_47da_baf8_7e3e49084424.slice/crio-480e1ba0f05cd2248acdb5b72c2fc54f8727548667d5d9fff0318548cac46191 WatchSource:0}: Error finding container 480e1ba0f05cd2248acdb5b72c2fc54f8727548667d5d9fff0318548cac46191: Status 404 returned error can't find the container with id 480e1ba0f05cd2248acdb5b72c2fc54f8727548667d5d9fff0318548cac46191 Jan 05 20:33:54 crc kubenswrapper[4754]: I0105 20:33:54.848143 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 05 20:33:54 crc kubenswrapper[4754]: I0105 
20:33:54.850486 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xq9xn" Jan 05 20:33:54 crc kubenswrapper[4754]: I0105 20:33:54.851268 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xq9xn" event={"ID":"f49e079e-8128-4d5c-843d-54c6d12df620","Type":"ContainerDied","Data":"24cfc94620818148624282ded536e479b988408bb7d2d9438c5ad36c4fd885b0"} Jan 05 20:33:54 crc kubenswrapper[4754]: I0105 20:33:54.851361 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24cfc94620818148624282ded536e479b988408bb7d2d9438c5ad36c4fd885b0" Jan 05 20:33:54 crc kubenswrapper[4754]: I0105 20:33:54.943912 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4rrjw"] Jan 05 20:33:54 crc kubenswrapper[4754]: E0105 20:33:54.944834 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f49e079e-8128-4d5c-843d-54c6d12df620" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 05 20:33:54 crc kubenswrapper[4754]: I0105 20:33:54.944856 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="f49e079e-8128-4d5c-843d-54c6d12df620" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 05 20:33:54 crc kubenswrapper[4754]: I0105 20:33:54.945366 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="f49e079e-8128-4d5c-843d-54c6d12df620" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 05 20:33:54 crc kubenswrapper[4754]: I0105 20:33:54.946651 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4rrjw" Jan 05 20:33:54 crc kubenswrapper[4754]: I0105 20:33:54.949604 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 05 20:33:54 crc kubenswrapper[4754]: I0105 20:33:54.949842 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 05 20:33:54 crc kubenswrapper[4754]: I0105 20:33:54.949948 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 05 20:33:54 crc kubenswrapper[4754]: I0105 20:33:54.950101 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xdt6h" Jan 05 20:33:54 crc kubenswrapper[4754]: I0105 20:33:54.984674 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4rrjw"] Jan 05 20:33:55 crc kubenswrapper[4754]: I0105 20:33:55.134540 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwszc\" (UniqueName: \"kubernetes.io/projected/8c0bfa36-b48b-4bba-b358-22c0a1001f5b-kube-api-access-mwszc\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4rrjw\" (UID: \"8c0bfa36-b48b-4bba-b358-22c0a1001f5b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4rrjw" Jan 05 20:33:55 crc kubenswrapper[4754]: I0105 20:33:55.134671 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c0bfa36-b48b-4bba-b358-22c0a1001f5b-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4rrjw\" (UID: \"8c0bfa36-b48b-4bba-b358-22c0a1001f5b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4rrjw" Jan 05 20:33:55 crc kubenswrapper[4754]: I0105 20:33:55.134969 4754 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c0bfa36-b48b-4bba-b358-22c0a1001f5b-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4rrjw\" (UID: \"8c0bfa36-b48b-4bba-b358-22c0a1001f5b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4rrjw" Jan 05 20:33:55 crc kubenswrapper[4754]: I0105 20:33:55.135030 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8c0bfa36-b48b-4bba-b358-22c0a1001f5b-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4rrjw\" (UID: \"8c0bfa36-b48b-4bba-b358-22c0a1001f5b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4rrjw" Jan 05 20:33:55 crc kubenswrapper[4754]: I0105 20:33:55.237587 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwszc\" (UniqueName: \"kubernetes.io/projected/8c0bfa36-b48b-4bba-b358-22c0a1001f5b-kube-api-access-mwszc\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4rrjw\" (UID: \"8c0bfa36-b48b-4bba-b358-22c0a1001f5b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4rrjw" Jan 05 20:33:55 crc kubenswrapper[4754]: I0105 20:33:55.238089 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c0bfa36-b48b-4bba-b358-22c0a1001f5b-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4rrjw\" (UID: \"8c0bfa36-b48b-4bba-b358-22c0a1001f5b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4rrjw" Jan 05 20:33:55 crc kubenswrapper[4754]: I0105 20:33:55.238221 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8c0bfa36-b48b-4bba-b358-22c0a1001f5b-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4rrjw\" (UID: \"8c0bfa36-b48b-4bba-b358-22c0a1001f5b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4rrjw" Jan 05 20:33:55 crc kubenswrapper[4754]: I0105 20:33:55.238254 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8c0bfa36-b48b-4bba-b358-22c0a1001f5b-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4rrjw\" (UID: \"8c0bfa36-b48b-4bba-b358-22c0a1001f5b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4rrjw" Jan 05 20:33:55 crc kubenswrapper[4754]: I0105 20:33:55.243465 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c0bfa36-b48b-4bba-b358-22c0a1001f5b-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4rrjw\" (UID: \"8c0bfa36-b48b-4bba-b358-22c0a1001f5b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4rrjw" Jan 05 20:33:55 crc kubenswrapper[4754]: I0105 20:33:55.244636 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8c0bfa36-b48b-4bba-b358-22c0a1001f5b-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4rrjw\" (UID: \"8c0bfa36-b48b-4bba-b358-22c0a1001f5b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4rrjw" Jan 05 20:33:55 crc kubenswrapper[4754]: I0105 20:33:55.248146 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c0bfa36-b48b-4bba-b358-22c0a1001f5b-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4rrjw\" (UID: \"8c0bfa36-b48b-4bba-b358-22c0a1001f5b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4rrjw" Jan 05 20:33:55 crc 
kubenswrapper[4754]: I0105 20:33:55.261686 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwszc\" (UniqueName: \"kubernetes.io/projected/8c0bfa36-b48b-4bba-b358-22c0a1001f5b-kube-api-access-mwszc\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4rrjw\" (UID: \"8c0bfa36-b48b-4bba-b358-22c0a1001f5b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4rrjw" Jan 05 20:33:55 crc kubenswrapper[4754]: I0105 20:33:55.270441 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4rrjw" Jan 05 20:33:55 crc kubenswrapper[4754]: I0105 20:33:55.608130 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bc52a8d-ecb8-4b19-8275-b739d23f7c43" path="/var/lib/kubelet/pods/4bc52a8d-ecb8-4b19-8275-b739d23f7c43/volumes" Jan 05 20:33:55 crc kubenswrapper[4754]: I0105 20:33:55.818978 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4rrjw"] Jan 05 20:33:55 crc kubenswrapper[4754]: I0105 20:33:55.867854 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"959170c4-f107-47da-baf8-7e3e49084424","Type":"ContainerStarted","Data":"dfd906eb72af0d2acf3f0a1c9978ee7100ca802241bd5f7aa4ddda08b726896e"} Jan 05 20:33:55 crc kubenswrapper[4754]: I0105 20:33:55.867914 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"959170c4-f107-47da-baf8-7e3e49084424","Type":"ContainerStarted","Data":"480e1ba0f05cd2248acdb5b72c2fc54f8727548667d5d9fff0318548cac46191"} Jan 05 20:33:56 crc kubenswrapper[4754]: W0105 20:33:56.039479 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c0bfa36_b48b_4bba_b358_22c0a1001f5b.slice/crio-63481e670ee46bdab14b88cf57111934d4f18081dbfbc31bc0f099e0067cbbc3 WatchSource:0}: Error finding 
container 63481e670ee46bdab14b88cf57111934d4f18081dbfbc31bc0f099e0067cbbc3: Status 404 returned error can't find the container with id 63481e670ee46bdab14b88cf57111934d4f18081dbfbc31bc0f099e0067cbbc3 Jan 05 20:33:56 crc kubenswrapper[4754]: I0105 20:33:56.899955 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"959170c4-f107-47da-baf8-7e3e49084424","Type":"ContainerStarted","Data":"86e4f41b0e8140ae37e6e3368c16fc374ea216bff4b69121847d5709e4856be3"} Jan 05 20:33:56 crc kubenswrapper[4754]: I0105 20:33:56.903442 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4rrjw" event={"ID":"8c0bfa36-b48b-4bba-b358-22c0a1001f5b","Type":"ContainerStarted","Data":"63481e670ee46bdab14b88cf57111934d4f18081dbfbc31bc0f099e0067cbbc3"} Jan 05 20:33:57 crc kubenswrapper[4754]: I0105 20:33:57.537664 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-1" podUID="efbb8662-26f9-44ac-aba4-ecc1fc6a4d58" containerName="rabbitmq" containerID="cri-o://baf7c634a969aa0a63f9c70b6ee70cbeb4db04e11becc8c7030f9850939268ee" gracePeriod=604796 Jan 05 20:33:57 crc kubenswrapper[4754]: I0105 20:33:57.538824 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="efbb8662-26f9-44ac-aba4-ecc1fc6a4d58" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused" Jan 05 20:33:57 crc kubenswrapper[4754]: I0105 20:33:57.919776 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4rrjw" event={"ID":"8c0bfa36-b48b-4bba-b358-22c0a1001f5b","Type":"ContainerStarted","Data":"63e7cefea97aa4e6b10206f5e26a3c540317844c9fc239507dfac79ad8ee3fea"} Jan 05 20:33:57 crc kubenswrapper[4754]: I0105 20:33:57.922149 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"959170c4-f107-47da-baf8-7e3e49084424","Type":"ContainerStarted","Data":"0593bdc8006805c2131054cee27e6ed6520e3728e0de60cacd4dd9a477dca4a2"} Jan 05 20:33:57 crc kubenswrapper[4754]: I0105 20:33:57.944834 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4rrjw" podStartSLOduration=3.365412403 podStartE2EDuration="3.944817896s" podCreationTimestamp="2026-01-05 20:33:54 +0000 UTC" firstStartedPulling="2026-01-05 20:33:56.042339742 +0000 UTC m=+1722.751523616" lastFinishedPulling="2026-01-05 20:33:56.621745235 +0000 UTC m=+1723.330929109" observedRunningTime="2026-01-05 20:33:57.94114891 +0000 UTC m=+1724.650332794" watchObservedRunningTime="2026-01-05 20:33:57.944817896 +0000 UTC m=+1724.654001770" Jan 05 20:33:58 crc kubenswrapper[4754]: I0105 20:33:58.936071 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"959170c4-f107-47da-baf8-7e3e49084424","Type":"ContainerStarted","Data":"9dd5aab1a21ccb17bb2dc67cf3d3122cfdf9a902a5d32313caaa595e1adf2d56"} Jan 05 20:33:58 crc kubenswrapper[4754]: I0105 20:33:58.970060 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.2895764659999998 podStartE2EDuration="5.970040526s" podCreationTimestamp="2026-01-05 20:33:53 +0000 UTC" firstStartedPulling="2026-01-05 20:33:54.849240194 +0000 UTC m=+1721.558424068" lastFinishedPulling="2026-01-05 20:33:58.529704244 +0000 UTC m=+1725.238888128" observedRunningTime="2026-01-05 20:33:58.957055975 +0000 UTC m=+1725.666239849" watchObservedRunningTime="2026-01-05 20:33:58.970040526 +0000 UTC m=+1725.679224400" Jan 05 20:34:04 crc kubenswrapper[4754]: I0105 20:34:04.023577 4754 generic.go:334] "Generic (PLEG): container finished" podID="efbb8662-26f9-44ac-aba4-ecc1fc6a4d58" containerID="baf7c634a969aa0a63f9c70b6ee70cbeb4db04e11becc8c7030f9850939268ee" exitCode=0 Jan 05 20:34:04 crc 
kubenswrapper[4754]: I0105 20:34:04.023652 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58","Type":"ContainerDied","Data":"baf7c634a969aa0a63f9c70b6ee70cbeb4db04e11becc8c7030f9850939268ee"} Jan 05 20:34:04 crc kubenswrapper[4754]: I0105 20:34:04.273885 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 05 20:34:04 crc kubenswrapper[4754]: I0105 20:34:04.420492 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-88986084-b876-4fbb-b01c-4b8d50151b06\") pod \"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\" (UID: \"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\") " Jan 05 20:34:04 crc kubenswrapper[4754]: I0105 20:34:04.420617 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-rabbitmq-tls\") pod \"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\" (UID: \"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\") " Jan 05 20:34:04 crc kubenswrapper[4754]: I0105 20:34:04.420703 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-server-conf\") pod \"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\" (UID: \"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\") " Jan 05 20:34:04 crc kubenswrapper[4754]: I0105 20:34:04.420725 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-plugins-conf\") pod \"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\" (UID: \"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\") " Jan 05 20:34:04 crc kubenswrapper[4754]: I0105 20:34:04.420755 4754 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-rabbitmq-confd\") pod \"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\" (UID: \"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\") " Jan 05 20:34:04 crc kubenswrapper[4754]: I0105 20:34:04.420801 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8trsj\" (UniqueName: \"kubernetes.io/projected/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-kube-api-access-8trsj\") pod \"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\" (UID: \"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\") " Jan 05 20:34:04 crc kubenswrapper[4754]: I0105 20:34:04.420841 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-rabbitmq-plugins\") pod \"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\" (UID: \"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\") " Jan 05 20:34:04 crc kubenswrapper[4754]: I0105 20:34:04.420890 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-rabbitmq-erlang-cookie\") pod \"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\" (UID: \"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\") " Jan 05 20:34:04 crc kubenswrapper[4754]: I0105 20:34:04.420965 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-erlang-cookie-secret\") pod \"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\" (UID: \"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\") " Jan 05 20:34:04 crc kubenswrapper[4754]: I0105 20:34:04.420981 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-config-data\") pod 
\"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\" (UID: \"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\") " Jan 05 20:34:04 crc kubenswrapper[4754]: I0105 20:34:04.421019 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-pod-info\") pod \"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\" (UID: \"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58\") " Jan 05 20:34:04 crc kubenswrapper[4754]: I0105 20:34:04.421897 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "efbb8662-26f9-44ac-aba4-ecc1fc6a4d58" (UID: "efbb8662-26f9-44ac-aba4-ecc1fc6a4d58"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:34:04 crc kubenswrapper[4754]: I0105 20:34:04.422383 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "efbb8662-26f9-44ac-aba4-ecc1fc6a4d58" (UID: "efbb8662-26f9-44ac-aba4-ecc1fc6a4d58"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:34:04 crc kubenswrapper[4754]: I0105 20:34:04.422717 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "efbb8662-26f9-44ac-aba4-ecc1fc6a4d58" (UID: "efbb8662-26f9-44ac-aba4-ecc1fc6a4d58"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:34:04 crc kubenswrapper[4754]: I0105 20:34:04.427479 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "efbb8662-26f9-44ac-aba4-ecc1fc6a4d58" (UID: "efbb8662-26f9-44ac-aba4-ecc1fc6a4d58"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:34:04 crc kubenswrapper[4754]: I0105 20:34:04.427736 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "efbb8662-26f9-44ac-aba4-ecc1fc6a4d58" (UID: "efbb8662-26f9-44ac-aba4-ecc1fc6a4d58"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:34:04 crc kubenswrapper[4754]: I0105 20:34:04.427966 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-kube-api-access-8trsj" (OuterVolumeSpecName: "kube-api-access-8trsj") pod "efbb8662-26f9-44ac-aba4-ecc1fc6a4d58" (UID: "efbb8662-26f9-44ac-aba4-ecc1fc6a4d58"). InnerVolumeSpecName "kube-api-access-8trsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:34:04 crc kubenswrapper[4754]: I0105 20:34:04.428982 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-pod-info" (OuterVolumeSpecName: "pod-info") pod "efbb8662-26f9-44ac-aba4-ecc1fc6a4d58" (UID: "efbb8662-26f9-44ac-aba4-ecc1fc6a4d58"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 05 20:34:04 crc kubenswrapper[4754]: I0105 20:34:04.439208 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-88986084-b876-4fbb-b01c-4b8d50151b06" (OuterVolumeSpecName: "persistence") pod "efbb8662-26f9-44ac-aba4-ecc1fc6a4d58" (UID: "efbb8662-26f9-44ac-aba4-ecc1fc6a4d58"). InnerVolumeSpecName "pvc-88986084-b876-4fbb-b01c-4b8d50151b06". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 05 20:34:04 crc kubenswrapper[4754]: I0105 20:34:04.449474 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-config-data" (OuterVolumeSpecName: "config-data") pod "efbb8662-26f9-44ac-aba4-ecc1fc6a4d58" (UID: "efbb8662-26f9-44ac-aba4-ecc1fc6a4d58"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:34:04 crc kubenswrapper[4754]: I0105 20:34:04.475113 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-server-conf" (OuterVolumeSpecName: "server-conf") pod "efbb8662-26f9-44ac-aba4-ecc1fc6a4d58" (UID: "efbb8662-26f9-44ac-aba4-ecc1fc6a4d58"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:34:04 crc kubenswrapper[4754]: I0105 20:34:04.524074 4754 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 05 20:34:04 crc kubenswrapper[4754]: I0105 20:34:04.524304 4754 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 05 20:34:04 crc kubenswrapper[4754]: I0105 20:34:04.524380 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 20:34:04 crc kubenswrapper[4754]: I0105 20:34:04.524475 4754 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-pod-info\") on node \"crc\" DevicePath \"\"" Jan 05 20:34:04 crc kubenswrapper[4754]: I0105 20:34:04.524551 4754 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-88986084-b876-4fbb-b01c-4b8d50151b06\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-88986084-b876-4fbb-b01c-4b8d50151b06\") on node \"crc\" " Jan 05 20:34:04 crc kubenswrapper[4754]: I0105 20:34:04.524610 4754 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 05 20:34:04 crc kubenswrapper[4754]: I0105 20:34:04.524665 4754 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-server-conf\") on node \"crc\" DevicePath \"\"" Jan 05 20:34:04 
crc kubenswrapper[4754]: I0105 20:34:04.524721 4754 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 05 20:34:04 crc kubenswrapper[4754]: I0105 20:34:04.524785 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8trsj\" (UniqueName: \"kubernetes.io/projected/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-kube-api-access-8trsj\") on node \"crc\" DevicePath \"\"" Jan 05 20:34:04 crc kubenswrapper[4754]: I0105 20:34:04.524843 4754 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 05 20:34:04 crc kubenswrapper[4754]: I0105 20:34:04.551209 4754 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 05 20:34:04 crc kubenswrapper[4754]: I0105 20:34:04.551538 4754 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-88986084-b876-4fbb-b01c-4b8d50151b06" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-88986084-b876-4fbb-b01c-4b8d50151b06") on node "crc" Jan 05 20:34:04 crc kubenswrapper[4754]: I0105 20:34:04.574476 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "efbb8662-26f9-44ac-aba4-ecc1fc6a4d58" (UID: "efbb8662-26f9-44ac-aba4-ecc1fc6a4d58"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:34:04 crc kubenswrapper[4754]: I0105 20:34:04.627119 4754 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 05 20:34:04 crc kubenswrapper[4754]: I0105 20:34:04.627149 4754 reconciler_common.go:293] "Volume detached for volume \"pvc-88986084-b876-4fbb-b01c-4b8d50151b06\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-88986084-b876-4fbb-b01c-4b8d50151b06\") on node \"crc\" DevicePath \"\"" Jan 05 20:34:05 crc kubenswrapper[4754]: I0105 20:34:05.036658 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"efbb8662-26f9-44ac-aba4-ecc1fc6a4d58","Type":"ContainerDied","Data":"af88a6a54804b8611827b356f77e7d6807fbbb85701ec441949be10b8a835730"} Jan 05 20:34:05 crc kubenswrapper[4754]: I0105 20:34:05.036705 4754 scope.go:117] "RemoveContainer" containerID="baf7c634a969aa0a63f9c70b6ee70cbeb4db04e11becc8c7030f9850939268ee" Jan 05 20:34:05 crc kubenswrapper[4754]: I0105 20:34:05.036842 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 05 20:34:05 crc kubenswrapper[4754]: I0105 20:34:05.073556 4754 scope.go:117] "RemoveContainer" containerID="22582c6983c67f585a15f88e87e0eabd3d51825cd3e01611a2a4963a33af5825" Jan 05 20:34:05 crc kubenswrapper[4754]: I0105 20:34:05.079836 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 05 20:34:05 crc kubenswrapper[4754]: I0105 20:34:05.096967 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 05 20:34:05 crc kubenswrapper[4754]: I0105 20:34:05.120125 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Jan 05 20:34:05 crc kubenswrapper[4754]: E0105 20:34:05.120627 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efbb8662-26f9-44ac-aba4-ecc1fc6a4d58" containerName="setup-container" Jan 05 20:34:05 crc kubenswrapper[4754]: I0105 20:34:05.120643 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="efbb8662-26f9-44ac-aba4-ecc1fc6a4d58" containerName="setup-container" Jan 05 20:34:05 crc kubenswrapper[4754]: E0105 20:34:05.120667 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efbb8662-26f9-44ac-aba4-ecc1fc6a4d58" containerName="rabbitmq" Jan 05 20:34:05 crc kubenswrapper[4754]: I0105 20:34:05.120673 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="efbb8662-26f9-44ac-aba4-ecc1fc6a4d58" containerName="rabbitmq" Jan 05 20:34:05 crc kubenswrapper[4754]: I0105 20:34:05.120911 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="efbb8662-26f9-44ac-aba4-ecc1fc6a4d58" containerName="rabbitmq" Jan 05 20:34:05 crc kubenswrapper[4754]: I0105 20:34:05.122138 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 05 20:34:05 crc kubenswrapper[4754]: I0105 20:34:05.157862 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 05 20:34:05 crc kubenswrapper[4754]: I0105 20:34:05.239378 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4fb3d0a2-68b4-4224-8eeb-c9113f079684-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"4fb3d0a2-68b4-4224-8eeb-c9113f079684\") " pod="openstack/rabbitmq-server-1" Jan 05 20:34:05 crc kubenswrapper[4754]: I0105 20:34:05.239426 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4fb3d0a2-68b4-4224-8eeb-c9113f079684-pod-info\") pod \"rabbitmq-server-1\" (UID: \"4fb3d0a2-68b4-4224-8eeb-c9113f079684\") " pod="openstack/rabbitmq-server-1" Jan 05 20:34:05 crc kubenswrapper[4754]: I0105 20:34:05.239466 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4fb3d0a2-68b4-4224-8eeb-c9113f079684-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"4fb3d0a2-68b4-4224-8eeb-c9113f079684\") " pod="openstack/rabbitmq-server-1" Jan 05 20:34:05 crc kubenswrapper[4754]: I0105 20:34:05.239602 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4fb3d0a2-68b4-4224-8eeb-c9113f079684-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"4fb3d0a2-68b4-4224-8eeb-c9113f079684\") " pod="openstack/rabbitmq-server-1" Jan 05 20:34:05 crc kubenswrapper[4754]: I0105 20:34:05.239668 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/4fb3d0a2-68b4-4224-8eeb-c9113f079684-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"4fb3d0a2-68b4-4224-8eeb-c9113f079684\") " pod="openstack/rabbitmq-server-1" Jan 05 20:34:05 crc kubenswrapper[4754]: I0105 20:34:05.239804 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4fb3d0a2-68b4-4224-8eeb-c9113f079684-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"4fb3d0a2-68b4-4224-8eeb-c9113f079684\") " pod="openstack/rabbitmq-server-1" Jan 05 20:34:05 crc kubenswrapper[4754]: I0105 20:34:05.239940 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-88986084-b876-4fbb-b01c-4b8d50151b06\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-88986084-b876-4fbb-b01c-4b8d50151b06\") pod \"rabbitmq-server-1\" (UID: \"4fb3d0a2-68b4-4224-8eeb-c9113f079684\") " pod="openstack/rabbitmq-server-1" Jan 05 20:34:05 crc kubenswrapper[4754]: I0105 20:34:05.240059 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4fb3d0a2-68b4-4224-8eeb-c9113f079684-config-data\") pod \"rabbitmq-server-1\" (UID: \"4fb3d0a2-68b4-4224-8eeb-c9113f079684\") " pod="openstack/rabbitmq-server-1" Jan 05 20:34:05 crc kubenswrapper[4754]: I0105 20:34:05.240084 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4fb3d0a2-68b4-4224-8eeb-c9113f079684-server-conf\") pod \"rabbitmq-server-1\" (UID: \"4fb3d0a2-68b4-4224-8eeb-c9113f079684\") " pod="openstack/rabbitmq-server-1" Jan 05 20:34:05 crc kubenswrapper[4754]: I0105 20:34:05.240182 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/4fb3d0a2-68b4-4224-8eeb-c9113f079684-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"4fb3d0a2-68b4-4224-8eeb-c9113f079684\") " pod="openstack/rabbitmq-server-1" Jan 05 20:34:05 crc kubenswrapper[4754]: I0105 20:34:05.240237 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5mb5\" (UniqueName: \"kubernetes.io/projected/4fb3d0a2-68b4-4224-8eeb-c9113f079684-kube-api-access-q5mb5\") pod \"rabbitmq-server-1\" (UID: \"4fb3d0a2-68b4-4224-8eeb-c9113f079684\") " pod="openstack/rabbitmq-server-1" Jan 05 20:34:05 crc kubenswrapper[4754]: I0105 20:34:05.342058 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4fb3d0a2-68b4-4224-8eeb-c9113f079684-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"4fb3d0a2-68b4-4224-8eeb-c9113f079684\") " pod="openstack/rabbitmq-server-1" Jan 05 20:34:05 crc kubenswrapper[4754]: I0105 20:34:05.342124 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-88986084-b876-4fbb-b01c-4b8d50151b06\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-88986084-b876-4fbb-b01c-4b8d50151b06\") pod \"rabbitmq-server-1\" (UID: \"4fb3d0a2-68b4-4224-8eeb-c9113f079684\") " pod="openstack/rabbitmq-server-1" Jan 05 20:34:05 crc kubenswrapper[4754]: I0105 20:34:05.342171 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4fb3d0a2-68b4-4224-8eeb-c9113f079684-config-data\") pod \"rabbitmq-server-1\" (UID: \"4fb3d0a2-68b4-4224-8eeb-c9113f079684\") " pod="openstack/rabbitmq-server-1" Jan 05 20:34:05 crc kubenswrapper[4754]: I0105 20:34:05.342190 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4fb3d0a2-68b4-4224-8eeb-c9113f079684-server-conf\") pod 
\"rabbitmq-server-1\" (UID: \"4fb3d0a2-68b4-4224-8eeb-c9113f079684\") " pod="openstack/rabbitmq-server-1" Jan 05 20:34:05 crc kubenswrapper[4754]: I0105 20:34:05.342238 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4fb3d0a2-68b4-4224-8eeb-c9113f079684-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"4fb3d0a2-68b4-4224-8eeb-c9113f079684\") " pod="openstack/rabbitmq-server-1" Jan 05 20:34:05 crc kubenswrapper[4754]: I0105 20:34:05.342280 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5mb5\" (UniqueName: \"kubernetes.io/projected/4fb3d0a2-68b4-4224-8eeb-c9113f079684-kube-api-access-q5mb5\") pod \"rabbitmq-server-1\" (UID: \"4fb3d0a2-68b4-4224-8eeb-c9113f079684\") " pod="openstack/rabbitmq-server-1" Jan 05 20:34:05 crc kubenswrapper[4754]: I0105 20:34:05.342323 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4fb3d0a2-68b4-4224-8eeb-c9113f079684-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"4fb3d0a2-68b4-4224-8eeb-c9113f079684\") " pod="openstack/rabbitmq-server-1" Jan 05 20:34:05 crc kubenswrapper[4754]: I0105 20:34:05.342340 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4fb3d0a2-68b4-4224-8eeb-c9113f079684-pod-info\") pod \"rabbitmq-server-1\" (UID: \"4fb3d0a2-68b4-4224-8eeb-c9113f079684\") " pod="openstack/rabbitmq-server-1" Jan 05 20:34:05 crc kubenswrapper[4754]: I0105 20:34:05.342365 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4fb3d0a2-68b4-4224-8eeb-c9113f079684-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"4fb3d0a2-68b4-4224-8eeb-c9113f079684\") " pod="openstack/rabbitmq-server-1" Jan 05 20:34:05 crc kubenswrapper[4754]: I0105 
20:34:05.342404 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4fb3d0a2-68b4-4224-8eeb-c9113f079684-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"4fb3d0a2-68b4-4224-8eeb-c9113f079684\") " pod="openstack/rabbitmq-server-1" Jan 05 20:34:05 crc kubenswrapper[4754]: I0105 20:34:05.342440 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4fb3d0a2-68b4-4224-8eeb-c9113f079684-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"4fb3d0a2-68b4-4224-8eeb-c9113f079684\") " pod="openstack/rabbitmq-server-1" Jan 05 20:34:05 crc kubenswrapper[4754]: I0105 20:34:05.344260 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4fb3d0a2-68b4-4224-8eeb-c9113f079684-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"4fb3d0a2-68b4-4224-8eeb-c9113f079684\") " pod="openstack/rabbitmq-server-1" Jan 05 20:34:05 crc kubenswrapper[4754]: I0105 20:34:05.344468 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4fb3d0a2-68b4-4224-8eeb-c9113f079684-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"4fb3d0a2-68b4-4224-8eeb-c9113f079684\") " pod="openstack/rabbitmq-server-1" Jan 05 20:34:05 crc kubenswrapper[4754]: I0105 20:34:05.344582 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4fb3d0a2-68b4-4224-8eeb-c9113f079684-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"4fb3d0a2-68b4-4224-8eeb-c9113f079684\") " pod="openstack/rabbitmq-server-1" Jan 05 20:34:05 crc kubenswrapper[4754]: I0105 20:34:05.345239 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/4fb3d0a2-68b4-4224-8eeb-c9113f079684-config-data\") pod \"rabbitmq-server-1\" (UID: \"4fb3d0a2-68b4-4224-8eeb-c9113f079684\") " pod="openstack/rabbitmq-server-1" Jan 05 20:34:05 crc kubenswrapper[4754]: I0105 20:34:05.345527 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4fb3d0a2-68b4-4224-8eeb-c9113f079684-server-conf\") pod \"rabbitmq-server-1\" (UID: \"4fb3d0a2-68b4-4224-8eeb-c9113f079684\") " pod="openstack/rabbitmq-server-1" Jan 05 20:34:05 crc kubenswrapper[4754]: I0105 20:34:05.348890 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4fb3d0a2-68b4-4224-8eeb-c9113f079684-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"4fb3d0a2-68b4-4224-8eeb-c9113f079684\") " pod="openstack/rabbitmq-server-1" Jan 05 20:34:05 crc kubenswrapper[4754]: I0105 20:34:05.349329 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4fb3d0a2-68b4-4224-8eeb-c9113f079684-pod-info\") pod \"rabbitmq-server-1\" (UID: \"4fb3d0a2-68b4-4224-8eeb-c9113f079684\") " pod="openstack/rabbitmq-server-1" Jan 05 20:34:05 crc kubenswrapper[4754]: I0105 20:34:05.349458 4754 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 05 20:34:05 crc kubenswrapper[4754]: I0105 20:34:05.349492 4754 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-88986084-b876-4fbb-b01c-4b8d50151b06\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-88986084-b876-4fbb-b01c-4b8d50151b06\") pod \"rabbitmq-server-1\" (UID: \"4fb3d0a2-68b4-4224-8eeb-c9113f079684\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d6460ae131bef567a721ac9ced0be549876bcabc22a363ca8bce7a527cb91439/globalmount\"" pod="openstack/rabbitmq-server-1" Jan 05 20:34:05 crc kubenswrapper[4754]: I0105 20:34:05.353050 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4fb3d0a2-68b4-4224-8eeb-c9113f079684-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"4fb3d0a2-68b4-4224-8eeb-c9113f079684\") " pod="openstack/rabbitmq-server-1" Jan 05 20:34:05 crc kubenswrapper[4754]: I0105 20:34:05.361837 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4fb3d0a2-68b4-4224-8eeb-c9113f079684-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"4fb3d0a2-68b4-4224-8eeb-c9113f079684\") " pod="openstack/rabbitmq-server-1" Jan 05 20:34:05 crc kubenswrapper[4754]: I0105 20:34:05.370433 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5mb5\" (UniqueName: \"kubernetes.io/projected/4fb3d0a2-68b4-4224-8eeb-c9113f079684-kube-api-access-q5mb5\") pod \"rabbitmq-server-1\" (UID: \"4fb3d0a2-68b4-4224-8eeb-c9113f079684\") " pod="openstack/rabbitmq-server-1" Jan 05 20:34:05 crc kubenswrapper[4754]: I0105 20:34:05.419919 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-88986084-b876-4fbb-b01c-4b8d50151b06\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-88986084-b876-4fbb-b01c-4b8d50151b06\") pod \"rabbitmq-server-1\" (UID: 
\"4fb3d0a2-68b4-4224-8eeb-c9113f079684\") " pod="openstack/rabbitmq-server-1" Jan 05 20:34:05 crc kubenswrapper[4754]: I0105 20:34:05.463210 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 05 20:34:05 crc kubenswrapper[4754]: I0105 20:34:05.592954 4754 scope.go:117] "RemoveContainer" containerID="1ca93b3e456962380f07853ce3cbf0c6bfbaabb69bd3827ae3e68631c1bd1572" Jan 05 20:34:05 crc kubenswrapper[4754]: E0105 20:34:05.593561 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:34:05 crc kubenswrapper[4754]: I0105 20:34:05.679673 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efbb8662-26f9-44ac-aba4-ecc1fc6a4d58" path="/var/lib/kubelet/pods/efbb8662-26f9-44ac-aba4-ecc1fc6a4d58/volumes" Jan 05 20:34:05 crc kubenswrapper[4754]: I0105 20:34:05.798754 4754 scope.go:117] "RemoveContainer" containerID="4a49a1ce2215b4b42517c48b3b2f14c617dc624019d8df4ca8197b7a297bc47e" Jan 05 20:34:05 crc kubenswrapper[4754]: I0105 20:34:05.874122 4754 scope.go:117] "RemoveContainer" containerID="659c676cb6539d7b1f9c0bd1508b3423e26ff4a1e9b2746c68673bbf2d09a935" Jan 05 20:34:05 crc kubenswrapper[4754]: I0105 20:34:05.932416 4754 scope.go:117] "RemoveContainer" containerID="1a5fe86202b5f228970f16f8a69574c04c19c57c1ba932d6380d4760d82170e4" Jan 05 20:34:05 crc kubenswrapper[4754]: W0105 20:34:05.974925 4754 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fb3d0a2_68b4_4224_8eeb_c9113f079684.slice/crio-6785cd4f07f4d25737025be17523b8f59b87cfe15e150c97ae0e6466f77c7772 WatchSource:0}: Error finding container 6785cd4f07f4d25737025be17523b8f59b87cfe15e150c97ae0e6466f77c7772: Status 404 returned error can't find the container with id 6785cd4f07f4d25737025be17523b8f59b87cfe15e150c97ae0e6466f77c7772 Jan 05 20:34:05 crc kubenswrapper[4754]: I0105 20:34:05.975285 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 05 20:34:06 crc kubenswrapper[4754]: I0105 20:34:06.055643 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"4fb3d0a2-68b4-4224-8eeb-c9113f079684","Type":"ContainerStarted","Data":"6785cd4f07f4d25737025be17523b8f59b87cfe15e150c97ae0e6466f77c7772"} Jan 05 20:34:06 crc kubenswrapper[4754]: I0105 20:34:06.097925 4754 scope.go:117] "RemoveContainer" containerID="7290e7e5847d9ee55afbfde61ff5b35b1e3d5390ffca00f2e1f5090f6ffb1751" Jan 05 20:34:06 crc kubenswrapper[4754]: I0105 20:34:06.134224 4754 scope.go:117] "RemoveContainer" containerID="a87dc4f9e8f81b266f0482049a791b59502669a8ce1385fb1716e709e1265e7f" Jan 05 20:34:08 crc kubenswrapper[4754]: I0105 20:34:08.090639 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"4fb3d0a2-68b4-4224-8eeb-c9113f079684","Type":"ContainerStarted","Data":"61b626ccf8dc9311fc03c4d22a7aee85d7d627d4741647746e154bca673928d3"} Jan 05 20:34:19 crc kubenswrapper[4754]: I0105 20:34:19.588841 4754 scope.go:117] "RemoveContainer" containerID="1ca93b3e456962380f07853ce3cbf0c6bfbaabb69bd3827ae3e68631c1bd1572" Jan 05 20:34:19 crc kubenswrapper[4754]: E0105 20:34:19.589733 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:34:30 crc kubenswrapper[4754]: I0105 20:34:30.588869 4754 scope.go:117] "RemoveContainer" containerID="1ca93b3e456962380f07853ce3cbf0c6bfbaabb69bd3827ae3e68631c1bd1572" Jan 05 20:34:30 crc kubenswrapper[4754]: E0105 20:34:30.589549 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:34:39 crc kubenswrapper[4754]: I0105 20:34:39.868630 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jj62m"] Jan 05 20:34:39 crc kubenswrapper[4754]: I0105 20:34:39.873993 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jj62m" Jan 05 20:34:39 crc kubenswrapper[4754]: I0105 20:34:39.889722 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jj62m"] Jan 05 20:34:40 crc kubenswrapper[4754]: I0105 20:34:40.037385 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3480441f-0f65-43f5-93aa-798a5011e71d-utilities\") pod \"certified-operators-jj62m\" (UID: \"3480441f-0f65-43f5-93aa-798a5011e71d\") " pod="openshift-marketplace/certified-operators-jj62m" Jan 05 20:34:40 crc kubenswrapper[4754]: I0105 20:34:40.037796 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3480441f-0f65-43f5-93aa-798a5011e71d-catalog-content\") pod \"certified-operators-jj62m\" (UID: \"3480441f-0f65-43f5-93aa-798a5011e71d\") " pod="openshift-marketplace/certified-operators-jj62m" Jan 05 20:34:40 crc kubenswrapper[4754]: I0105 20:34:40.038080 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfwjt\" (UniqueName: \"kubernetes.io/projected/3480441f-0f65-43f5-93aa-798a5011e71d-kube-api-access-dfwjt\") pod \"certified-operators-jj62m\" (UID: \"3480441f-0f65-43f5-93aa-798a5011e71d\") " pod="openshift-marketplace/certified-operators-jj62m" Jan 05 20:34:40 crc kubenswrapper[4754]: I0105 20:34:40.140892 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3480441f-0f65-43f5-93aa-798a5011e71d-utilities\") pod \"certified-operators-jj62m\" (UID: \"3480441f-0f65-43f5-93aa-798a5011e71d\") " pod="openshift-marketplace/certified-operators-jj62m" Jan 05 20:34:40 crc kubenswrapper[4754]: I0105 20:34:40.141372 4754 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3480441f-0f65-43f5-93aa-798a5011e71d-catalog-content\") pod \"certified-operators-jj62m\" (UID: \"3480441f-0f65-43f5-93aa-798a5011e71d\") " pod="openshift-marketplace/certified-operators-jj62m" Jan 05 20:34:40 crc kubenswrapper[4754]: I0105 20:34:40.141573 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfwjt\" (UniqueName: \"kubernetes.io/projected/3480441f-0f65-43f5-93aa-798a5011e71d-kube-api-access-dfwjt\") pod \"certified-operators-jj62m\" (UID: \"3480441f-0f65-43f5-93aa-798a5011e71d\") " pod="openshift-marketplace/certified-operators-jj62m" Jan 05 20:34:40 crc kubenswrapper[4754]: I0105 20:34:40.141614 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3480441f-0f65-43f5-93aa-798a5011e71d-utilities\") pod \"certified-operators-jj62m\" (UID: \"3480441f-0f65-43f5-93aa-798a5011e71d\") " pod="openshift-marketplace/certified-operators-jj62m" Jan 05 20:34:40 crc kubenswrapper[4754]: I0105 20:34:40.142735 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3480441f-0f65-43f5-93aa-798a5011e71d-catalog-content\") pod \"certified-operators-jj62m\" (UID: \"3480441f-0f65-43f5-93aa-798a5011e71d\") " pod="openshift-marketplace/certified-operators-jj62m" Jan 05 20:34:40 crc kubenswrapper[4754]: I0105 20:34:40.175477 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfwjt\" (UniqueName: \"kubernetes.io/projected/3480441f-0f65-43f5-93aa-798a5011e71d-kube-api-access-dfwjt\") pod \"certified-operators-jj62m\" (UID: \"3480441f-0f65-43f5-93aa-798a5011e71d\") " pod="openshift-marketplace/certified-operators-jj62m" Jan 05 20:34:40 crc kubenswrapper[4754]: I0105 20:34:40.215514 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jj62m" Jan 05 20:34:40 crc kubenswrapper[4754]: I0105 20:34:40.718189 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jj62m"] Jan 05 20:34:41 crc kubenswrapper[4754]: I0105 20:34:41.583483 4754 generic.go:334] "Generic (PLEG): container finished" podID="3480441f-0f65-43f5-93aa-798a5011e71d" containerID="dfbaeedf6d1dc3e6c24bd65f081611134064145f6218d349913eb17e429ff884" exitCode=0 Jan 05 20:34:41 crc kubenswrapper[4754]: I0105 20:34:41.583560 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jj62m" event={"ID":"3480441f-0f65-43f5-93aa-798a5011e71d","Type":"ContainerDied","Data":"dfbaeedf6d1dc3e6c24bd65f081611134064145f6218d349913eb17e429ff884"} Jan 05 20:34:41 crc kubenswrapper[4754]: I0105 20:34:41.584721 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jj62m" event={"ID":"3480441f-0f65-43f5-93aa-798a5011e71d","Type":"ContainerStarted","Data":"8028f8303a615a39afd3e17b6b6af708c2727997297ea2c80da9720a57546d82"} Jan 05 20:34:41 crc kubenswrapper[4754]: I0105 20:34:41.586573 4754 generic.go:334] "Generic (PLEG): container finished" podID="4fb3d0a2-68b4-4224-8eeb-c9113f079684" containerID="61b626ccf8dc9311fc03c4d22a7aee85d7d627d4741647746e154bca673928d3" exitCode=0 Jan 05 20:34:41 crc kubenswrapper[4754]: I0105 20:34:41.586607 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"4fb3d0a2-68b4-4224-8eeb-c9113f079684","Type":"ContainerDied","Data":"61b626ccf8dc9311fc03c4d22a7aee85d7d627d4741647746e154bca673928d3"} Jan 05 20:34:42 crc kubenswrapper[4754]: I0105 20:34:42.589197 4754 scope.go:117] "RemoveContainer" containerID="1ca93b3e456962380f07853ce3cbf0c6bfbaabb69bd3827ae3e68631c1bd1572" Jan 05 20:34:42 crc kubenswrapper[4754]: E0105 20:34:42.589977 4754 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:34:42 crc kubenswrapper[4754]: I0105 20:34:42.604576 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"4fb3d0a2-68b4-4224-8eeb-c9113f079684","Type":"ContainerStarted","Data":"5bac4044ea1696061c67f602200997019d344e20c4be071a711e3252b9045d0e"} Jan 05 20:34:42 crc kubenswrapper[4754]: I0105 20:34:42.604829 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Jan 05 20:34:42 crc kubenswrapper[4754]: I0105 20:34:42.656415 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=37.656386637 podStartE2EDuration="37.656386637s" podCreationTimestamp="2026-01-05 20:34:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:34:42.642075692 +0000 UTC m=+1769.351259566" watchObservedRunningTime="2026-01-05 20:34:42.656386637 +0000 UTC m=+1769.365570541" Jan 05 20:34:44 crc kubenswrapper[4754]: I0105 20:34:44.626893 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jj62m" event={"ID":"3480441f-0f65-43f5-93aa-798a5011e71d","Type":"ContainerStarted","Data":"140afb8e1500d083e3157d4450ecdc0165ddc8c2175caa6c123434d85a425fed"} Jan 05 20:34:45 crc kubenswrapper[4754]: I0105 20:34:45.641192 4754 generic.go:334] "Generic (PLEG): container finished" podID="3480441f-0f65-43f5-93aa-798a5011e71d" containerID="140afb8e1500d083e3157d4450ecdc0165ddc8c2175caa6c123434d85a425fed" exitCode=0 Jan 
05 20:34:45 crc kubenswrapper[4754]: I0105 20:34:45.641338 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jj62m" event={"ID":"3480441f-0f65-43f5-93aa-798a5011e71d","Type":"ContainerDied","Data":"140afb8e1500d083e3157d4450ecdc0165ddc8c2175caa6c123434d85a425fed"} Jan 05 20:34:47 crc kubenswrapper[4754]: I0105 20:34:47.668849 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jj62m" event={"ID":"3480441f-0f65-43f5-93aa-798a5011e71d","Type":"ContainerStarted","Data":"c2219f39b5e77a4f3959c43278190db825324aba3cb728eb98c692f2338d1c9d"} Jan 05 20:34:47 crc kubenswrapper[4754]: I0105 20:34:47.703389 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jj62m" podStartSLOduration=3.533505531 podStartE2EDuration="8.703369628s" podCreationTimestamp="2026-01-05 20:34:39 +0000 UTC" firstStartedPulling="2026-01-05 20:34:41.585413506 +0000 UTC m=+1768.294597380" lastFinishedPulling="2026-01-05 20:34:46.755277563 +0000 UTC m=+1773.464461477" observedRunningTime="2026-01-05 20:34:47.696906408 +0000 UTC m=+1774.406090322" watchObservedRunningTime="2026-01-05 20:34:47.703369628 +0000 UTC m=+1774.412553502" Jan 05 20:34:50 crc kubenswrapper[4754]: I0105 20:34:50.216074 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jj62m" Jan 05 20:34:50 crc kubenswrapper[4754]: I0105 20:34:50.216906 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jj62m" Jan 05 20:34:51 crc kubenswrapper[4754]: I0105 20:34:51.315651 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-jj62m" podUID="3480441f-0f65-43f5-93aa-798a5011e71d" containerName="registry-server" probeResult="failure" output=< Jan 05 20:34:51 crc kubenswrapper[4754]: timeout: failed to 
connect service ":50051" within 1s Jan 05 20:34:51 crc kubenswrapper[4754]: > Jan 05 20:34:55 crc kubenswrapper[4754]: I0105 20:34:55.466471 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Jan 05 20:34:55 crc kubenswrapper[4754]: I0105 20:34:55.534467 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 05 20:34:57 crc kubenswrapper[4754]: I0105 20:34:57.588360 4754 scope.go:117] "RemoveContainer" containerID="1ca93b3e456962380f07853ce3cbf0c6bfbaabb69bd3827ae3e68631c1bd1572" Jan 05 20:34:57 crc kubenswrapper[4754]: E0105 20:34:57.589249 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:35:00 crc kubenswrapper[4754]: I0105 20:35:00.136252 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="909351bf-3608-40e6-9f93-bffa1ed74945" containerName="rabbitmq" containerID="cri-o://6d0c998419d882ab7c93d200d484f1df56fa3233f2dbfda4fa79621e1471f872" gracePeriod=604796 Jan 05 20:35:00 crc kubenswrapper[4754]: I0105 20:35:00.267008 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jj62m" Jan 05 20:35:00 crc kubenswrapper[4754]: I0105 20:35:00.317715 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jj62m" Jan 05 20:35:00 crc kubenswrapper[4754]: I0105 20:35:00.504141 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jj62m"] Jan 05 20:35:01 crc 
kubenswrapper[4754]: I0105 20:35:01.880096 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jj62m" podUID="3480441f-0f65-43f5-93aa-798a5011e71d" containerName="registry-server" containerID="cri-o://c2219f39b5e77a4f3959c43278190db825324aba3cb728eb98c692f2338d1c9d" gracePeriod=2 Jan 05 20:35:02 crc kubenswrapper[4754]: I0105 20:35:02.482417 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jj62m" Jan 05 20:35:02 crc kubenswrapper[4754]: I0105 20:35:02.643023 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3480441f-0f65-43f5-93aa-798a5011e71d-utilities\") pod \"3480441f-0f65-43f5-93aa-798a5011e71d\" (UID: \"3480441f-0f65-43f5-93aa-798a5011e71d\") " Jan 05 20:35:02 crc kubenswrapper[4754]: I0105 20:35:02.643217 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3480441f-0f65-43f5-93aa-798a5011e71d-catalog-content\") pod \"3480441f-0f65-43f5-93aa-798a5011e71d\" (UID: \"3480441f-0f65-43f5-93aa-798a5011e71d\") " Jan 05 20:35:02 crc kubenswrapper[4754]: I0105 20:35:02.643262 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfwjt\" (UniqueName: \"kubernetes.io/projected/3480441f-0f65-43f5-93aa-798a5011e71d-kube-api-access-dfwjt\") pod \"3480441f-0f65-43f5-93aa-798a5011e71d\" (UID: \"3480441f-0f65-43f5-93aa-798a5011e71d\") " Jan 05 20:35:02 crc kubenswrapper[4754]: I0105 20:35:02.643771 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3480441f-0f65-43f5-93aa-798a5011e71d-utilities" (OuterVolumeSpecName: "utilities") pod "3480441f-0f65-43f5-93aa-798a5011e71d" (UID: "3480441f-0f65-43f5-93aa-798a5011e71d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:35:02 crc kubenswrapper[4754]: I0105 20:35:02.644006 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3480441f-0f65-43f5-93aa-798a5011e71d-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 20:35:02 crc kubenswrapper[4754]: I0105 20:35:02.648261 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3480441f-0f65-43f5-93aa-798a5011e71d-kube-api-access-dfwjt" (OuterVolumeSpecName: "kube-api-access-dfwjt") pod "3480441f-0f65-43f5-93aa-798a5011e71d" (UID: "3480441f-0f65-43f5-93aa-798a5011e71d"). InnerVolumeSpecName "kube-api-access-dfwjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:35:02 crc kubenswrapper[4754]: I0105 20:35:02.711671 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3480441f-0f65-43f5-93aa-798a5011e71d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3480441f-0f65-43f5-93aa-798a5011e71d" (UID: "3480441f-0f65-43f5-93aa-798a5011e71d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:35:02 crc kubenswrapper[4754]: I0105 20:35:02.746499 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3480441f-0f65-43f5-93aa-798a5011e71d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 20:35:02 crc kubenswrapper[4754]: I0105 20:35:02.746533 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfwjt\" (UniqueName: \"kubernetes.io/projected/3480441f-0f65-43f5-93aa-798a5011e71d-kube-api-access-dfwjt\") on node \"crc\" DevicePath \"\"" Jan 05 20:35:02 crc kubenswrapper[4754]: I0105 20:35:02.891915 4754 generic.go:334] "Generic (PLEG): container finished" podID="3480441f-0f65-43f5-93aa-798a5011e71d" containerID="c2219f39b5e77a4f3959c43278190db825324aba3cb728eb98c692f2338d1c9d" exitCode=0 Jan 05 20:35:02 crc kubenswrapper[4754]: I0105 20:35:02.891961 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jj62m" event={"ID":"3480441f-0f65-43f5-93aa-798a5011e71d","Type":"ContainerDied","Data":"c2219f39b5e77a4f3959c43278190db825324aba3cb728eb98c692f2338d1c9d"} Jan 05 20:35:02 crc kubenswrapper[4754]: I0105 20:35:02.891998 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jj62m" event={"ID":"3480441f-0f65-43f5-93aa-798a5011e71d","Type":"ContainerDied","Data":"8028f8303a615a39afd3e17b6b6af708c2727997297ea2c80da9720a57546d82"} Jan 05 20:35:02 crc kubenswrapper[4754]: I0105 20:35:02.892016 4754 scope.go:117] "RemoveContainer" containerID="c2219f39b5e77a4f3959c43278190db825324aba3cb728eb98c692f2338d1c9d" Jan 05 20:35:02 crc kubenswrapper[4754]: I0105 20:35:02.892120 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jj62m" Jan 05 20:35:02 crc kubenswrapper[4754]: I0105 20:35:02.930505 4754 scope.go:117] "RemoveContainer" containerID="140afb8e1500d083e3157d4450ecdc0165ddc8c2175caa6c123434d85a425fed" Jan 05 20:35:02 crc kubenswrapper[4754]: I0105 20:35:02.937456 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jj62m"] Jan 05 20:35:02 crc kubenswrapper[4754]: I0105 20:35:02.973006 4754 scope.go:117] "RemoveContainer" containerID="dfbaeedf6d1dc3e6c24bd65f081611134064145f6218d349913eb17e429ff884" Jan 05 20:35:02 crc kubenswrapper[4754]: I0105 20:35:02.976248 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jj62m"] Jan 05 20:35:03 crc kubenswrapper[4754]: I0105 20:35:03.022405 4754 scope.go:117] "RemoveContainer" containerID="c2219f39b5e77a4f3959c43278190db825324aba3cb728eb98c692f2338d1c9d" Jan 05 20:35:03 crc kubenswrapper[4754]: E0105 20:35:03.022962 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2219f39b5e77a4f3959c43278190db825324aba3cb728eb98c692f2338d1c9d\": container with ID starting with c2219f39b5e77a4f3959c43278190db825324aba3cb728eb98c692f2338d1c9d not found: ID does not exist" containerID="c2219f39b5e77a4f3959c43278190db825324aba3cb728eb98c692f2338d1c9d" Jan 05 20:35:03 crc kubenswrapper[4754]: I0105 20:35:03.023034 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2219f39b5e77a4f3959c43278190db825324aba3cb728eb98c692f2338d1c9d"} err="failed to get container status \"c2219f39b5e77a4f3959c43278190db825324aba3cb728eb98c692f2338d1c9d\": rpc error: code = NotFound desc = could not find container \"c2219f39b5e77a4f3959c43278190db825324aba3cb728eb98c692f2338d1c9d\": container with ID starting with c2219f39b5e77a4f3959c43278190db825324aba3cb728eb98c692f2338d1c9d not 
found: ID does not exist" Jan 05 20:35:03 crc kubenswrapper[4754]: I0105 20:35:03.023068 4754 scope.go:117] "RemoveContainer" containerID="140afb8e1500d083e3157d4450ecdc0165ddc8c2175caa6c123434d85a425fed" Jan 05 20:35:03 crc kubenswrapper[4754]: E0105 20:35:03.023402 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"140afb8e1500d083e3157d4450ecdc0165ddc8c2175caa6c123434d85a425fed\": container with ID starting with 140afb8e1500d083e3157d4450ecdc0165ddc8c2175caa6c123434d85a425fed not found: ID does not exist" containerID="140afb8e1500d083e3157d4450ecdc0165ddc8c2175caa6c123434d85a425fed" Jan 05 20:35:03 crc kubenswrapper[4754]: I0105 20:35:03.023453 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"140afb8e1500d083e3157d4450ecdc0165ddc8c2175caa6c123434d85a425fed"} err="failed to get container status \"140afb8e1500d083e3157d4450ecdc0165ddc8c2175caa6c123434d85a425fed\": rpc error: code = NotFound desc = could not find container \"140afb8e1500d083e3157d4450ecdc0165ddc8c2175caa6c123434d85a425fed\": container with ID starting with 140afb8e1500d083e3157d4450ecdc0165ddc8c2175caa6c123434d85a425fed not found: ID does not exist" Jan 05 20:35:03 crc kubenswrapper[4754]: I0105 20:35:03.023482 4754 scope.go:117] "RemoveContainer" containerID="dfbaeedf6d1dc3e6c24bd65f081611134064145f6218d349913eb17e429ff884" Jan 05 20:35:03 crc kubenswrapper[4754]: E0105 20:35:03.023811 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfbaeedf6d1dc3e6c24bd65f081611134064145f6218d349913eb17e429ff884\": container with ID starting with dfbaeedf6d1dc3e6c24bd65f081611134064145f6218d349913eb17e429ff884 not found: ID does not exist" containerID="dfbaeedf6d1dc3e6c24bd65f081611134064145f6218d349913eb17e429ff884" Jan 05 20:35:03 crc kubenswrapper[4754]: I0105 20:35:03.023837 4754 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfbaeedf6d1dc3e6c24bd65f081611134064145f6218d349913eb17e429ff884"} err="failed to get container status \"dfbaeedf6d1dc3e6c24bd65f081611134064145f6218d349913eb17e429ff884\": rpc error: code = NotFound desc = could not find container \"dfbaeedf6d1dc3e6c24bd65f081611134064145f6218d349913eb17e429ff884\": container with ID starting with dfbaeedf6d1dc3e6c24bd65f081611134064145f6218d349913eb17e429ff884 not found: ID does not exist" Jan 05 20:35:03 crc kubenswrapper[4754]: I0105 20:35:03.604817 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3480441f-0f65-43f5-93aa-798a5011e71d" path="/var/lib/kubelet/pods/3480441f-0f65-43f5-93aa-798a5011e71d/volumes" Jan 05 20:35:06 crc kubenswrapper[4754]: I0105 20:35:06.389816 4754 scope.go:117] "RemoveContainer" containerID="40f0c71f38136a569a074d31131ae837769b6614ed31098d2001a5fa7a9aea06" Jan 05 20:35:06 crc kubenswrapper[4754]: I0105 20:35:06.861213 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 05 20:35:06 crc kubenswrapper[4754]: I0105 20:35:06.940903 4754 generic.go:334] "Generic (PLEG): container finished" podID="909351bf-3608-40e6-9f93-bffa1ed74945" containerID="6d0c998419d882ab7c93d200d484f1df56fa3233f2dbfda4fa79621e1471f872" exitCode=0 Jan 05 20:35:06 crc kubenswrapper[4754]: I0105 20:35:06.940953 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"909351bf-3608-40e6-9f93-bffa1ed74945","Type":"ContainerDied","Data":"6d0c998419d882ab7c93d200d484f1df56fa3233f2dbfda4fa79621e1471f872"} Jan 05 20:35:06 crc kubenswrapper[4754]: I0105 20:35:06.940989 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"909351bf-3608-40e6-9f93-bffa1ed74945","Type":"ContainerDied","Data":"6fb20690086c22ffc422e7cea87d901e41e2a6385a933e90ef1fbb724ba6a7ad"} Jan 05 20:35:06 crc kubenswrapper[4754]: I0105 20:35:06.940988 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 05 20:35:06 crc kubenswrapper[4754]: I0105 20:35:06.941008 4754 scope.go:117] "RemoveContainer" containerID="6d0c998419d882ab7c93d200d484f1df56fa3233f2dbfda4fa79621e1471f872" Jan 05 20:35:06 crc kubenswrapper[4754]: I0105 20:35:06.957766 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/909351bf-3608-40e6-9f93-bffa1ed74945-rabbitmq-tls\") pod \"909351bf-3608-40e6-9f93-bffa1ed74945\" (UID: \"909351bf-3608-40e6-9f93-bffa1ed74945\") " Jan 05 20:35:06 crc kubenswrapper[4754]: I0105 20:35:06.958265 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8af03a1a-c755-48d5-8d0d-6a6cae0c9606\") pod \"909351bf-3608-40e6-9f93-bffa1ed74945\" (UID: \"909351bf-3608-40e6-9f93-bffa1ed74945\") " Jan 05 20:35:06 crc kubenswrapper[4754]: I0105 20:35:06.958324 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/909351bf-3608-40e6-9f93-bffa1ed74945-server-conf\") pod \"909351bf-3608-40e6-9f93-bffa1ed74945\" (UID: \"909351bf-3608-40e6-9f93-bffa1ed74945\") " Jan 05 20:35:06 crc kubenswrapper[4754]: I0105 20:35:06.958415 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/909351bf-3608-40e6-9f93-bffa1ed74945-rabbitmq-confd\") pod \"909351bf-3608-40e6-9f93-bffa1ed74945\" (UID: \"909351bf-3608-40e6-9f93-bffa1ed74945\") " Jan 05 20:35:06 crc kubenswrapper[4754]: I0105 20:35:06.958581 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/909351bf-3608-40e6-9f93-bffa1ed74945-plugins-conf\") pod \"909351bf-3608-40e6-9f93-bffa1ed74945\" (UID: 
\"909351bf-3608-40e6-9f93-bffa1ed74945\") " Jan 05 20:35:06 crc kubenswrapper[4754]: I0105 20:35:06.958601 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gt7t6\" (UniqueName: \"kubernetes.io/projected/909351bf-3608-40e6-9f93-bffa1ed74945-kube-api-access-gt7t6\") pod \"909351bf-3608-40e6-9f93-bffa1ed74945\" (UID: \"909351bf-3608-40e6-9f93-bffa1ed74945\") " Jan 05 20:35:06 crc kubenswrapper[4754]: I0105 20:35:06.958624 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/909351bf-3608-40e6-9f93-bffa1ed74945-pod-info\") pod \"909351bf-3608-40e6-9f93-bffa1ed74945\" (UID: \"909351bf-3608-40e6-9f93-bffa1ed74945\") " Jan 05 20:35:06 crc kubenswrapper[4754]: I0105 20:35:06.958654 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/909351bf-3608-40e6-9f93-bffa1ed74945-erlang-cookie-secret\") pod \"909351bf-3608-40e6-9f93-bffa1ed74945\" (UID: \"909351bf-3608-40e6-9f93-bffa1ed74945\") " Jan 05 20:35:06 crc kubenswrapper[4754]: I0105 20:35:06.958678 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/909351bf-3608-40e6-9f93-bffa1ed74945-rabbitmq-plugins\") pod \"909351bf-3608-40e6-9f93-bffa1ed74945\" (UID: \"909351bf-3608-40e6-9f93-bffa1ed74945\") " Jan 05 20:35:06 crc kubenswrapper[4754]: I0105 20:35:06.958698 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/909351bf-3608-40e6-9f93-bffa1ed74945-rabbitmq-erlang-cookie\") pod \"909351bf-3608-40e6-9f93-bffa1ed74945\" (UID: \"909351bf-3608-40e6-9f93-bffa1ed74945\") " Jan 05 20:35:06 crc kubenswrapper[4754]: I0105 20:35:06.958720 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/909351bf-3608-40e6-9f93-bffa1ed74945-config-data\") pod \"909351bf-3608-40e6-9f93-bffa1ed74945\" (UID: \"909351bf-3608-40e6-9f93-bffa1ed74945\") " Jan 05 20:35:06 crc kubenswrapper[4754]: I0105 20:35:06.959041 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/909351bf-3608-40e6-9f93-bffa1ed74945-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "909351bf-3608-40e6-9f93-bffa1ed74945" (UID: "909351bf-3608-40e6-9f93-bffa1ed74945"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:35:06 crc kubenswrapper[4754]: I0105 20:35:06.959425 4754 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/909351bf-3608-40e6-9f93-bffa1ed74945-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 05 20:35:06 crc kubenswrapper[4754]: I0105 20:35:06.960233 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/909351bf-3608-40e6-9f93-bffa1ed74945-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "909351bf-3608-40e6-9f93-bffa1ed74945" (UID: "909351bf-3608-40e6-9f93-bffa1ed74945"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:35:06 crc kubenswrapper[4754]: I0105 20:35:06.961051 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/909351bf-3608-40e6-9f93-bffa1ed74945-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "909351bf-3608-40e6-9f93-bffa1ed74945" (UID: "909351bf-3608-40e6-9f93-bffa1ed74945"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:35:06 crc kubenswrapper[4754]: I0105 20:35:06.976316 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/909351bf-3608-40e6-9f93-bffa1ed74945-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "909351bf-3608-40e6-9f93-bffa1ed74945" (UID: "909351bf-3608-40e6-9f93-bffa1ed74945"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:35:06 crc kubenswrapper[4754]: I0105 20:35:06.977138 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/909351bf-3608-40e6-9f93-bffa1ed74945-kube-api-access-gt7t6" (OuterVolumeSpecName: "kube-api-access-gt7t6") pod "909351bf-3608-40e6-9f93-bffa1ed74945" (UID: "909351bf-3608-40e6-9f93-bffa1ed74945"). InnerVolumeSpecName "kube-api-access-gt7t6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:35:06 crc kubenswrapper[4754]: I0105 20:35:06.978601 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/909351bf-3608-40e6-9f93-bffa1ed74945-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "909351bf-3608-40e6-9f93-bffa1ed74945" (UID: "909351bf-3608-40e6-9f93-bffa1ed74945"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:35:06 crc kubenswrapper[4754]: I0105 20:35:06.978700 4754 scope.go:117] "RemoveContainer" containerID="3aef396182a6b55f6003bca4aaf377227a2c801a262a24daa8913c3e1966b602" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:06.984795 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/909351bf-3608-40e6-9f93-bffa1ed74945-pod-info" (OuterVolumeSpecName: "pod-info") pod "909351bf-3608-40e6-9f93-bffa1ed74945" (UID: "909351bf-3608-40e6-9f93-bffa1ed74945"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.030887 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8af03a1a-c755-48d5-8d0d-6a6cae0c9606" (OuterVolumeSpecName: "persistence") pod "909351bf-3608-40e6-9f93-bffa1ed74945" (UID: "909351bf-3608-40e6-9f93-bffa1ed74945"). InnerVolumeSpecName "pvc-8af03a1a-c755-48d5-8d0d-6a6cae0c9606". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.036157 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/909351bf-3608-40e6-9f93-bffa1ed74945-config-data" (OuterVolumeSpecName: "config-data") pod "909351bf-3608-40e6-9f93-bffa1ed74945" (UID: "909351bf-3608-40e6-9f93-bffa1ed74945"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.063375 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gt7t6\" (UniqueName: \"kubernetes.io/projected/909351bf-3608-40e6-9f93-bffa1ed74945-kube-api-access-gt7t6\") on node \"crc\" DevicePath \"\"" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.063432 4754 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/909351bf-3608-40e6-9f93-bffa1ed74945-pod-info\") on node \"crc\" DevicePath \"\"" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.063445 4754 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/909351bf-3608-40e6-9f93-bffa1ed74945-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.063456 4754 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/909351bf-3608-40e6-9f93-bffa1ed74945-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.063470 4754 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/909351bf-3608-40e6-9f93-bffa1ed74945-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.063482 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/909351bf-3608-40e6-9f93-bffa1ed74945-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.063497 4754 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/909351bf-3608-40e6-9f93-bffa1ed74945-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.063538 4754 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-8af03a1a-c755-48d5-8d0d-6a6cae0c9606\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8af03a1a-c755-48d5-8d0d-6a6cae0c9606\") on node \"crc\" " Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.072221 4754 scope.go:117] "RemoveContainer" containerID="6d0c998419d882ab7c93d200d484f1df56fa3233f2dbfda4fa79621e1471f872" Jan 05 20:35:07 crc kubenswrapper[4754]: E0105 20:35:07.072764 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d0c998419d882ab7c93d200d484f1df56fa3233f2dbfda4fa79621e1471f872\": container with ID starting with 6d0c998419d882ab7c93d200d484f1df56fa3233f2dbfda4fa79621e1471f872 not found: ID does not exist" containerID="6d0c998419d882ab7c93d200d484f1df56fa3233f2dbfda4fa79621e1471f872" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.072795 4754 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d0c998419d882ab7c93d200d484f1df56fa3233f2dbfda4fa79621e1471f872"} err="failed to get container status \"6d0c998419d882ab7c93d200d484f1df56fa3233f2dbfda4fa79621e1471f872\": rpc error: code = NotFound desc = could not find container \"6d0c998419d882ab7c93d200d484f1df56fa3233f2dbfda4fa79621e1471f872\": container with ID starting with 6d0c998419d882ab7c93d200d484f1df56fa3233f2dbfda4fa79621e1471f872 not found: ID does not exist" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.072818 4754 scope.go:117] "RemoveContainer" containerID="3aef396182a6b55f6003bca4aaf377227a2c801a262a24daa8913c3e1966b602" Jan 05 20:35:07 crc kubenswrapper[4754]: E0105 20:35:07.073324 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3aef396182a6b55f6003bca4aaf377227a2c801a262a24daa8913c3e1966b602\": container with ID starting with 3aef396182a6b55f6003bca4aaf377227a2c801a262a24daa8913c3e1966b602 not found: ID does not exist" containerID="3aef396182a6b55f6003bca4aaf377227a2c801a262a24daa8913c3e1966b602" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.073364 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3aef396182a6b55f6003bca4aaf377227a2c801a262a24daa8913c3e1966b602"} err="failed to get container status \"3aef396182a6b55f6003bca4aaf377227a2c801a262a24daa8913c3e1966b602\": rpc error: code = NotFound desc = could not find container \"3aef396182a6b55f6003bca4aaf377227a2c801a262a24daa8913c3e1966b602\": container with ID starting with 3aef396182a6b55f6003bca4aaf377227a2c801a262a24daa8913c3e1966b602 not found: ID does not exist" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.084521 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/909351bf-3608-40e6-9f93-bffa1ed74945-server-conf" (OuterVolumeSpecName: "server-conf") pod 
"909351bf-3608-40e6-9f93-bffa1ed74945" (UID: "909351bf-3608-40e6-9f93-bffa1ed74945"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.121784 4754 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.122076 4754 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-8af03a1a-c755-48d5-8d0d-6a6cae0c9606" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8af03a1a-c755-48d5-8d0d-6a6cae0c9606") on node "crc" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.158546 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/909351bf-3608-40e6-9f93-bffa1ed74945-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "909351bf-3608-40e6-9f93-bffa1ed74945" (UID: "909351bf-3608-40e6-9f93-bffa1ed74945"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.166349 4754 reconciler_common.go:293] "Volume detached for volume \"pvc-8af03a1a-c755-48d5-8d0d-6a6cae0c9606\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8af03a1a-c755-48d5-8d0d-6a6cae0c9606\") on node \"crc\" DevicePath \"\"" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.166492 4754 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/909351bf-3608-40e6-9f93-bffa1ed74945-server-conf\") on node \"crc\" DevicePath \"\"" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.166553 4754 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/909351bf-3608-40e6-9f93-bffa1ed74945-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.278105 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.290385 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.310518 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 05 20:35:07 crc kubenswrapper[4754]: E0105 20:35:07.310999 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="909351bf-3608-40e6-9f93-bffa1ed74945" containerName="setup-container" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.311015 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="909351bf-3608-40e6-9f93-bffa1ed74945" containerName="setup-container" Jan 05 20:35:07 crc kubenswrapper[4754]: E0105 20:35:07.311033 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3480441f-0f65-43f5-93aa-798a5011e71d" containerName="extract-content" Jan 05 20:35:07 crc kubenswrapper[4754]: 
I0105 20:35:07.311040 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="3480441f-0f65-43f5-93aa-798a5011e71d" containerName="extract-content" Jan 05 20:35:07 crc kubenswrapper[4754]: E0105 20:35:07.311056 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="909351bf-3608-40e6-9f93-bffa1ed74945" containerName="rabbitmq" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.311062 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="909351bf-3608-40e6-9f93-bffa1ed74945" containerName="rabbitmq" Jan 05 20:35:07 crc kubenswrapper[4754]: E0105 20:35:07.311083 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3480441f-0f65-43f5-93aa-798a5011e71d" containerName="extract-utilities" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.311089 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="3480441f-0f65-43f5-93aa-798a5011e71d" containerName="extract-utilities" Jan 05 20:35:07 crc kubenswrapper[4754]: E0105 20:35:07.311109 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3480441f-0f65-43f5-93aa-798a5011e71d" containerName="registry-server" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.311115 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="3480441f-0f65-43f5-93aa-798a5011e71d" containerName="registry-server" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.311390 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="3480441f-0f65-43f5-93aa-798a5011e71d" containerName="registry-server" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.311421 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="909351bf-3608-40e6-9f93-bffa1ed74945" containerName="rabbitmq" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.313019 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.332478 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.473110 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1d607fcf-186e-407f-b95a-5a6b9ecad255-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1d607fcf-186e-407f-b95a-5a6b9ecad255\") " pod="openstack/rabbitmq-server-0" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.473217 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1d607fcf-186e-407f-b95a-5a6b9ecad255-config-data\") pod \"rabbitmq-server-0\" (UID: \"1d607fcf-186e-407f-b95a-5a6b9ecad255\") " pod="openstack/rabbitmq-server-0" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.473255 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1d607fcf-186e-407f-b95a-5a6b9ecad255-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1d607fcf-186e-407f-b95a-5a6b9ecad255\") " pod="openstack/rabbitmq-server-0" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.473344 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1d607fcf-186e-407f-b95a-5a6b9ecad255-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1d607fcf-186e-407f-b95a-5a6b9ecad255\") " pod="openstack/rabbitmq-server-0" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.473403 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/1d607fcf-186e-407f-b95a-5a6b9ecad255-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1d607fcf-186e-407f-b95a-5a6b9ecad255\") " pod="openstack/rabbitmq-server-0" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.473426 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1d607fcf-186e-407f-b95a-5a6b9ecad255-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1d607fcf-186e-407f-b95a-5a6b9ecad255\") " pod="openstack/rabbitmq-server-0" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.473448 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kp6k\" (UniqueName: \"kubernetes.io/projected/1d607fcf-186e-407f-b95a-5a6b9ecad255-kube-api-access-6kp6k\") pod \"rabbitmq-server-0\" (UID: \"1d607fcf-186e-407f-b95a-5a6b9ecad255\") " pod="openstack/rabbitmq-server-0" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.473469 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8af03a1a-c755-48d5-8d0d-6a6cae0c9606\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8af03a1a-c755-48d5-8d0d-6a6cae0c9606\") pod \"rabbitmq-server-0\" (UID: \"1d607fcf-186e-407f-b95a-5a6b9ecad255\") " pod="openstack/rabbitmq-server-0" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.473510 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1d607fcf-186e-407f-b95a-5a6b9ecad255-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1d607fcf-186e-407f-b95a-5a6b9ecad255\") " pod="openstack/rabbitmq-server-0" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.473532 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/1d607fcf-186e-407f-b95a-5a6b9ecad255-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1d607fcf-186e-407f-b95a-5a6b9ecad255\") " pod="openstack/rabbitmq-server-0" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.473560 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1d607fcf-186e-407f-b95a-5a6b9ecad255-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1d607fcf-186e-407f-b95a-5a6b9ecad255\") " pod="openstack/rabbitmq-server-0" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.575907 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1d607fcf-186e-407f-b95a-5a6b9ecad255-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1d607fcf-186e-407f-b95a-5a6b9ecad255\") " pod="openstack/rabbitmq-server-0" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.575973 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1d607fcf-186e-407f-b95a-5a6b9ecad255-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1d607fcf-186e-407f-b95a-5a6b9ecad255\") " pod="openstack/rabbitmq-server-0" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.576035 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp6k\" (UniqueName: \"kubernetes.io/projected/1d607fcf-186e-407f-b95a-5a6b9ecad255-kube-api-access-6kp6k\") pod \"rabbitmq-server-0\" (UID: \"1d607fcf-186e-407f-b95a-5a6b9ecad255\") " pod="openstack/rabbitmq-server-0" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.576071 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8af03a1a-c755-48d5-8d0d-6a6cae0c9606\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8af03a1a-c755-48d5-8d0d-6a6cae0c9606\") pod 
\"rabbitmq-server-0\" (UID: \"1d607fcf-186e-407f-b95a-5a6b9ecad255\") " pod="openstack/rabbitmq-server-0" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.576122 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1d607fcf-186e-407f-b95a-5a6b9ecad255-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1d607fcf-186e-407f-b95a-5a6b9ecad255\") " pod="openstack/rabbitmq-server-0" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.576154 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1d607fcf-186e-407f-b95a-5a6b9ecad255-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1d607fcf-186e-407f-b95a-5a6b9ecad255\") " pod="openstack/rabbitmq-server-0" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.576185 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1d607fcf-186e-407f-b95a-5a6b9ecad255-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1d607fcf-186e-407f-b95a-5a6b9ecad255\") " pod="openstack/rabbitmq-server-0" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.576252 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1d607fcf-186e-407f-b95a-5a6b9ecad255-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1d607fcf-186e-407f-b95a-5a6b9ecad255\") " pod="openstack/rabbitmq-server-0" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.576344 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1d607fcf-186e-407f-b95a-5a6b9ecad255-config-data\") pod \"rabbitmq-server-0\" (UID: \"1d607fcf-186e-407f-b95a-5a6b9ecad255\") " pod="openstack/rabbitmq-server-0" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.576383 
4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1d607fcf-186e-407f-b95a-5a6b9ecad255-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1d607fcf-186e-407f-b95a-5a6b9ecad255\") " pod="openstack/rabbitmq-server-0" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.576452 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1d607fcf-186e-407f-b95a-5a6b9ecad255-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1d607fcf-186e-407f-b95a-5a6b9ecad255\") " pod="openstack/rabbitmq-server-0" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.576538 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1d607fcf-186e-407f-b95a-5a6b9ecad255-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1d607fcf-186e-407f-b95a-5a6b9ecad255\") " pod="openstack/rabbitmq-server-0" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.577548 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1d607fcf-186e-407f-b95a-5a6b9ecad255-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1d607fcf-186e-407f-b95a-5a6b9ecad255\") " pod="openstack/rabbitmq-server-0" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.577730 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1d607fcf-186e-407f-b95a-5a6b9ecad255-config-data\") pod \"rabbitmq-server-0\" (UID: \"1d607fcf-186e-407f-b95a-5a6b9ecad255\") " pod="openstack/rabbitmq-server-0" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.577959 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/1d607fcf-186e-407f-b95a-5a6b9ecad255-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1d607fcf-186e-407f-b95a-5a6b9ecad255\") " pod="openstack/rabbitmq-server-0" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.578170 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1d607fcf-186e-407f-b95a-5a6b9ecad255-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1d607fcf-186e-407f-b95a-5a6b9ecad255\") " pod="openstack/rabbitmq-server-0" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.579998 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1d607fcf-186e-407f-b95a-5a6b9ecad255-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1d607fcf-186e-407f-b95a-5a6b9ecad255\") " pod="openstack/rabbitmq-server-0" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.580657 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1d607fcf-186e-407f-b95a-5a6b9ecad255-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1d607fcf-186e-407f-b95a-5a6b9ecad255\") " pod="openstack/rabbitmq-server-0" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.582463 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1d607fcf-186e-407f-b95a-5a6b9ecad255-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1d607fcf-186e-407f-b95a-5a6b9ecad255\") " pod="openstack/rabbitmq-server-0" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.584433 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1d607fcf-186e-407f-b95a-5a6b9ecad255-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1d607fcf-186e-407f-b95a-5a6b9ecad255\") " pod="openstack/rabbitmq-server-0" Jan 05 
20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.587345 4754 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.587383 4754 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8af03a1a-c755-48d5-8d0d-6a6cae0c9606\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8af03a1a-c755-48d5-8d0d-6a6cae0c9606\") pod \"rabbitmq-server-0\" (UID: \"1d607fcf-186e-407f-b95a-5a6b9ecad255\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/cc29d0fd818fb70d50c682ea0203e481751a2a06fff7877ad146f175aef014f4/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.598343 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kp6k\" (UniqueName: \"kubernetes.io/projected/1d607fcf-186e-407f-b95a-5a6b9ecad255-kube-api-access-6kp6k\") pod \"rabbitmq-server-0\" (UID: \"1d607fcf-186e-407f-b95a-5a6b9ecad255\") " pod="openstack/rabbitmq-server-0" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.605523 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="909351bf-3608-40e6-9f93-bffa1ed74945" path="/var/lib/kubelet/pods/909351bf-3608-40e6-9f93-bffa1ed74945/volumes" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.689181 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8af03a1a-c755-48d5-8d0d-6a6cae0c9606\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8af03a1a-c755-48d5-8d0d-6a6cae0c9606\") pod \"rabbitmq-server-0\" (UID: \"1d607fcf-186e-407f-b95a-5a6b9ecad255\") " pod="openstack/rabbitmq-server-0" Jan 05 20:35:07 crc kubenswrapper[4754]: I0105 20:35:07.930483 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 05 20:35:08 crc kubenswrapper[4754]: I0105 20:35:08.477474 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 05 20:35:08 crc kubenswrapper[4754]: I0105 20:35:08.972564 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1d607fcf-186e-407f-b95a-5a6b9ecad255","Type":"ContainerStarted","Data":"7cd39565a524b498e80110d58a490a0e51391d500fab5f95dc07449e098832e8"} Jan 05 20:35:11 crc kubenswrapper[4754]: I0105 20:35:11.000870 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1d607fcf-186e-407f-b95a-5a6b9ecad255","Type":"ContainerStarted","Data":"ceaa968cb0dc3fcf01612c5457703b80eba9419c9da30ba93e1434e65fee0eb3"} Jan 05 20:35:11 crc kubenswrapper[4754]: I0105 20:35:11.589792 4754 scope.go:117] "RemoveContainer" containerID="1ca93b3e456962380f07853ce3cbf0c6bfbaabb69bd3827ae3e68631c1bd1572" Jan 05 20:35:11 crc kubenswrapper[4754]: E0105 20:35:11.590578 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:35:26 crc kubenswrapper[4754]: I0105 20:35:26.589826 4754 scope.go:117] "RemoveContainer" containerID="1ca93b3e456962380f07853ce3cbf0c6bfbaabb69bd3827ae3e68631c1bd1572" Jan 05 20:35:26 crc kubenswrapper[4754]: E0105 20:35:26.592263 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:35:41 crc kubenswrapper[4754]: I0105 20:35:41.588318 4754 scope.go:117] "RemoveContainer" containerID="1ca93b3e456962380f07853ce3cbf0c6bfbaabb69bd3827ae3e68631c1bd1572" Jan 05 20:35:41 crc kubenswrapper[4754]: E0105 20:35:41.589260 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:35:44 crc kubenswrapper[4754]: I0105 20:35:44.465238 4754 generic.go:334] "Generic (PLEG): container finished" podID="1d607fcf-186e-407f-b95a-5a6b9ecad255" containerID="ceaa968cb0dc3fcf01612c5457703b80eba9419c9da30ba93e1434e65fee0eb3" exitCode=0 Jan 05 20:35:44 crc kubenswrapper[4754]: I0105 20:35:44.465341 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1d607fcf-186e-407f-b95a-5a6b9ecad255","Type":"ContainerDied","Data":"ceaa968cb0dc3fcf01612c5457703b80eba9419c9da30ba93e1434e65fee0eb3"} Jan 05 20:35:45 crc kubenswrapper[4754]: I0105 20:35:45.484935 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1d607fcf-186e-407f-b95a-5a6b9ecad255","Type":"ContainerStarted","Data":"dac0ab857c55bf86d7743dbb5ec744d9f25f2605b04cbc66a943ee384419fded"} Jan 05 20:35:45 crc kubenswrapper[4754]: I0105 20:35:45.485398 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 05 20:35:45 crc kubenswrapper[4754]: I0105 20:35:45.511284 4754 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.511263198 podStartE2EDuration="38.511263198s" podCreationTimestamp="2026-01-05 20:35:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 20:35:45.503387341 +0000 UTC m=+1832.212571215" watchObservedRunningTime="2026-01-05 20:35:45.511263198 +0000 UTC m=+1832.220447072" Jan 05 20:35:54 crc kubenswrapper[4754]: I0105 20:35:54.588789 4754 scope.go:117] "RemoveContainer" containerID="1ca93b3e456962380f07853ce3cbf0c6bfbaabb69bd3827ae3e68631c1bd1572" Jan 05 20:35:55 crc kubenswrapper[4754]: I0105 20:35:55.619227 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" event={"ID":"1ce145f2-f010-4086-963c-23e68ff9e280","Type":"ContainerStarted","Data":"e40575342b315983c6e6787d98aec3d2cd7b920f88c62fc6f121b384bc5e6f28"} Jan 05 20:35:57 crc kubenswrapper[4754]: I0105 20:35:57.936788 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 05 20:36:06 crc kubenswrapper[4754]: I0105 20:36:06.525145 4754 scope.go:117] "RemoveContainer" containerID="687fafa7cd79de691a485d8eaa980dd12b8f1c241eb506c55ef2fa560f073c42" Jan 05 20:36:06 crc kubenswrapper[4754]: I0105 20:36:06.558658 4754 scope.go:117] "RemoveContainer" containerID="018398e57dd76b95b1a3f87039759cf34ff5365027844eb9826ca3c4a196548a" Jan 05 20:36:06 crc kubenswrapper[4754]: I0105 20:36:06.587552 4754 scope.go:117] "RemoveContainer" containerID="544f40ea5f9ad7b659e8e81028db30af1084c6548cef6985dd30eba18d50bb78" Jan 05 20:36:06 crc kubenswrapper[4754]: I0105 20:36:06.615760 4754 scope.go:117] "RemoveContainer" containerID="91af205e7b3afcd2a6ee643bed05e8d5be3784292b57eebfc57b9aec2ecd7270" Jan 05 20:36:49 crc kubenswrapper[4754]: I0105 20:36:49.877892 4754 patch_prober.go:28] interesting 
pod/openshift-config-operator-7777fb866f-pjzck container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 20:36:49 crc kubenswrapper[4754]: I0105 20:36:49.877970 4754 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-pjzck container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 20:36:49 crc kubenswrapper[4754]: I0105 20:36:49.878704 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pjzck" podUID="9f282a77-59b3-4b2c-8c62-a2526d2a77b5" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 20:36:49 crc kubenswrapper[4754]: I0105 20:36:49.878795 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pjzck" podUID="9f282a77-59b3-4b2c-8c62-a2526d2a77b5" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 20:36:59 crc kubenswrapper[4754]: I0105 20:36:59.056382 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-rwh69"] Jan 05 20:36:59 crc kubenswrapper[4754]: I0105 20:36:59.078426 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/mysqld-exporter-cbf0-account-create-update-wjqq8"] Jan 05 20:36:59 crc kubenswrapper[4754]: I0105 20:36:59.096111 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-rwh69"] Jan 05 20:36:59 crc kubenswrapper[4754]: I0105 20:36:59.107729 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-cbf0-account-create-update-wjqq8"] Jan 05 20:36:59 crc kubenswrapper[4754]: I0105 20:36:59.609963 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36a1a647-6525-4da5-9e92-5478fc979997" path="/var/lib/kubelet/pods/36a1a647-6525-4da5-9e92-5478fc979997/volumes" Jan 05 20:36:59 crc kubenswrapper[4754]: I0105 20:36:59.612651 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3c62584-d1a0-43b1-afe9-ab12f4125396" path="/var/lib/kubelet/pods/a3c62584-d1a0-43b1-afe9-ab12f4125396/volumes" Jan 05 20:37:03 crc kubenswrapper[4754]: I0105 20:37:03.663309 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qsdjj"] Jan 05 20:37:03 crc kubenswrapper[4754]: I0105 20:37:03.676931 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qsdjj" Jan 05 20:37:03 crc kubenswrapper[4754]: I0105 20:37:03.691163 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qsdjj"] Jan 05 20:37:03 crc kubenswrapper[4754]: I0105 20:37:03.782015 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84ee9e9f-cf0e-418f-984a-336b8f441f04-catalog-content\") pod \"redhat-operators-qsdjj\" (UID: \"84ee9e9f-cf0e-418f-984a-336b8f441f04\") " pod="openshift-marketplace/redhat-operators-qsdjj" Jan 05 20:37:03 crc kubenswrapper[4754]: I0105 20:37:03.782193 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84ee9e9f-cf0e-418f-984a-336b8f441f04-utilities\") pod \"redhat-operators-qsdjj\" (UID: \"84ee9e9f-cf0e-418f-984a-336b8f441f04\") " pod="openshift-marketplace/redhat-operators-qsdjj" Jan 05 20:37:03 crc kubenswrapper[4754]: I0105 20:37:03.783801 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcmh8\" (UniqueName: \"kubernetes.io/projected/84ee9e9f-cf0e-418f-984a-336b8f441f04-kube-api-access-lcmh8\") pod \"redhat-operators-qsdjj\" (UID: \"84ee9e9f-cf0e-418f-984a-336b8f441f04\") " pod="openshift-marketplace/redhat-operators-qsdjj" Jan 05 20:37:03 crc kubenswrapper[4754]: I0105 20:37:03.887540 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcmh8\" (UniqueName: \"kubernetes.io/projected/84ee9e9f-cf0e-418f-984a-336b8f441f04-kube-api-access-lcmh8\") pod \"redhat-operators-qsdjj\" (UID: \"84ee9e9f-cf0e-418f-984a-336b8f441f04\") " pod="openshift-marketplace/redhat-operators-qsdjj" Jan 05 20:37:03 crc kubenswrapper[4754]: I0105 20:37:03.887646 4754 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84ee9e9f-cf0e-418f-984a-336b8f441f04-catalog-content\") pod \"redhat-operators-qsdjj\" (UID: \"84ee9e9f-cf0e-418f-984a-336b8f441f04\") " pod="openshift-marketplace/redhat-operators-qsdjj" Jan 05 20:37:03 crc kubenswrapper[4754]: I0105 20:37:03.887707 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84ee9e9f-cf0e-418f-984a-336b8f441f04-utilities\") pod \"redhat-operators-qsdjj\" (UID: \"84ee9e9f-cf0e-418f-984a-336b8f441f04\") " pod="openshift-marketplace/redhat-operators-qsdjj" Jan 05 20:37:03 crc kubenswrapper[4754]: I0105 20:37:03.888263 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84ee9e9f-cf0e-418f-984a-336b8f441f04-catalog-content\") pod \"redhat-operators-qsdjj\" (UID: \"84ee9e9f-cf0e-418f-984a-336b8f441f04\") " pod="openshift-marketplace/redhat-operators-qsdjj" Jan 05 20:37:03 crc kubenswrapper[4754]: I0105 20:37:03.888728 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84ee9e9f-cf0e-418f-984a-336b8f441f04-utilities\") pod \"redhat-operators-qsdjj\" (UID: \"84ee9e9f-cf0e-418f-984a-336b8f441f04\") " pod="openshift-marketplace/redhat-operators-qsdjj" Jan 05 20:37:03 crc kubenswrapper[4754]: I0105 20:37:03.909389 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcmh8\" (UniqueName: \"kubernetes.io/projected/84ee9e9f-cf0e-418f-984a-336b8f441f04-kube-api-access-lcmh8\") pod \"redhat-operators-qsdjj\" (UID: \"84ee9e9f-cf0e-418f-984a-336b8f441f04\") " pod="openshift-marketplace/redhat-operators-qsdjj" Jan 05 20:37:04 crc kubenswrapper[4754]: I0105 20:37:04.007332 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qsdjj" Jan 05 20:37:04 crc kubenswrapper[4754]: W0105 20:37:04.575500 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84ee9e9f_cf0e_418f_984a_336b8f441f04.slice/crio-91715eab70159da15c9d9d0f23de996255d7d87a3e9ae054796ee3fa03ff9049 WatchSource:0}: Error finding container 91715eab70159da15c9d9d0f23de996255d7d87a3e9ae054796ee3fa03ff9049: Status 404 returned error can't find the container with id 91715eab70159da15c9d9d0f23de996255d7d87a3e9ae054796ee3fa03ff9049 Jan 05 20:37:04 crc kubenswrapper[4754]: I0105 20:37:04.588679 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qsdjj"] Jan 05 20:37:05 crc kubenswrapper[4754]: I0105 20:37:05.600653 4754 generic.go:334] "Generic (PLEG): container finished" podID="84ee9e9f-cf0e-418f-984a-336b8f441f04" containerID="053797b28af3ef42a6ff54237d6ca3260665005ba76df9e37179f124c9839db7" exitCode=0 Jan 05 20:37:05 crc kubenswrapper[4754]: I0105 20:37:05.607164 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qsdjj" event={"ID":"84ee9e9f-cf0e-418f-984a-336b8f441f04","Type":"ContainerDied","Data":"053797b28af3ef42a6ff54237d6ca3260665005ba76df9e37179f124c9839db7"} Jan 05 20:37:05 crc kubenswrapper[4754]: I0105 20:37:05.607207 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qsdjj" event={"ID":"84ee9e9f-cf0e-418f-984a-336b8f441f04","Type":"ContainerStarted","Data":"91715eab70159da15c9d9d0f23de996255d7d87a3e9ae054796ee3fa03ff9049"} Jan 05 20:37:06 crc kubenswrapper[4754]: I0105 20:37:06.719433 4754 scope.go:117] "RemoveContainer" containerID="e0835e632cd18a6ad13cd82a605ee7d6a9ec2d402a0e45a2815a34352cf5fb4e" Jan 05 20:37:06 crc kubenswrapper[4754]: I0105 20:37:06.750434 4754 scope.go:117] "RemoveContainer" 
containerID="69a17ff3c459065b4d6bb253ef0b04b566234552e4feab0e23d32a280fea9810" Jan 05 20:37:07 crc kubenswrapper[4754]: I0105 20:37:07.630770 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qsdjj" event={"ID":"84ee9e9f-cf0e-418f-984a-336b8f441f04","Type":"ContainerStarted","Data":"74036827c3b2a2b450102be76795a32bc53223c2678470583fb2abdf1d3912d4"} Jan 05 20:37:08 crc kubenswrapper[4754]: I0105 20:37:08.042511 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-gfnpz"] Jan 05 20:37:08 crc kubenswrapper[4754]: I0105 20:37:08.059599 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-gfnpz"] Jan 05 20:37:10 crc kubenswrapper[4754]: I0105 20:37:10.038812 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d70ef99-cdf9-474e-8e77-d75ce8d7a7ed" path="/var/lib/kubelet/pods/4d70ef99-cdf9-474e-8e77-d75ce8d7a7ed/volumes" Jan 05 20:37:10 crc kubenswrapper[4754]: I0105 20:37:10.672504 4754 generic.go:334] "Generic (PLEG): container finished" podID="84ee9e9f-cf0e-418f-984a-336b8f441f04" containerID="74036827c3b2a2b450102be76795a32bc53223c2678470583fb2abdf1d3912d4" exitCode=0 Jan 05 20:37:10 crc kubenswrapper[4754]: I0105 20:37:10.672566 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qsdjj" event={"ID":"84ee9e9f-cf0e-418f-984a-336b8f441f04","Type":"ContainerDied","Data":"74036827c3b2a2b450102be76795a32bc53223c2678470583fb2abdf1d3912d4"} Jan 05 20:37:11 crc kubenswrapper[4754]: I0105 20:37:11.684273 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qsdjj" event={"ID":"84ee9e9f-cf0e-418f-984a-336b8f441f04","Type":"ContainerStarted","Data":"b286b2dfa4ceea213fa0e13d7d820ed42d5c4679319a6c1bae0c2e602ea23dc5"} Jan 05 20:37:11 crc kubenswrapper[4754]: I0105 20:37:11.713225 4754 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-marketplace/redhat-operators-qsdjj" podStartSLOduration=3.198477958 podStartE2EDuration="8.713203647s" podCreationTimestamp="2026-01-05 20:37:03 +0000 UTC" firstStartedPulling="2026-01-05 20:37:05.604588238 +0000 UTC m=+1912.313772112" lastFinishedPulling="2026-01-05 20:37:11.119313927 +0000 UTC m=+1917.828497801" observedRunningTime="2026-01-05 20:37:11.700768221 +0000 UTC m=+1918.409952115" watchObservedRunningTime="2026-01-05 20:37:11.713203647 +0000 UTC m=+1918.422387521" Jan 05 20:37:13 crc kubenswrapper[4754]: I0105 20:37:13.065569 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-fb57-account-create-update-kjlds"] Jan 05 20:37:13 crc kubenswrapper[4754]: I0105 20:37:13.079652 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-fb57-account-create-update-kjlds"] Jan 05 20:37:13 crc kubenswrapper[4754]: I0105 20:37:13.097371 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-7de3-account-create-update-pxbnf"] Jan 05 20:37:13 crc kubenswrapper[4754]: I0105 20:37:13.108143 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-2f6c-account-create-update-krtph"] Jan 05 20:37:13 crc kubenswrapper[4754]: I0105 20:37:13.118168 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6412-account-create-update-jpj4f"] Jan 05 20:37:13 crc kubenswrapper[4754]: I0105 20:37:13.128324 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-zpltv"] Jan 05 20:37:13 crc kubenswrapper[4754]: I0105 20:37:13.170967 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-s4cdh"] Jan 05 20:37:13 crc kubenswrapper[4754]: I0105 20:37:13.189212 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-2f6c-account-create-update-krtph"] Jan 05 20:37:13 crc kubenswrapper[4754]: I0105 20:37:13.203016 4754 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/glance-7de3-account-create-update-pxbnf"] Jan 05 20:37:13 crc kubenswrapper[4754]: I0105 20:37:13.216396 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-pdnxz"] Jan 05 20:37:13 crc kubenswrapper[4754]: I0105 20:37:13.243252 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-6412-account-create-update-jpj4f"] Jan 05 20:37:13 crc kubenswrapper[4754]: I0105 20:37:13.258883 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-s4cdh"] Jan 05 20:37:13 crc kubenswrapper[4754]: I0105 20:37:13.270182 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-zpltv"] Jan 05 20:37:13 crc kubenswrapper[4754]: I0105 20:37:13.281195 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-pdnxz"] Jan 05 20:37:13 crc kubenswrapper[4754]: I0105 20:37:13.611335 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="177b09ff-4291-4ef0-93f7-66a03d8c0fe2" path="/var/lib/kubelet/pods/177b09ff-4291-4ef0-93f7-66a03d8c0fe2/volumes" Jan 05 20:37:13 crc kubenswrapper[4754]: I0105 20:37:13.956919 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b3003ea-ab70-4762-b50d-bf15c06ea9a7" path="/var/lib/kubelet/pods/3b3003ea-ab70-4762-b50d-bf15c06ea9a7/volumes" Jan 05 20:37:13 crc kubenswrapper[4754]: I0105 20:37:13.958193 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58066c5a-8387-449d-9004-f2e6a1d37e53" path="/var/lib/kubelet/pods/58066c5a-8387-449d-9004-f2e6a1d37e53/volumes" Jan 05 20:37:13 crc kubenswrapper[4754]: I0105 20:37:13.959437 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="745e7c45-2a3b-4c75-b2a9-a4a1c38ec2fd" path="/var/lib/kubelet/pods/745e7c45-2a3b-4c75-b2a9-a4a1c38ec2fd/volumes" Jan 05 20:37:13 crc kubenswrapper[4754]: I0105 
20:37:13.960506 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8873df75-8512-43e0-8d94-d4e45e6904d4" path="/var/lib/kubelet/pods/8873df75-8512-43e0-8d94-d4e45e6904d4/volumes" Jan 05 20:37:13 crc kubenswrapper[4754]: I0105 20:37:13.961558 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ea55880-d313-447e-a193-e69f30a74375" path="/var/lib/kubelet/pods/9ea55880-d313-447e-a193-e69f30a74375/volumes" Jan 05 20:37:13 crc kubenswrapper[4754]: I0105 20:37:13.962952 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf5c2131-a9a1-4345-8ae8-069fba0812a5" path="/var/lib/kubelet/pods/cf5c2131-a9a1-4345-8ae8-069fba0812a5/volumes" Jan 05 20:37:14 crc kubenswrapper[4754]: I0105 20:37:14.007718 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qsdjj" Jan 05 20:37:14 crc kubenswrapper[4754]: I0105 20:37:14.007775 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qsdjj" Jan 05 20:37:15 crc kubenswrapper[4754]: I0105 20:37:15.080507 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qsdjj" podUID="84ee9e9f-cf0e-418f-984a-336b8f441f04" containerName="registry-server" probeResult="failure" output=< Jan 05 20:37:15 crc kubenswrapper[4754]: timeout: failed to connect service ":50051" within 1s Jan 05 20:37:15 crc kubenswrapper[4754]: > Jan 05 20:37:24 crc kubenswrapper[4754]: I0105 20:37:24.045321 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-lq6gx"] Jan 05 20:37:24 crc kubenswrapper[4754]: I0105 20:37:24.058505 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-lq6gx"] Jan 05 20:37:25 crc kubenswrapper[4754]: I0105 20:37:25.090085 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qsdjj" 
podUID="84ee9e9f-cf0e-418f-984a-336b8f441f04" containerName="registry-server" probeResult="failure" output=< Jan 05 20:37:25 crc kubenswrapper[4754]: timeout: failed to connect service ":50051" within 1s Jan 05 20:37:25 crc kubenswrapper[4754]: > Jan 05 20:37:25 crc kubenswrapper[4754]: I0105 20:37:25.616480 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fe79418-5702-4809-a084-4a14fa936263" path="/var/lib/kubelet/pods/6fe79418-5702-4809-a084-4a14fa936263/volumes" Jan 05 20:37:33 crc kubenswrapper[4754]: I0105 20:37:33.022142 4754 generic.go:334] "Generic (PLEG): container finished" podID="8c0bfa36-b48b-4bba-b358-22c0a1001f5b" containerID="63e7cefea97aa4e6b10206f5e26a3c540317844c9fc239507dfac79ad8ee3fea" exitCode=0 Jan 05 20:37:33 crc kubenswrapper[4754]: I0105 20:37:33.022347 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4rrjw" event={"ID":"8c0bfa36-b48b-4bba-b358-22c0a1001f5b","Type":"ContainerDied","Data":"63e7cefea97aa4e6b10206f5e26a3c540317844c9fc239507dfac79ad8ee3fea"} Jan 05 20:37:34 crc kubenswrapper[4754]: I0105 20:37:34.087063 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qsdjj" Jan 05 20:37:34 crc kubenswrapper[4754]: I0105 20:37:34.162397 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qsdjj" Jan 05 20:37:34 crc kubenswrapper[4754]: I0105 20:37:34.527603 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4rrjw" Jan 05 20:37:34 crc kubenswrapper[4754]: I0105 20:37:34.628460 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwszc\" (UniqueName: \"kubernetes.io/projected/8c0bfa36-b48b-4bba-b358-22c0a1001f5b-kube-api-access-mwszc\") pod \"8c0bfa36-b48b-4bba-b358-22c0a1001f5b\" (UID: \"8c0bfa36-b48b-4bba-b358-22c0a1001f5b\") " Jan 05 20:37:34 crc kubenswrapper[4754]: I0105 20:37:34.628681 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c0bfa36-b48b-4bba-b358-22c0a1001f5b-inventory\") pod \"8c0bfa36-b48b-4bba-b358-22c0a1001f5b\" (UID: \"8c0bfa36-b48b-4bba-b358-22c0a1001f5b\") " Jan 05 20:37:34 crc kubenswrapper[4754]: I0105 20:37:34.628725 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8c0bfa36-b48b-4bba-b358-22c0a1001f5b-ssh-key\") pod \"8c0bfa36-b48b-4bba-b358-22c0a1001f5b\" (UID: \"8c0bfa36-b48b-4bba-b358-22c0a1001f5b\") " Jan 05 20:37:34 crc kubenswrapper[4754]: I0105 20:37:34.628899 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c0bfa36-b48b-4bba-b358-22c0a1001f5b-bootstrap-combined-ca-bundle\") pod \"8c0bfa36-b48b-4bba-b358-22c0a1001f5b\" (UID: \"8c0bfa36-b48b-4bba-b358-22c0a1001f5b\") " Jan 05 20:37:34 crc kubenswrapper[4754]: I0105 20:37:34.636343 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c0bfa36-b48b-4bba-b358-22c0a1001f5b-kube-api-access-mwszc" (OuterVolumeSpecName: "kube-api-access-mwszc") pod "8c0bfa36-b48b-4bba-b358-22c0a1001f5b" (UID: "8c0bfa36-b48b-4bba-b358-22c0a1001f5b"). InnerVolumeSpecName "kube-api-access-mwszc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:37:34 crc kubenswrapper[4754]: I0105 20:37:34.640521 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c0bfa36-b48b-4bba-b358-22c0a1001f5b-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "8c0bfa36-b48b-4bba-b358-22c0a1001f5b" (UID: "8c0bfa36-b48b-4bba-b358-22c0a1001f5b"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:37:34 crc kubenswrapper[4754]: I0105 20:37:34.676628 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c0bfa36-b48b-4bba-b358-22c0a1001f5b-inventory" (OuterVolumeSpecName: "inventory") pod "8c0bfa36-b48b-4bba-b358-22c0a1001f5b" (UID: "8c0bfa36-b48b-4bba-b358-22c0a1001f5b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:37:34 crc kubenswrapper[4754]: I0105 20:37:34.680265 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c0bfa36-b48b-4bba-b358-22c0a1001f5b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "8c0bfa36-b48b-4bba-b358-22c0a1001f5b" (UID: "8c0bfa36-b48b-4bba-b358-22c0a1001f5b"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:37:34 crc kubenswrapper[4754]: I0105 20:37:34.734115 4754 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c0bfa36-b48b-4bba-b358-22c0a1001f5b-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:37:34 crc kubenswrapper[4754]: I0105 20:37:34.734150 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwszc\" (UniqueName: \"kubernetes.io/projected/8c0bfa36-b48b-4bba-b358-22c0a1001f5b-kube-api-access-mwszc\") on node \"crc\" DevicePath \"\"" Jan 05 20:37:34 crc kubenswrapper[4754]: I0105 20:37:34.734232 4754 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c0bfa36-b48b-4bba-b358-22c0a1001f5b-inventory\") on node \"crc\" DevicePath \"\"" Jan 05 20:37:34 crc kubenswrapper[4754]: I0105 20:37:34.734277 4754 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8c0bfa36-b48b-4bba-b358-22c0a1001f5b-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 05 20:37:34 crc kubenswrapper[4754]: I0105 20:37:34.844088 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qsdjj"] Jan 05 20:37:35 crc kubenswrapper[4754]: I0105 20:37:35.049729 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4rrjw" event={"ID":"8c0bfa36-b48b-4bba-b358-22c0a1001f5b","Type":"ContainerDied","Data":"63481e670ee46bdab14b88cf57111934d4f18081dbfbc31bc0f099e0067cbbc3"} Jan 05 20:37:35 crc kubenswrapper[4754]: I0105 20:37:35.049774 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63481e670ee46bdab14b88cf57111934d4f18081dbfbc31bc0f099e0067cbbc3" Jan 05 20:37:35 crc kubenswrapper[4754]: I0105 20:37:35.050472 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4rrjw" Jan 05 20:37:35 crc kubenswrapper[4754]: I0105 20:37:35.211352 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ld4k7"] Jan 05 20:37:35 crc kubenswrapper[4754]: E0105 20:37:35.211906 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c0bfa36-b48b-4bba-b358-22c0a1001f5b" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 05 20:37:35 crc kubenswrapper[4754]: I0105 20:37:35.211925 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c0bfa36-b48b-4bba-b358-22c0a1001f5b" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 05 20:37:35 crc kubenswrapper[4754]: I0105 20:37:35.212224 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c0bfa36-b48b-4bba-b358-22c0a1001f5b" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 05 20:37:35 crc kubenswrapper[4754]: I0105 20:37:35.213263 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ld4k7" Jan 05 20:37:35 crc kubenswrapper[4754]: I0105 20:37:35.218809 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 05 20:37:35 crc kubenswrapper[4754]: I0105 20:37:35.219133 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xdt6h" Jan 05 20:37:35 crc kubenswrapper[4754]: I0105 20:37:35.219595 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 05 20:37:35 crc kubenswrapper[4754]: I0105 20:37:35.219679 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 05 20:37:35 crc kubenswrapper[4754]: I0105 20:37:35.227589 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ld4k7"] Jan 05 20:37:35 crc kubenswrapper[4754]: I0105 20:37:35.353610 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcf2b\" (UniqueName: \"kubernetes.io/projected/74141d35-f30c-469c-a926-7a5274ca536d-kube-api-access-kcf2b\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ld4k7\" (UID: \"74141d35-f30c-469c-a926-7a5274ca536d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ld4k7" Jan 05 20:37:35 crc kubenswrapper[4754]: I0105 20:37:35.353830 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/74141d35-f30c-469c-a926-7a5274ca536d-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ld4k7\" (UID: \"74141d35-f30c-469c-a926-7a5274ca536d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ld4k7" Jan 05 20:37:35 crc kubenswrapper[4754]: I0105 20:37:35.354338 
4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/74141d35-f30c-469c-a926-7a5274ca536d-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ld4k7\" (UID: \"74141d35-f30c-469c-a926-7a5274ca536d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ld4k7" Jan 05 20:37:35 crc kubenswrapper[4754]: I0105 20:37:35.456431 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/74141d35-f30c-469c-a926-7a5274ca536d-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ld4k7\" (UID: \"74141d35-f30c-469c-a926-7a5274ca536d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ld4k7" Jan 05 20:37:35 crc kubenswrapper[4754]: I0105 20:37:35.456543 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/74141d35-f30c-469c-a926-7a5274ca536d-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ld4k7\" (UID: \"74141d35-f30c-469c-a926-7a5274ca536d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ld4k7" Jan 05 20:37:35 crc kubenswrapper[4754]: I0105 20:37:35.456692 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcf2b\" (UniqueName: \"kubernetes.io/projected/74141d35-f30c-469c-a926-7a5274ca536d-kube-api-access-kcf2b\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ld4k7\" (UID: \"74141d35-f30c-469c-a926-7a5274ca536d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ld4k7" Jan 05 20:37:35 crc kubenswrapper[4754]: I0105 20:37:35.469074 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/74141d35-f30c-469c-a926-7a5274ca536d-ssh-key\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-ld4k7\" (UID: \"74141d35-f30c-469c-a926-7a5274ca536d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ld4k7" Jan 05 20:37:35 crc kubenswrapper[4754]: I0105 20:37:35.469436 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/74141d35-f30c-469c-a926-7a5274ca536d-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ld4k7\" (UID: \"74141d35-f30c-469c-a926-7a5274ca536d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ld4k7" Jan 05 20:37:35 crc kubenswrapper[4754]: I0105 20:37:35.476795 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcf2b\" (UniqueName: \"kubernetes.io/projected/74141d35-f30c-469c-a926-7a5274ca536d-kube-api-access-kcf2b\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ld4k7\" (UID: \"74141d35-f30c-469c-a926-7a5274ca536d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ld4k7" Jan 05 20:37:35 crc kubenswrapper[4754]: I0105 20:37:35.541158 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ld4k7" Jan 05 20:37:36 crc kubenswrapper[4754]: I0105 20:37:36.062979 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qsdjj" podUID="84ee9e9f-cf0e-418f-984a-336b8f441f04" containerName="registry-server" containerID="cri-o://b286b2dfa4ceea213fa0e13d7d820ed42d5c4679319a6c1bae0c2e602ea23dc5" gracePeriod=2 Jan 05 20:37:36 crc kubenswrapper[4754]: I0105 20:37:36.139489 4754 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 05 20:37:36 crc kubenswrapper[4754]: I0105 20:37:36.139819 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ld4k7"] Jan 05 20:37:36 crc kubenswrapper[4754]: I0105 20:37:36.423694 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qsdjj" Jan 05 20:37:36 crc kubenswrapper[4754]: I0105 20:37:36.478532 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcmh8\" (UniqueName: \"kubernetes.io/projected/84ee9e9f-cf0e-418f-984a-336b8f441f04-kube-api-access-lcmh8\") pod \"84ee9e9f-cf0e-418f-984a-336b8f441f04\" (UID: \"84ee9e9f-cf0e-418f-984a-336b8f441f04\") " Jan 05 20:37:36 crc kubenswrapper[4754]: I0105 20:37:36.478603 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84ee9e9f-cf0e-418f-984a-336b8f441f04-catalog-content\") pod \"84ee9e9f-cf0e-418f-984a-336b8f441f04\" (UID: \"84ee9e9f-cf0e-418f-984a-336b8f441f04\") " Jan 05 20:37:36 crc kubenswrapper[4754]: I0105 20:37:36.478621 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84ee9e9f-cf0e-418f-984a-336b8f441f04-utilities\") pod 
\"84ee9e9f-cf0e-418f-984a-336b8f441f04\" (UID: \"84ee9e9f-cf0e-418f-984a-336b8f441f04\") " Jan 05 20:37:36 crc kubenswrapper[4754]: I0105 20:37:36.479397 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84ee9e9f-cf0e-418f-984a-336b8f441f04-utilities" (OuterVolumeSpecName: "utilities") pod "84ee9e9f-cf0e-418f-984a-336b8f441f04" (UID: "84ee9e9f-cf0e-418f-984a-336b8f441f04"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:37:36 crc kubenswrapper[4754]: I0105 20:37:36.484069 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84ee9e9f-cf0e-418f-984a-336b8f441f04-kube-api-access-lcmh8" (OuterVolumeSpecName: "kube-api-access-lcmh8") pod "84ee9e9f-cf0e-418f-984a-336b8f441f04" (UID: "84ee9e9f-cf0e-418f-984a-336b8f441f04"). InnerVolumeSpecName "kube-api-access-lcmh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:37:36 crc kubenswrapper[4754]: I0105 20:37:36.581920 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84ee9e9f-cf0e-418f-984a-336b8f441f04-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 20:37:36 crc kubenswrapper[4754]: I0105 20:37:36.581956 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lcmh8\" (UniqueName: \"kubernetes.io/projected/84ee9e9f-cf0e-418f-984a-336b8f441f04-kube-api-access-lcmh8\") on node \"crc\" DevicePath \"\"" Jan 05 20:37:36 crc kubenswrapper[4754]: I0105 20:37:36.603206 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84ee9e9f-cf0e-418f-984a-336b8f441f04-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "84ee9e9f-cf0e-418f-984a-336b8f441f04" (UID: "84ee9e9f-cf0e-418f-984a-336b8f441f04"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:37:36 crc kubenswrapper[4754]: I0105 20:37:36.684090 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84ee9e9f-cf0e-418f-984a-336b8f441f04-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 20:37:37 crc kubenswrapper[4754]: I0105 20:37:37.079847 4754 generic.go:334] "Generic (PLEG): container finished" podID="84ee9e9f-cf0e-418f-984a-336b8f441f04" containerID="b286b2dfa4ceea213fa0e13d7d820ed42d5c4679319a6c1bae0c2e602ea23dc5" exitCode=0 Jan 05 20:37:37 crc kubenswrapper[4754]: I0105 20:37:37.080166 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qsdjj" event={"ID":"84ee9e9f-cf0e-418f-984a-336b8f441f04","Type":"ContainerDied","Data":"b286b2dfa4ceea213fa0e13d7d820ed42d5c4679319a6c1bae0c2e602ea23dc5"} Jan 05 20:37:37 crc kubenswrapper[4754]: I0105 20:37:37.080202 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qsdjj" event={"ID":"84ee9e9f-cf0e-418f-984a-336b8f441f04","Type":"ContainerDied","Data":"91715eab70159da15c9d9d0f23de996255d7d87a3e9ae054796ee3fa03ff9049"} Jan 05 20:37:37 crc kubenswrapper[4754]: I0105 20:37:37.080222 4754 scope.go:117] "RemoveContainer" containerID="b286b2dfa4ceea213fa0e13d7d820ed42d5c4679319a6c1bae0c2e602ea23dc5" Jan 05 20:37:37 crc kubenswrapper[4754]: I0105 20:37:37.080292 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qsdjj" Jan 05 20:37:37 crc kubenswrapper[4754]: I0105 20:37:37.082651 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ld4k7" event={"ID":"74141d35-f30c-469c-a926-7a5274ca536d","Type":"ContainerStarted","Data":"0644ae7403d183fad7a8b685e59cc9db6e98453f033fa5cb127f3c7c88accc80"} Jan 05 20:37:37 crc kubenswrapper[4754]: I0105 20:37:37.082726 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ld4k7" event={"ID":"74141d35-f30c-469c-a926-7a5274ca536d","Type":"ContainerStarted","Data":"8e6e29a24068d0873a5c54a9fc248351d04cc473c8475a109aea2c4b2bcdc14a"} Jan 05 20:37:37 crc kubenswrapper[4754]: I0105 20:37:37.110806 4754 scope.go:117] "RemoveContainer" containerID="74036827c3b2a2b450102be76795a32bc53223c2678470583fb2abdf1d3912d4" Jan 05 20:37:37 crc kubenswrapper[4754]: I0105 20:37:37.113326 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ld4k7" podStartSLOduration=1.5171884530000002 podStartE2EDuration="2.113309112s" podCreationTimestamp="2026-01-05 20:37:35 +0000 UTC" firstStartedPulling="2026-01-05 20:37:36.139229374 +0000 UTC m=+1942.848413248" lastFinishedPulling="2026-01-05 20:37:36.735350033 +0000 UTC m=+1943.444533907" observedRunningTime="2026-01-05 20:37:37.110525769 +0000 UTC m=+1943.819709673" watchObservedRunningTime="2026-01-05 20:37:37.113309112 +0000 UTC m=+1943.822492986" Jan 05 20:37:37 crc kubenswrapper[4754]: I0105 20:37:37.144385 4754 scope.go:117] "RemoveContainer" containerID="053797b28af3ef42a6ff54237d6ca3260665005ba76df9e37179f124c9839db7" Jan 05 20:37:37 crc kubenswrapper[4754]: I0105 20:37:37.151042 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qsdjj"] Jan 05 20:37:37 crc kubenswrapper[4754]: I0105 
20:37:37.165516 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qsdjj"] Jan 05 20:37:37 crc kubenswrapper[4754]: I0105 20:37:37.197283 4754 scope.go:117] "RemoveContainer" containerID="b286b2dfa4ceea213fa0e13d7d820ed42d5c4679319a6c1bae0c2e602ea23dc5" Jan 05 20:37:37 crc kubenswrapper[4754]: E0105 20:37:37.197846 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b286b2dfa4ceea213fa0e13d7d820ed42d5c4679319a6c1bae0c2e602ea23dc5\": container with ID starting with b286b2dfa4ceea213fa0e13d7d820ed42d5c4679319a6c1bae0c2e602ea23dc5 not found: ID does not exist" containerID="b286b2dfa4ceea213fa0e13d7d820ed42d5c4679319a6c1bae0c2e602ea23dc5" Jan 05 20:37:37 crc kubenswrapper[4754]: I0105 20:37:37.197891 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b286b2dfa4ceea213fa0e13d7d820ed42d5c4679319a6c1bae0c2e602ea23dc5"} err="failed to get container status \"b286b2dfa4ceea213fa0e13d7d820ed42d5c4679319a6c1bae0c2e602ea23dc5\": rpc error: code = NotFound desc = could not find container \"b286b2dfa4ceea213fa0e13d7d820ed42d5c4679319a6c1bae0c2e602ea23dc5\": container with ID starting with b286b2dfa4ceea213fa0e13d7d820ed42d5c4679319a6c1bae0c2e602ea23dc5 not found: ID does not exist" Jan 05 20:37:37 crc kubenswrapper[4754]: I0105 20:37:37.197918 4754 scope.go:117] "RemoveContainer" containerID="74036827c3b2a2b450102be76795a32bc53223c2678470583fb2abdf1d3912d4" Jan 05 20:37:37 crc kubenswrapper[4754]: E0105 20:37:37.198461 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74036827c3b2a2b450102be76795a32bc53223c2678470583fb2abdf1d3912d4\": container with ID starting with 74036827c3b2a2b450102be76795a32bc53223c2678470583fb2abdf1d3912d4 not found: ID does not exist" 
containerID="74036827c3b2a2b450102be76795a32bc53223c2678470583fb2abdf1d3912d4" Jan 05 20:37:37 crc kubenswrapper[4754]: I0105 20:37:37.198483 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74036827c3b2a2b450102be76795a32bc53223c2678470583fb2abdf1d3912d4"} err="failed to get container status \"74036827c3b2a2b450102be76795a32bc53223c2678470583fb2abdf1d3912d4\": rpc error: code = NotFound desc = could not find container \"74036827c3b2a2b450102be76795a32bc53223c2678470583fb2abdf1d3912d4\": container with ID starting with 74036827c3b2a2b450102be76795a32bc53223c2678470583fb2abdf1d3912d4 not found: ID does not exist" Jan 05 20:37:37 crc kubenswrapper[4754]: I0105 20:37:37.198498 4754 scope.go:117] "RemoveContainer" containerID="053797b28af3ef42a6ff54237d6ca3260665005ba76df9e37179f124c9839db7" Jan 05 20:37:37 crc kubenswrapper[4754]: E0105 20:37:37.198993 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"053797b28af3ef42a6ff54237d6ca3260665005ba76df9e37179f124c9839db7\": container with ID starting with 053797b28af3ef42a6ff54237d6ca3260665005ba76df9e37179f124c9839db7 not found: ID does not exist" containerID="053797b28af3ef42a6ff54237d6ca3260665005ba76df9e37179f124c9839db7" Jan 05 20:37:37 crc kubenswrapper[4754]: I0105 20:37:37.199015 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"053797b28af3ef42a6ff54237d6ca3260665005ba76df9e37179f124c9839db7"} err="failed to get container status \"053797b28af3ef42a6ff54237d6ca3260665005ba76df9e37179f124c9839db7\": rpc error: code = NotFound desc = could not find container \"053797b28af3ef42a6ff54237d6ca3260665005ba76df9e37179f124c9839db7\": container with ID starting with 053797b28af3ef42a6ff54237d6ca3260665005ba76df9e37179f124c9839db7 not found: ID does not exist" Jan 05 20:37:37 crc kubenswrapper[4754]: I0105 20:37:37.612764 4754 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84ee9e9f-cf0e-418f-984a-336b8f441f04" path="/var/lib/kubelet/pods/84ee9e9f-cf0e-418f-984a-336b8f441f04/volumes" Jan 05 20:37:44 crc kubenswrapper[4754]: I0105 20:37:44.069575 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-gx6mr"] Jan 05 20:37:44 crc kubenswrapper[4754]: I0105 20:37:44.091402 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-225d-account-create-update-9f5mg"] Jan 05 20:37:44 crc kubenswrapper[4754]: I0105 20:37:44.138751 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-81e1-account-create-update-dqq7l"] Jan 05 20:37:44 crc kubenswrapper[4754]: I0105 20:37:44.151501 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-pbdm4"] Jan 05 20:37:44 crc kubenswrapper[4754]: I0105 20:37:44.160183 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-whhvh"] Jan 05 20:37:44 crc kubenswrapper[4754]: I0105 20:37:44.173009 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-6nvbl"] Jan 05 20:37:44 crc kubenswrapper[4754]: I0105 20:37:44.183738 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-59a7-account-create-update-t9cmc"] Jan 05 20:37:44 crc kubenswrapper[4754]: I0105 20:37:44.193574 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-gx6mr"] Jan 05 20:37:44 crc kubenswrapper[4754]: I0105 20:37:44.203424 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-225d-account-create-update-9f5mg"] Jan 05 20:37:44 crc kubenswrapper[4754]: I0105 20:37:44.213363 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-4530-account-create-update-6t94n"] Jan 05 20:37:44 crc kubenswrapper[4754]: I0105 20:37:44.227448 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/neutron-db-create-pbdm4"] Jan 05 20:37:44 crc kubenswrapper[4754]: I0105 20:37:44.239220 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-81e1-account-create-update-dqq7l"] Jan 05 20:37:44 crc kubenswrapper[4754]: I0105 20:37:44.250685 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-whhvh"] Jan 05 20:37:44 crc kubenswrapper[4754]: I0105 20:37:44.264046 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-6nvbl"] Jan 05 20:37:44 crc kubenswrapper[4754]: I0105 20:37:44.276717 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-59a7-account-create-update-t9cmc"] Jan 05 20:37:44 crc kubenswrapper[4754]: I0105 20:37:44.289036 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-4530-account-create-update-6t94n"] Jan 05 20:37:45 crc kubenswrapper[4754]: I0105 20:37:45.606554 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ac29b87-268c-4066-a428-db4d5b5b595f" path="/var/lib/kubelet/pods/1ac29b87-268c-4066-a428-db4d5b5b595f/volumes" Jan 05 20:37:45 crc kubenswrapper[4754]: I0105 20:37:45.609695 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="495f6c79-65c2-4b10-8695-1d11ee63fa93" path="/var/lib/kubelet/pods/495f6c79-65c2-4b10-8695-1d11ee63fa93/volumes" Jan 05 20:37:45 crc kubenswrapper[4754]: I0105 20:37:45.613244 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a1c1900-32c4-4245-ae03-f71435b2259f" path="/var/lib/kubelet/pods/4a1c1900-32c4-4245-ae03-f71435b2259f/volumes" Jan 05 20:37:45 crc kubenswrapper[4754]: I0105 20:37:45.617597 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e6372e5-4136-46a8-aee5-181b6277ad6b" path="/var/lib/kubelet/pods/5e6372e5-4136-46a8-aee5-181b6277ad6b/volumes" Jan 05 20:37:45 crc kubenswrapper[4754]: I0105 20:37:45.620809 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="656e1694-0fbd-4f0a-b1cd-50594514afd3" path="/var/lib/kubelet/pods/656e1694-0fbd-4f0a-b1cd-50594514afd3/volumes" Jan 05 20:37:45 crc kubenswrapper[4754]: I0105 20:37:45.624287 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bab2e4a-be19-46b8-8f52-f1b59b44c4f6" path="/var/lib/kubelet/pods/6bab2e4a-be19-46b8-8f52-f1b59b44c4f6/volumes" Jan 05 20:37:45 crc kubenswrapper[4754]: I0105 20:37:45.627019 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85825127-1783-4a9b-8205-60891099a53e" path="/var/lib/kubelet/pods/85825127-1783-4a9b-8205-60891099a53e/volumes" Jan 05 20:37:45 crc kubenswrapper[4754]: I0105 20:37:45.630624 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be1d6939-e90a-46a4-87f0-05abc48224b9" path="/var/lib/kubelet/pods/be1d6939-e90a-46a4-87f0-05abc48224b9/volumes" Jan 05 20:37:55 crc kubenswrapper[4754]: I0105 20:37:55.050504 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-zhqf4"] Jan 05 20:37:55 crc kubenswrapper[4754]: I0105 20:37:55.065542 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-zhqf4"] Jan 05 20:37:55 crc kubenswrapper[4754]: I0105 20:37:55.613661 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="687ba708-6c04-4435-9acd-76dfdb4311e2" path="/var/lib/kubelet/pods/687ba708-6c04-4435-9acd-76dfdb4311e2/volumes" Jan 05 20:37:57 crc kubenswrapper[4754]: I0105 20:37:57.035056 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-j8qkw"] Jan 05 20:37:57 crc kubenswrapper[4754]: I0105 20:37:57.053766 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-j8qkw"] Jan 05 20:37:57 crc kubenswrapper[4754]: I0105 20:37:57.611548 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08106021-9f77-4a54-8ec2-de2bfe4db63c" 
path="/var/lib/kubelet/pods/08106021-9f77-4a54-8ec2-de2bfe4db63c/volumes" Jan 05 20:38:06 crc kubenswrapper[4754]: I0105 20:38:06.870015 4754 scope.go:117] "RemoveContainer" containerID="6c5ff6faa38623fc188aa6f4d099d68de166aeb6769816fbefbbafdaa495ba8d" Jan 05 20:38:06 crc kubenswrapper[4754]: I0105 20:38:06.908716 4754 scope.go:117] "RemoveContainer" containerID="704b872819837b61c2ef52843447960b6f9c49f83f1718166f553ba4e1602e04" Jan 05 20:38:07 crc kubenswrapper[4754]: I0105 20:38:07.001656 4754 scope.go:117] "RemoveContainer" containerID="45874373e82966fda92dbba58cc57ecb378f59a660fae9d37eeda6f3aff61505" Jan 05 20:38:07 crc kubenswrapper[4754]: I0105 20:38:07.083399 4754 scope.go:117] "RemoveContainer" containerID="99d91e63a5ebad8b3a8214a7598cf8ef71778f0098ca29e1527ef777cd1af988" Jan 05 20:38:07 crc kubenswrapper[4754]: I0105 20:38:07.170471 4754 scope.go:117] "RemoveContainer" containerID="49033cb3a110591ebff17507b044a07ee8d0a70d136d57e7e5dd097082e869c6" Jan 05 20:38:07 crc kubenswrapper[4754]: I0105 20:38:07.225065 4754 scope.go:117] "RemoveContainer" containerID="52fe1a81686ee298e7aaaff20bb3e70e2d827582442d4c382c1a2f6d57b860a0" Jan 05 20:38:07 crc kubenswrapper[4754]: I0105 20:38:07.296796 4754 scope.go:117] "RemoveContainer" containerID="b787e2af61560104d1cac0b9ac110c21dc4156c7c157ab5c6f8885459197baa9" Jan 05 20:38:07 crc kubenswrapper[4754]: I0105 20:38:07.321284 4754 scope.go:117] "RemoveContainer" containerID="503dc31b309feaca547af6cbe95cae3c765190807d3f146653e482cdb320d7f4" Jan 05 20:38:07 crc kubenswrapper[4754]: I0105 20:38:07.348429 4754 scope.go:117] "RemoveContainer" containerID="bddede9335a3863f5f6f3f1baa10d638ba1e78a009c4ea03c4d0fb2092b5902e" Jan 05 20:38:07 crc kubenswrapper[4754]: I0105 20:38:07.378680 4754 scope.go:117] "RemoveContainer" containerID="f2481a7e32308a3560cf4b6ed88971b004cd7c6ff276c9faf771528546734f5b" Jan 05 20:38:07 crc kubenswrapper[4754]: I0105 20:38:07.404503 4754 scope.go:117] "RemoveContainer" 
containerID="f04f8ec99d2d2635a27636120f8e453b6903b58832062427097677ac517d37df" Jan 05 20:38:07 crc kubenswrapper[4754]: I0105 20:38:07.427281 4754 scope.go:117] "RemoveContainer" containerID="83f01fecb7e758b786af5a7077862210a02323ef61016ea06e2b3653a65b02d4" Jan 05 20:38:07 crc kubenswrapper[4754]: I0105 20:38:07.453519 4754 scope.go:117] "RemoveContainer" containerID="4caad7c30f61dbd6b6c6ef02f12ab054d334bdf5a12ea0a6f4bcb0aaf737faf4" Jan 05 20:38:07 crc kubenswrapper[4754]: I0105 20:38:07.480850 4754 scope.go:117] "RemoveContainer" containerID="a24054b0a175c56e1337c3b973ce7b6f5709f7532a441aee579688787f0b4ee5" Jan 05 20:38:07 crc kubenswrapper[4754]: I0105 20:38:07.504558 4754 scope.go:117] "RemoveContainer" containerID="7f57e8c92d7d6c226361a6311cda27e9b68bdcf3938a1c9ade5a48ed14163f6f" Jan 05 20:38:07 crc kubenswrapper[4754]: I0105 20:38:07.534615 4754 scope.go:117] "RemoveContainer" containerID="88402220f780722e07e98d69e4607de220f89b445d1a1b47e4f5916d1aa48b4c" Jan 05 20:38:07 crc kubenswrapper[4754]: I0105 20:38:07.562578 4754 scope.go:117] "RemoveContainer" containerID="cc1c34ff1f03bb075858d775b280315d819cda46e7ffc141430b72fde14c565a" Jan 05 20:38:07 crc kubenswrapper[4754]: I0105 20:38:07.583409 4754 scope.go:117] "RemoveContainer" containerID="d65036cacc8453c4f8ee49771bcc5fa30e35a937e206c23e5f20583f59fc87e6" Jan 05 20:38:07 crc kubenswrapper[4754]: I0105 20:38:07.604732 4754 scope.go:117] "RemoveContainer" containerID="d65acbcb780b9254a8dfc2d6215f478c822304f1158b4c400b1e0dfa3c9a76a5" Jan 05 20:38:07 crc kubenswrapper[4754]: I0105 20:38:07.626927 4754 scope.go:117] "RemoveContainer" containerID="b2b0b3aac6efb14d3b8ce9a40a2bc576aab7239fa6fefc0edbc919111a5c01ca" Jan 05 20:38:18 crc kubenswrapper[4754]: I0105 20:38:18.109537 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Jan 05 20:38:18 crc kubenswrapper[4754]: I0105 20:38:18.110109 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 20:38:32 crc kubenswrapper[4754]: I0105 20:38:32.058929 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-qz8zq"] Jan 05 20:38:32 crc kubenswrapper[4754]: I0105 20:38:32.071723 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-qz8zq"] Jan 05 20:38:33 crc kubenswrapper[4754]: I0105 20:38:33.628743 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ccb69ed-7f2d-4341-a474-90ac1ed5a0fd" path="/var/lib/kubelet/pods/0ccb69ed-7f2d-4341-a474-90ac1ed5a0fd/volumes" Jan 05 20:38:46 crc kubenswrapper[4754]: I0105 20:38:46.099702 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-xbm2l"] Jan 05 20:38:46 crc kubenswrapper[4754]: I0105 20:38:46.140277 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-4c48n"] Jan 05 20:38:46 crc kubenswrapper[4754]: I0105 20:38:46.168072 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-4c48n"] Jan 05 20:38:46 crc kubenswrapper[4754]: I0105 20:38:46.184029 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-k46bt"] Jan 05 20:38:46 crc kubenswrapper[4754]: I0105 20:38:46.196614 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-xbm2l"] Jan 05 20:38:46 crc kubenswrapper[4754]: I0105 20:38:46.214689 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-k46bt"] Jan 05 20:38:47 crc kubenswrapper[4754]: 
I0105 20:38:47.618286 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32c0a4c6-d974-402e-bdf8-13c8b2e91b3f" path="/var/lib/kubelet/pods/32c0a4c6-d974-402e-bdf8-13c8b2e91b3f/volumes" Jan 05 20:38:47 crc kubenswrapper[4754]: I0105 20:38:47.621873 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de4b9353-df59-4a87-80f7-26a1d1637032" path="/var/lib/kubelet/pods/de4b9353-df59-4a87-80f7-26a1d1637032/volumes" Jan 05 20:38:47 crc kubenswrapper[4754]: I0105 20:38:47.624568 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9642fa4-a57d-443c-be01-f46706cc0368" path="/var/lib/kubelet/pods/e9642fa4-a57d-443c-be01-f46706cc0368/volumes" Jan 05 20:38:48 crc kubenswrapper[4754]: I0105 20:38:48.108701 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 20:38:48 crc kubenswrapper[4754]: I0105 20:38:48.108966 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 20:39:03 crc kubenswrapper[4754]: I0105 20:39:03.053127 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-p84n5"] Jan 05 20:39:03 crc kubenswrapper[4754]: I0105 20:39:03.066442 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-p84n5"] Jan 05 20:39:03 crc kubenswrapper[4754]: I0105 20:39:03.608977 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c19555bb-f08e-4ff7-a6d3-26615858d3f3" 
path="/var/lib/kubelet/pods/c19555bb-f08e-4ff7-a6d3-26615858d3f3/volumes" Jan 05 20:39:08 crc kubenswrapper[4754]: I0105 20:39:08.087771 4754 scope.go:117] "RemoveContainer" containerID="dd6c6f551876be27433fe4319c7ec6af0c33eded6d380725912104ff1fc0d498" Jan 05 20:39:08 crc kubenswrapper[4754]: I0105 20:39:08.120756 4754 scope.go:117] "RemoveContainer" containerID="d38b6e4e623d207f39e0ad135bfb3e2e049ab9fa794850c982d0edbcbf0c8543" Jan 05 20:39:08 crc kubenswrapper[4754]: I0105 20:39:08.204486 4754 scope.go:117] "RemoveContainer" containerID="9063940e8012108d9397da061fcc85cfc29a032e1410b1101e932fa332d143ff" Jan 05 20:39:08 crc kubenswrapper[4754]: I0105 20:39:08.261046 4754 scope.go:117] "RemoveContainer" containerID="eac04497aeb8509a8ef80fca61e612c081b39a02cfa9ef48f3b2e6cbe94bcae4" Jan 05 20:39:08 crc kubenswrapper[4754]: I0105 20:39:08.314566 4754 scope.go:117] "RemoveContainer" containerID="51ccfeb802b35dbc949d2d9d994ce5e07e4400bfcd8904b5a9a5bd5a712601d2" Jan 05 20:39:18 crc kubenswrapper[4754]: I0105 20:39:18.109649 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 20:39:18 crc kubenswrapper[4754]: I0105 20:39:18.110430 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 20:39:18 crc kubenswrapper[4754]: I0105 20:39:18.110512 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" Jan 05 20:39:18 crc kubenswrapper[4754]: I0105 20:39:18.111777 4754 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e40575342b315983c6e6787d98aec3d2cd7b920f88c62fc6f121b384bc5e6f28"} pod="openshift-machine-config-operator/machine-config-daemon-pkzls" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 05 20:39:18 crc kubenswrapper[4754]: I0105 20:39:18.111907 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" containerID="cri-o://e40575342b315983c6e6787d98aec3d2cd7b920f88c62fc6f121b384bc5e6f28" gracePeriod=600 Jan 05 20:39:18 crc kubenswrapper[4754]: I0105 20:39:18.315441 4754 generic.go:334] "Generic (PLEG): container finished" podID="1ce145f2-f010-4086-963c-23e68ff9e280" containerID="e40575342b315983c6e6787d98aec3d2cd7b920f88c62fc6f121b384bc5e6f28" exitCode=0 Jan 05 20:39:18 crc kubenswrapper[4754]: I0105 20:39:18.315490 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" event={"ID":"1ce145f2-f010-4086-963c-23e68ff9e280","Type":"ContainerDied","Data":"e40575342b315983c6e6787d98aec3d2cd7b920f88c62fc6f121b384bc5e6f28"} Jan 05 20:39:18 crc kubenswrapper[4754]: I0105 20:39:18.315528 4754 scope.go:117] "RemoveContainer" containerID="1ca93b3e456962380f07853ce3cbf0c6bfbaabb69bd3827ae3e68631c1bd1572" Jan 05 20:39:19 crc kubenswrapper[4754]: I0105 20:39:19.344150 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" event={"ID":"1ce145f2-f010-4086-963c-23e68ff9e280","Type":"ContainerStarted","Data":"bf00dfd553c16afd4905db93af460a6a83ec719b48885233b6e6607dece47e05"} Jan 05 20:39:45 crc kubenswrapper[4754]: I0105 20:39:45.067783 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-api-db-create-gfzxd"] Jan 05 20:39:45 crc kubenswrapper[4754]: I0105 20:39:45.078045 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-gfzxd"] Jan 05 20:39:45 crc kubenswrapper[4754]: I0105 20:39:45.616661 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53d3e036-3e71-4b72-ae09-2d84f5b12b6d" path="/var/lib/kubelet/pods/53d3e036-3e71-4b72-ae09-2d84f5b12b6d/volumes" Jan 05 20:39:47 crc kubenswrapper[4754]: I0105 20:39:47.030756 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-c9ef-account-create-update-zq77j"] Jan 05 20:39:47 crc kubenswrapper[4754]: I0105 20:39:47.041939 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-c9ef-account-create-update-zq77j"] Jan 05 20:39:47 crc kubenswrapper[4754]: I0105 20:39:47.601747 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="773cb53b-3bb1-4b11-bb3d-c11c352fcdfb" path="/var/lib/kubelet/pods/773cb53b-3bb1-4b11-bb3d-c11c352fcdfb/volumes" Jan 05 20:39:50 crc kubenswrapper[4754]: I0105 20:39:50.040353 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-fe16-account-create-update-k6rtd"] Jan 05 20:39:50 crc kubenswrapper[4754]: I0105 20:39:50.057480 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-fe16-account-create-update-k6rtd"] Jan 05 20:39:51 crc kubenswrapper[4754]: I0105 20:39:51.607782 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b26ee17-b21e-4bb3-8570-1de5435a6ea5" path="/var/lib/kubelet/pods/9b26ee17-b21e-4bb3-8570-1de5435a6ea5/volumes" Jan 05 20:39:52 crc kubenswrapper[4754]: I0105 20:39:52.050719 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-h42n5"] Jan 05 20:39:52 crc kubenswrapper[4754]: I0105 20:39:52.071129 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-kxzd4"] Jan 
05 20:39:52 crc kubenswrapper[4754]: I0105 20:39:52.088588 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-9214-account-create-update-qwl6t"] Jan 05 20:39:52 crc kubenswrapper[4754]: I0105 20:39:52.104612 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-kxzd4"] Jan 05 20:39:52 crc kubenswrapper[4754]: I0105 20:39:52.116508 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-h42n5"] Jan 05 20:39:52 crc kubenswrapper[4754]: I0105 20:39:52.128127 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-9214-account-create-update-qwl6t"] Jan 05 20:39:53 crc kubenswrapper[4754]: I0105 20:39:53.623194 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="788477a1-9462-462e-ac96-5c6c5659437f" path="/var/lib/kubelet/pods/788477a1-9462-462e-ac96-5c6c5659437f/volumes" Jan 05 20:39:53 crc kubenswrapper[4754]: I0105 20:39:53.636283 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b1a8ee9-a244-4b25-acf5-4ab0de607c0d" path="/var/lib/kubelet/pods/8b1a8ee9-a244-4b25-acf5-4ab0de607c0d/volumes" Jan 05 20:39:53 crc kubenswrapper[4754]: I0105 20:39:53.637047 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c40bae7-8986-432e-8050-cec73db2bfdd" path="/var/lib/kubelet/pods/8c40bae7-8986-432e-8050-cec73db2bfdd/volumes" Jan 05 20:40:01 crc kubenswrapper[4754]: I0105 20:40:01.871040 4754 generic.go:334] "Generic (PLEG): container finished" podID="74141d35-f30c-469c-a926-7a5274ca536d" containerID="0644ae7403d183fad7a8b685e59cc9db6e98453f033fa5cb127f3c7c88accc80" exitCode=0 Jan 05 20:40:01 crc kubenswrapper[4754]: I0105 20:40:01.871111 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ld4k7" 
event={"ID":"74141d35-f30c-469c-a926-7a5274ca536d","Type":"ContainerDied","Data":"0644ae7403d183fad7a8b685e59cc9db6e98453f033fa5cb127f3c7c88accc80"} Jan 05 20:40:03 crc kubenswrapper[4754]: I0105 20:40:03.407156 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ld4k7" Jan 05 20:40:03 crc kubenswrapper[4754]: I0105 20:40:03.492241 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/74141d35-f30c-469c-a926-7a5274ca536d-inventory\") pod \"74141d35-f30c-469c-a926-7a5274ca536d\" (UID: \"74141d35-f30c-469c-a926-7a5274ca536d\") " Jan 05 20:40:03 crc kubenswrapper[4754]: I0105 20:40:03.492492 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcf2b\" (UniqueName: \"kubernetes.io/projected/74141d35-f30c-469c-a926-7a5274ca536d-kube-api-access-kcf2b\") pod \"74141d35-f30c-469c-a926-7a5274ca536d\" (UID: \"74141d35-f30c-469c-a926-7a5274ca536d\") " Jan 05 20:40:03 crc kubenswrapper[4754]: I0105 20:40:03.492537 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/74141d35-f30c-469c-a926-7a5274ca536d-ssh-key\") pod \"74141d35-f30c-469c-a926-7a5274ca536d\" (UID: \"74141d35-f30c-469c-a926-7a5274ca536d\") " Jan 05 20:40:03 crc kubenswrapper[4754]: I0105 20:40:03.511774 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74141d35-f30c-469c-a926-7a5274ca536d-kube-api-access-kcf2b" (OuterVolumeSpecName: "kube-api-access-kcf2b") pod "74141d35-f30c-469c-a926-7a5274ca536d" (UID: "74141d35-f30c-469c-a926-7a5274ca536d"). InnerVolumeSpecName "kube-api-access-kcf2b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:40:03 crc kubenswrapper[4754]: I0105 20:40:03.531636 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74141d35-f30c-469c-a926-7a5274ca536d-inventory" (OuterVolumeSpecName: "inventory") pod "74141d35-f30c-469c-a926-7a5274ca536d" (UID: "74141d35-f30c-469c-a926-7a5274ca536d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:40:03 crc kubenswrapper[4754]: I0105 20:40:03.531812 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74141d35-f30c-469c-a926-7a5274ca536d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "74141d35-f30c-469c-a926-7a5274ca536d" (UID: "74141d35-f30c-469c-a926-7a5274ca536d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:40:03 crc kubenswrapper[4754]: I0105 20:40:03.595978 4754 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/74141d35-f30c-469c-a926-7a5274ca536d-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 05 20:40:03 crc kubenswrapper[4754]: I0105 20:40:03.596013 4754 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/74141d35-f30c-469c-a926-7a5274ca536d-inventory\") on node \"crc\" DevicePath \"\"" Jan 05 20:40:03 crc kubenswrapper[4754]: I0105 20:40:03.596027 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcf2b\" (UniqueName: \"kubernetes.io/projected/74141d35-f30c-469c-a926-7a5274ca536d-kube-api-access-kcf2b\") on node \"crc\" DevicePath \"\"" Jan 05 20:40:03 crc kubenswrapper[4754]: I0105 20:40:03.892147 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ld4k7" 
event={"ID":"74141d35-f30c-469c-a926-7a5274ca536d","Type":"ContainerDied","Data":"8e6e29a24068d0873a5c54a9fc248351d04cc473c8475a109aea2c4b2bcdc14a"} Jan 05 20:40:03 crc kubenswrapper[4754]: I0105 20:40:03.892504 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e6e29a24068d0873a5c54a9fc248351d04cc473c8475a109aea2c4b2bcdc14a" Jan 05 20:40:03 crc kubenswrapper[4754]: I0105 20:40:03.892226 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ld4k7" Jan 05 20:40:04 crc kubenswrapper[4754]: I0105 20:40:04.004890 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ngbz6"] Jan 05 20:40:04 crc kubenswrapper[4754]: E0105 20:40:04.005955 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84ee9e9f-cf0e-418f-984a-336b8f441f04" containerName="registry-server" Jan 05 20:40:04 crc kubenswrapper[4754]: I0105 20:40:04.006075 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="84ee9e9f-cf0e-418f-984a-336b8f441f04" containerName="registry-server" Jan 05 20:40:04 crc kubenswrapper[4754]: E0105 20:40:04.006176 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84ee9e9f-cf0e-418f-984a-336b8f441f04" containerName="extract-utilities" Jan 05 20:40:04 crc kubenswrapper[4754]: I0105 20:40:04.006255 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="84ee9e9f-cf0e-418f-984a-336b8f441f04" containerName="extract-utilities" Jan 05 20:40:04 crc kubenswrapper[4754]: E0105 20:40:04.006358 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84ee9e9f-cf0e-418f-984a-336b8f441f04" containerName="extract-content" Jan 05 20:40:04 crc kubenswrapper[4754]: I0105 20:40:04.006444 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="84ee9e9f-cf0e-418f-984a-336b8f441f04" containerName="extract-content" Jan 05 20:40:04 crc 
kubenswrapper[4754]: E0105 20:40:04.006534 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74141d35-f30c-469c-a926-7a5274ca536d" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 05 20:40:04 crc kubenswrapper[4754]: I0105 20:40:04.006634 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="74141d35-f30c-469c-a926-7a5274ca536d" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 05 20:40:04 crc kubenswrapper[4754]: I0105 20:40:04.007085 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="84ee9e9f-cf0e-418f-984a-336b8f441f04" containerName="registry-server" Jan 05 20:40:04 crc kubenswrapper[4754]: I0105 20:40:04.007221 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="74141d35-f30c-469c-a926-7a5274ca536d" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 05 20:40:04 crc kubenswrapper[4754]: I0105 20:40:04.008192 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ngbz6" Jan 05 20:40:04 crc kubenswrapper[4754]: I0105 20:40:04.010419 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 05 20:40:04 crc kubenswrapper[4754]: I0105 20:40:04.010726 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 05 20:40:04 crc kubenswrapper[4754]: I0105 20:40:04.010905 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 05 20:40:04 crc kubenswrapper[4754]: I0105 20:40:04.010928 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xdt6h" Jan 05 20:40:04 crc kubenswrapper[4754]: I0105 20:40:04.025336 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ngbz6"] Jan 05 20:40:04 crc kubenswrapper[4754]: I0105 20:40:04.127100 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3647d334-c168-474a-aabd-b8d4f6461466-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ngbz6\" (UID: \"3647d334-c168-474a-aabd-b8d4f6461466\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ngbz6" Jan 05 20:40:04 crc kubenswrapper[4754]: I0105 20:40:04.127220 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfz4j\" (UniqueName: \"kubernetes.io/projected/3647d334-c168-474a-aabd-b8d4f6461466-kube-api-access-kfz4j\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ngbz6\" (UID: \"3647d334-c168-474a-aabd-b8d4f6461466\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ngbz6" Jan 05 20:40:04 crc kubenswrapper[4754]: I0105 
20:40:04.127257 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3647d334-c168-474a-aabd-b8d4f6461466-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ngbz6\" (UID: \"3647d334-c168-474a-aabd-b8d4f6461466\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ngbz6" Jan 05 20:40:04 crc kubenswrapper[4754]: I0105 20:40:04.229709 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfz4j\" (UniqueName: \"kubernetes.io/projected/3647d334-c168-474a-aabd-b8d4f6461466-kube-api-access-kfz4j\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ngbz6\" (UID: \"3647d334-c168-474a-aabd-b8d4f6461466\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ngbz6" Jan 05 20:40:04 crc kubenswrapper[4754]: I0105 20:40:04.229755 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3647d334-c168-474a-aabd-b8d4f6461466-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ngbz6\" (UID: \"3647d334-c168-474a-aabd-b8d4f6461466\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ngbz6" Jan 05 20:40:04 crc kubenswrapper[4754]: I0105 20:40:04.229924 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3647d334-c168-474a-aabd-b8d4f6461466-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ngbz6\" (UID: \"3647d334-c168-474a-aabd-b8d4f6461466\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ngbz6" Jan 05 20:40:04 crc kubenswrapper[4754]: I0105 20:40:04.240999 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3647d334-c168-474a-aabd-b8d4f6461466-inventory\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-ngbz6\" (UID: \"3647d334-c168-474a-aabd-b8d4f6461466\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ngbz6" Jan 05 20:40:04 crc kubenswrapper[4754]: I0105 20:40:04.241002 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3647d334-c168-474a-aabd-b8d4f6461466-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ngbz6\" (UID: \"3647d334-c168-474a-aabd-b8d4f6461466\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ngbz6" Jan 05 20:40:04 crc kubenswrapper[4754]: I0105 20:40:04.248059 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfz4j\" (UniqueName: \"kubernetes.io/projected/3647d334-c168-474a-aabd-b8d4f6461466-kube-api-access-kfz4j\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ngbz6\" (UID: \"3647d334-c168-474a-aabd-b8d4f6461466\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ngbz6" Jan 05 20:40:04 crc kubenswrapper[4754]: I0105 20:40:04.328588 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ngbz6" Jan 05 20:40:04 crc kubenswrapper[4754]: I0105 20:40:04.934695 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ngbz6"] Jan 05 20:40:05 crc kubenswrapper[4754]: I0105 20:40:05.916623 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ngbz6" event={"ID":"3647d334-c168-474a-aabd-b8d4f6461466","Type":"ContainerStarted","Data":"006156faa37adf784193bebd33b59f016c52f789f0a6a68082be0c54295841e2"} Jan 05 20:40:05 crc kubenswrapper[4754]: I0105 20:40:05.916905 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ngbz6" event={"ID":"3647d334-c168-474a-aabd-b8d4f6461466","Type":"ContainerStarted","Data":"9c46ad8addb69604fddc3e8b61616d2b6c3ee2367f3608626a833d05af75dad6"} Jan 05 20:40:05 crc kubenswrapper[4754]: I0105 20:40:05.940873 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ngbz6" podStartSLOduration=2.426735394 podStartE2EDuration="2.940857833s" podCreationTimestamp="2026-01-05 20:40:03 +0000 UTC" firstStartedPulling="2026-01-05 20:40:04.92994493 +0000 UTC m=+2091.639128804" lastFinishedPulling="2026-01-05 20:40:05.444067369 +0000 UTC m=+2092.153251243" observedRunningTime="2026-01-05 20:40:05.936234582 +0000 UTC m=+2092.645418486" watchObservedRunningTime="2026-01-05 20:40:05.940857833 +0000 UTC m=+2092.650041717" Jan 05 20:40:08 crc kubenswrapper[4754]: I0105 20:40:08.502751 4754 scope.go:117] "RemoveContainer" containerID="2f0c7cf0b8e9866f348e874feb37c9cd18d26a7e639f39ac3132ab6d47030e08" Jan 05 20:40:08 crc kubenswrapper[4754]: I0105 20:40:08.535869 4754 scope.go:117] "RemoveContainer" 
containerID="37d7cf033c363ff59fbfcfe09bf44d181103a14cee337cb684e0d9dfd0fd7253" Jan 05 20:40:08 crc kubenswrapper[4754]: I0105 20:40:08.612662 4754 scope.go:117] "RemoveContainer" containerID="6bd610def9df2801ebca785fff696a92be9ede2f222011869f053647e34a32e8" Jan 05 20:40:08 crc kubenswrapper[4754]: I0105 20:40:08.669902 4754 scope.go:117] "RemoveContainer" containerID="7fddd300ab96ea5856577fb2f60617fa4f83f23e53ce97f9b86e8e18703eb154" Jan 05 20:40:08 crc kubenswrapper[4754]: I0105 20:40:08.743515 4754 scope.go:117] "RemoveContainer" containerID="bc6c354af6970b2e09f9c0388dad3fd50f65999c5cdd9e4ef42ede3e87b8fdb2" Jan 05 20:40:08 crc kubenswrapper[4754]: I0105 20:40:08.807521 4754 scope.go:117] "RemoveContainer" containerID="fcfebfe7efc7db30ceebb0593d1a194b213ec8ffefb2bb05999f6d5c5a9d5f60" Jan 05 20:40:34 crc kubenswrapper[4754]: I0105 20:40:34.070062 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gwk7q"] Jan 05 20:40:34 crc kubenswrapper[4754]: I0105 20:40:34.107764 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gwk7q"] Jan 05 20:40:35 crc kubenswrapper[4754]: I0105 20:40:35.626579 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1192f0c3-2df8-44ea-a767-937f965b46f3" path="/var/lib/kubelet/pods/1192f0c3-2df8-44ea-a767-937f965b46f3/volumes" Jan 05 20:40:57 crc kubenswrapper[4754]: I0105 20:40:57.065749 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-6fb0-account-create-update-9z9ll"] Jan 05 20:40:57 crc kubenswrapper[4754]: I0105 20:40:57.094081 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-6fb0-account-create-update-9z9ll"] Jan 05 20:40:57 crc kubenswrapper[4754]: I0105 20:40:57.610497 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95650db6-2a7d-4c11-985e-4eae13a8cbaa" path="/var/lib/kubelet/pods/95650db6-2a7d-4c11-985e-4eae13a8cbaa/volumes" Jan 05 20:40:58 
crc kubenswrapper[4754]: I0105 20:40:58.041181 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-t2948"] Jan 05 20:40:58 crc kubenswrapper[4754]: I0105 20:40:58.055387 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-t2948"] Jan 05 20:40:58 crc kubenswrapper[4754]: I0105 20:40:58.716541 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bpmq8"] Jan 05 20:40:58 crc kubenswrapper[4754]: I0105 20:40:58.725613 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bpmq8" Jan 05 20:40:58 crc kubenswrapper[4754]: I0105 20:40:58.742400 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bpmq8"] Jan 05 20:40:58 crc kubenswrapper[4754]: I0105 20:40:58.806062 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9627687-9ef8-4195-b831-904880820edb-utilities\") pod \"community-operators-bpmq8\" (UID: \"c9627687-9ef8-4195-b831-904880820edb\") " pod="openshift-marketplace/community-operators-bpmq8" Jan 05 20:40:58 crc kubenswrapper[4754]: I0105 20:40:58.806123 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9rvf\" (UniqueName: \"kubernetes.io/projected/c9627687-9ef8-4195-b831-904880820edb-kube-api-access-t9rvf\") pod \"community-operators-bpmq8\" (UID: \"c9627687-9ef8-4195-b831-904880820edb\") " pod="openshift-marketplace/community-operators-bpmq8" Jan 05 20:40:58 crc kubenswrapper[4754]: I0105 20:40:58.806218 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9627687-9ef8-4195-b831-904880820edb-catalog-content\") pod \"community-operators-bpmq8\" (UID: 
\"c9627687-9ef8-4195-b831-904880820edb\") " pod="openshift-marketplace/community-operators-bpmq8" Jan 05 20:40:58 crc kubenswrapper[4754]: I0105 20:40:58.908566 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9627687-9ef8-4195-b831-904880820edb-catalog-content\") pod \"community-operators-bpmq8\" (UID: \"c9627687-9ef8-4195-b831-904880820edb\") " pod="openshift-marketplace/community-operators-bpmq8" Jan 05 20:40:58 crc kubenswrapper[4754]: I0105 20:40:58.908716 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9627687-9ef8-4195-b831-904880820edb-utilities\") pod \"community-operators-bpmq8\" (UID: \"c9627687-9ef8-4195-b831-904880820edb\") " pod="openshift-marketplace/community-operators-bpmq8" Jan 05 20:40:58 crc kubenswrapper[4754]: I0105 20:40:58.908757 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9rvf\" (UniqueName: \"kubernetes.io/projected/c9627687-9ef8-4195-b831-904880820edb-kube-api-access-t9rvf\") pod \"community-operators-bpmq8\" (UID: \"c9627687-9ef8-4195-b831-904880820edb\") " pod="openshift-marketplace/community-operators-bpmq8" Jan 05 20:40:58 crc kubenswrapper[4754]: I0105 20:40:58.909382 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9627687-9ef8-4195-b831-904880820edb-utilities\") pod \"community-operators-bpmq8\" (UID: \"c9627687-9ef8-4195-b831-904880820edb\") " pod="openshift-marketplace/community-operators-bpmq8" Jan 05 20:40:58 crc kubenswrapper[4754]: I0105 20:40:58.909748 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9627687-9ef8-4195-b831-904880820edb-catalog-content\") pod \"community-operators-bpmq8\" (UID: \"c9627687-9ef8-4195-b831-904880820edb\") 
" pod="openshift-marketplace/community-operators-bpmq8" Jan 05 20:40:58 crc kubenswrapper[4754]: I0105 20:40:58.930265 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9rvf\" (UniqueName: \"kubernetes.io/projected/c9627687-9ef8-4195-b831-904880820edb-kube-api-access-t9rvf\") pod \"community-operators-bpmq8\" (UID: \"c9627687-9ef8-4195-b831-904880820edb\") " pod="openshift-marketplace/community-operators-bpmq8" Jan 05 20:40:59 crc kubenswrapper[4754]: I0105 20:40:59.063753 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bpmq8" Jan 05 20:40:59 crc kubenswrapper[4754]: I0105 20:40:59.601201 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2863d98c-728e-45a8-8137-50decec5ac8b" path="/var/lib/kubelet/pods/2863d98c-728e-45a8-8137-50decec5ac8b/volumes" Jan 05 20:40:59 crc kubenswrapper[4754]: I0105 20:40:59.672330 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bpmq8"] Jan 05 20:40:59 crc kubenswrapper[4754]: I0105 20:40:59.688314 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpmq8" event={"ID":"c9627687-9ef8-4195-b831-904880820edb","Type":"ContainerStarted","Data":"a8fcde696f94b3ff87edf708244dcd078504e2dccffe92746f46a8134c98de4e"} Jan 05 20:41:00 crc kubenswrapper[4754]: I0105 20:41:00.701721 4754 generic.go:334] "Generic (PLEG): container finished" podID="c9627687-9ef8-4195-b831-904880820edb" containerID="396d38d067ab06583117a068e79448f7e924ac3b3ea55e6fc092d57606e96433" exitCode=0 Jan 05 20:41:00 crc kubenswrapper[4754]: I0105 20:41:00.701907 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpmq8" event={"ID":"c9627687-9ef8-4195-b831-904880820edb","Type":"ContainerDied","Data":"396d38d067ab06583117a068e79448f7e924ac3b3ea55e6fc092d57606e96433"} Jan 05 20:41:01 crc 
kubenswrapper[4754]: I0105 20:41:01.063877 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-mgrzb"] Jan 05 20:41:01 crc kubenswrapper[4754]: I0105 20:41:01.076090 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-mgrzb"] Jan 05 20:41:01 crc kubenswrapper[4754]: I0105 20:41:01.601437 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89edcd71-f0ac-4bf6-a1cf-9aac0d041906" path="/var/lib/kubelet/pods/89edcd71-f0ac-4bf6-a1cf-9aac0d041906/volumes" Jan 05 20:41:02 crc kubenswrapper[4754]: I0105 20:41:02.730589 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpmq8" event={"ID":"c9627687-9ef8-4195-b831-904880820edb","Type":"ContainerStarted","Data":"ff7271baee42a1ad773b08d9da3e3b3279c3c8d09229ee9f1bcb5acb4faeb410"} Jan 05 20:41:03 crc kubenswrapper[4754]: I0105 20:41:03.038171 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-hd9mq"] Jan 05 20:41:03 crc kubenswrapper[4754]: I0105 20:41:03.080863 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-hd9mq"] Jan 05 20:41:03 crc kubenswrapper[4754]: I0105 20:41:03.612178 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4556aa44-d7b9-4f17-82e1-7c08aee8d75c" path="/var/lib/kubelet/pods/4556aa44-d7b9-4f17-82e1-7c08aee8d75c/volumes" Jan 05 20:41:03 crc kubenswrapper[4754]: I0105 20:41:03.742954 4754 generic.go:334] "Generic (PLEG): container finished" podID="c9627687-9ef8-4195-b831-904880820edb" containerID="ff7271baee42a1ad773b08d9da3e3b3279c3c8d09229ee9f1bcb5acb4faeb410" exitCode=0 Jan 05 20:41:03 crc kubenswrapper[4754]: I0105 20:41:03.742994 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpmq8" 
event={"ID":"c9627687-9ef8-4195-b831-904880820edb","Type":"ContainerDied","Data":"ff7271baee42a1ad773b08d9da3e3b3279c3c8d09229ee9f1bcb5acb4faeb410"} Jan 05 20:41:04 crc kubenswrapper[4754]: I0105 20:41:04.757179 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpmq8" event={"ID":"c9627687-9ef8-4195-b831-904880820edb","Type":"ContainerStarted","Data":"2164cddc61d19133085c4dd909d080ff4a53ac49417eee8dd276e52447221d9a"} Jan 05 20:41:04 crc kubenswrapper[4754]: I0105 20:41:04.781525 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bpmq8" podStartSLOduration=3.277498573 podStartE2EDuration="6.781504118s" podCreationTimestamp="2026-01-05 20:40:58 +0000 UTC" firstStartedPulling="2026-01-05 20:41:00.704108441 +0000 UTC m=+2147.413292335" lastFinishedPulling="2026-01-05 20:41:04.208113996 +0000 UTC m=+2150.917297880" observedRunningTime="2026-01-05 20:41:04.78003189 +0000 UTC m=+2151.489215804" watchObservedRunningTime="2026-01-05 20:41:04.781504118 +0000 UTC m=+2151.490688002" Jan 05 20:41:08 crc kubenswrapper[4754]: I0105 20:41:08.943527 4754 scope.go:117] "RemoveContainer" containerID="e774da7d7cac749a9db5c36d80b96ed17a763a8adc64e09d25e8a5748c390674" Jan 05 20:41:08 crc kubenswrapper[4754]: I0105 20:41:08.968389 4754 scope.go:117] "RemoveContainer" containerID="216da0363fa04b0f41f940c392bc4fa5b8d76cdd41269d5c17b808738e5e0485" Jan 05 20:41:09 crc kubenswrapper[4754]: I0105 20:41:09.052130 4754 scope.go:117] "RemoveContainer" containerID="fa7b9e71ab44f9ef2670c810cbb2d61ee7e3a5775dbb56a3d6dc4a81ebdd8c62" Jan 05 20:41:09 crc kubenswrapper[4754]: I0105 20:41:09.064934 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bpmq8" Jan 05 20:41:09 crc kubenswrapper[4754]: I0105 20:41:09.064974 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-bpmq8" Jan 05 20:41:09 crc kubenswrapper[4754]: I0105 20:41:09.106967 4754 scope.go:117] "RemoveContainer" containerID="24c367b8e96f1d402bc3faae607541f034766953f4afd27b72c9e6c7b7b9256a" Jan 05 20:41:09 crc kubenswrapper[4754]: I0105 20:41:09.124285 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bpmq8" Jan 05 20:41:09 crc kubenswrapper[4754]: I0105 20:41:09.159909 4754 scope.go:117] "RemoveContainer" containerID="61f5a488850b73c362d6b88ed4ee7d3f254b08dc53bfea5806ebd9df9fdebb1b" Jan 05 20:41:09 crc kubenswrapper[4754]: I0105 20:41:09.901559 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bpmq8" Jan 05 20:41:09 crc kubenswrapper[4754]: I0105 20:41:09.973919 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bpmq8"] Jan 05 20:41:11 crc kubenswrapper[4754]: I0105 20:41:11.861069 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bpmq8" podUID="c9627687-9ef8-4195-b831-904880820edb" containerName="registry-server" containerID="cri-o://2164cddc61d19133085c4dd909d080ff4a53ac49417eee8dd276e52447221d9a" gracePeriod=2 Jan 05 20:41:12 crc kubenswrapper[4754]: I0105 20:41:12.491125 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bpmq8" Jan 05 20:41:12 crc kubenswrapper[4754]: I0105 20:41:12.588502 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9rvf\" (UniqueName: \"kubernetes.io/projected/c9627687-9ef8-4195-b831-904880820edb-kube-api-access-t9rvf\") pod \"c9627687-9ef8-4195-b831-904880820edb\" (UID: \"c9627687-9ef8-4195-b831-904880820edb\") " Jan 05 20:41:12 crc kubenswrapper[4754]: I0105 20:41:12.588557 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9627687-9ef8-4195-b831-904880820edb-catalog-content\") pod \"c9627687-9ef8-4195-b831-904880820edb\" (UID: \"c9627687-9ef8-4195-b831-904880820edb\") " Jan 05 20:41:12 crc kubenswrapper[4754]: I0105 20:41:12.588757 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9627687-9ef8-4195-b831-904880820edb-utilities\") pod \"c9627687-9ef8-4195-b831-904880820edb\" (UID: \"c9627687-9ef8-4195-b831-904880820edb\") " Jan 05 20:41:12 crc kubenswrapper[4754]: I0105 20:41:12.589985 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9627687-9ef8-4195-b831-904880820edb-utilities" (OuterVolumeSpecName: "utilities") pod "c9627687-9ef8-4195-b831-904880820edb" (UID: "c9627687-9ef8-4195-b831-904880820edb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:41:12 crc kubenswrapper[4754]: I0105 20:41:12.594443 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9627687-9ef8-4195-b831-904880820edb-kube-api-access-t9rvf" (OuterVolumeSpecName: "kube-api-access-t9rvf") pod "c9627687-9ef8-4195-b831-904880820edb" (UID: "c9627687-9ef8-4195-b831-904880820edb"). InnerVolumeSpecName "kube-api-access-t9rvf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:41:12 crc kubenswrapper[4754]: I0105 20:41:12.642556 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9627687-9ef8-4195-b831-904880820edb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c9627687-9ef8-4195-b831-904880820edb" (UID: "c9627687-9ef8-4195-b831-904880820edb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:41:12 crc kubenswrapper[4754]: I0105 20:41:12.692056 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9rvf\" (UniqueName: \"kubernetes.io/projected/c9627687-9ef8-4195-b831-904880820edb-kube-api-access-t9rvf\") on node \"crc\" DevicePath \"\"" Jan 05 20:41:12 crc kubenswrapper[4754]: I0105 20:41:12.692095 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9627687-9ef8-4195-b831-904880820edb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 20:41:12 crc kubenswrapper[4754]: I0105 20:41:12.692113 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9627687-9ef8-4195-b831-904880820edb-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 20:41:12 crc kubenswrapper[4754]: I0105 20:41:12.874736 4754 generic.go:334] "Generic (PLEG): container finished" podID="c9627687-9ef8-4195-b831-904880820edb" containerID="2164cddc61d19133085c4dd909d080ff4a53ac49417eee8dd276e52447221d9a" exitCode=0 Jan 05 20:41:12 crc kubenswrapper[4754]: I0105 20:41:12.874780 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpmq8" event={"ID":"c9627687-9ef8-4195-b831-904880820edb","Type":"ContainerDied","Data":"2164cddc61d19133085c4dd909d080ff4a53ac49417eee8dd276e52447221d9a"} Jan 05 20:41:12 crc kubenswrapper[4754]: I0105 20:41:12.874823 4754 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-bpmq8" Jan 05 20:41:12 crc kubenswrapper[4754]: I0105 20:41:12.874826 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpmq8" event={"ID":"c9627687-9ef8-4195-b831-904880820edb","Type":"ContainerDied","Data":"a8fcde696f94b3ff87edf708244dcd078504e2dccffe92746f46a8134c98de4e"} Jan 05 20:41:12 crc kubenswrapper[4754]: I0105 20:41:12.874839 4754 scope.go:117] "RemoveContainer" containerID="2164cddc61d19133085c4dd909d080ff4a53ac49417eee8dd276e52447221d9a" Jan 05 20:41:12 crc kubenswrapper[4754]: I0105 20:41:12.901060 4754 scope.go:117] "RemoveContainer" containerID="ff7271baee42a1ad773b08d9da3e3b3279c3c8d09229ee9f1bcb5acb4faeb410" Jan 05 20:41:12 crc kubenswrapper[4754]: I0105 20:41:12.928348 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bpmq8"] Jan 05 20:41:12 crc kubenswrapper[4754]: I0105 20:41:12.940685 4754 scope.go:117] "RemoveContainer" containerID="396d38d067ab06583117a068e79448f7e924ac3b3ea55e6fc092d57606e96433" Jan 05 20:41:12 crc kubenswrapper[4754]: I0105 20:41:12.940819 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bpmq8"] Jan 05 20:41:13 crc kubenswrapper[4754]: I0105 20:41:13.008794 4754 scope.go:117] "RemoveContainer" containerID="2164cddc61d19133085c4dd909d080ff4a53ac49417eee8dd276e52447221d9a" Jan 05 20:41:13 crc kubenswrapper[4754]: E0105 20:41:13.009342 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2164cddc61d19133085c4dd909d080ff4a53ac49417eee8dd276e52447221d9a\": container with ID starting with 2164cddc61d19133085c4dd909d080ff4a53ac49417eee8dd276e52447221d9a not found: ID does not exist" containerID="2164cddc61d19133085c4dd909d080ff4a53ac49417eee8dd276e52447221d9a" Jan 05 20:41:13 crc kubenswrapper[4754]: I0105 20:41:13.009416 
4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2164cddc61d19133085c4dd909d080ff4a53ac49417eee8dd276e52447221d9a"} err="failed to get container status \"2164cddc61d19133085c4dd909d080ff4a53ac49417eee8dd276e52447221d9a\": rpc error: code = NotFound desc = could not find container \"2164cddc61d19133085c4dd909d080ff4a53ac49417eee8dd276e52447221d9a\": container with ID starting with 2164cddc61d19133085c4dd909d080ff4a53ac49417eee8dd276e52447221d9a not found: ID does not exist" Jan 05 20:41:13 crc kubenswrapper[4754]: I0105 20:41:13.009464 4754 scope.go:117] "RemoveContainer" containerID="ff7271baee42a1ad773b08d9da3e3b3279c3c8d09229ee9f1bcb5acb4faeb410" Jan 05 20:41:13 crc kubenswrapper[4754]: E0105 20:41:13.010225 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff7271baee42a1ad773b08d9da3e3b3279c3c8d09229ee9f1bcb5acb4faeb410\": container with ID starting with ff7271baee42a1ad773b08d9da3e3b3279c3c8d09229ee9f1bcb5acb4faeb410 not found: ID does not exist" containerID="ff7271baee42a1ad773b08d9da3e3b3279c3c8d09229ee9f1bcb5acb4faeb410" Jan 05 20:41:13 crc kubenswrapper[4754]: I0105 20:41:13.010288 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff7271baee42a1ad773b08d9da3e3b3279c3c8d09229ee9f1bcb5acb4faeb410"} err="failed to get container status \"ff7271baee42a1ad773b08d9da3e3b3279c3c8d09229ee9f1bcb5acb4faeb410\": rpc error: code = NotFound desc = could not find container \"ff7271baee42a1ad773b08d9da3e3b3279c3c8d09229ee9f1bcb5acb4faeb410\": container with ID starting with ff7271baee42a1ad773b08d9da3e3b3279c3c8d09229ee9f1bcb5acb4faeb410 not found: ID does not exist" Jan 05 20:41:13 crc kubenswrapper[4754]: I0105 20:41:13.010375 4754 scope.go:117] "RemoveContainer" containerID="396d38d067ab06583117a068e79448f7e924ac3b3ea55e6fc092d57606e96433" Jan 05 20:41:13 crc kubenswrapper[4754]: E0105 
20:41:13.010674 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"396d38d067ab06583117a068e79448f7e924ac3b3ea55e6fc092d57606e96433\": container with ID starting with 396d38d067ab06583117a068e79448f7e924ac3b3ea55e6fc092d57606e96433 not found: ID does not exist" containerID="396d38d067ab06583117a068e79448f7e924ac3b3ea55e6fc092d57606e96433" Jan 05 20:41:13 crc kubenswrapper[4754]: I0105 20:41:13.010716 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"396d38d067ab06583117a068e79448f7e924ac3b3ea55e6fc092d57606e96433"} err="failed to get container status \"396d38d067ab06583117a068e79448f7e924ac3b3ea55e6fc092d57606e96433\": rpc error: code = NotFound desc = could not find container \"396d38d067ab06583117a068e79448f7e924ac3b3ea55e6fc092d57606e96433\": container with ID starting with 396d38d067ab06583117a068e79448f7e924ac3b3ea55e6fc092d57606e96433 not found: ID does not exist" Jan 05 20:41:13 crc kubenswrapper[4754]: I0105 20:41:13.621584 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9627687-9ef8-4195-b831-904880820edb" path="/var/lib/kubelet/pods/c9627687-9ef8-4195-b831-904880820edb/volumes" Jan 05 20:41:18 crc kubenswrapper[4754]: I0105 20:41:18.109195 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 20:41:18 crc kubenswrapper[4754]: I0105 20:41:18.109803 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 05 20:41:27 crc kubenswrapper[4754]: I0105 20:41:27.073271 4754 generic.go:334] "Generic (PLEG): container finished" podID="3647d334-c168-474a-aabd-b8d4f6461466" containerID="006156faa37adf784193bebd33b59f016c52f789f0a6a68082be0c54295841e2" exitCode=0 Jan 05 20:41:27 crc kubenswrapper[4754]: I0105 20:41:27.073336 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ngbz6" event={"ID":"3647d334-c168-474a-aabd-b8d4f6461466","Type":"ContainerDied","Data":"006156faa37adf784193bebd33b59f016c52f789f0a6a68082be0c54295841e2"} Jan 05 20:41:28 crc kubenswrapper[4754]: I0105 20:41:28.688263 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ngbz6" Jan 05 20:41:28 crc kubenswrapper[4754]: I0105 20:41:28.766697 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3647d334-c168-474a-aabd-b8d4f6461466-ssh-key\") pod \"3647d334-c168-474a-aabd-b8d4f6461466\" (UID: \"3647d334-c168-474a-aabd-b8d4f6461466\") " Jan 05 20:41:28 crc kubenswrapper[4754]: I0105 20:41:28.766873 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfz4j\" (UniqueName: \"kubernetes.io/projected/3647d334-c168-474a-aabd-b8d4f6461466-kube-api-access-kfz4j\") pod \"3647d334-c168-474a-aabd-b8d4f6461466\" (UID: \"3647d334-c168-474a-aabd-b8d4f6461466\") " Jan 05 20:41:28 crc kubenswrapper[4754]: I0105 20:41:28.766970 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3647d334-c168-474a-aabd-b8d4f6461466-inventory\") pod \"3647d334-c168-474a-aabd-b8d4f6461466\" (UID: \"3647d334-c168-474a-aabd-b8d4f6461466\") " Jan 05 20:41:28 crc kubenswrapper[4754]: I0105 20:41:28.773604 4754 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/projected/3647d334-c168-474a-aabd-b8d4f6461466-kube-api-access-kfz4j" (OuterVolumeSpecName: "kube-api-access-kfz4j") pod "3647d334-c168-474a-aabd-b8d4f6461466" (UID: "3647d334-c168-474a-aabd-b8d4f6461466"). InnerVolumeSpecName "kube-api-access-kfz4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:41:28 crc kubenswrapper[4754]: I0105 20:41:28.805599 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3647d334-c168-474a-aabd-b8d4f6461466-inventory" (OuterVolumeSpecName: "inventory") pod "3647d334-c168-474a-aabd-b8d4f6461466" (UID: "3647d334-c168-474a-aabd-b8d4f6461466"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:41:28 crc kubenswrapper[4754]: I0105 20:41:28.823894 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3647d334-c168-474a-aabd-b8d4f6461466-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "3647d334-c168-474a-aabd-b8d4f6461466" (UID: "3647d334-c168-474a-aabd-b8d4f6461466"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:41:28 crc kubenswrapper[4754]: I0105 20:41:28.869409 4754 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3647d334-c168-474a-aabd-b8d4f6461466-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 05 20:41:28 crc kubenswrapper[4754]: I0105 20:41:28.869440 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfz4j\" (UniqueName: \"kubernetes.io/projected/3647d334-c168-474a-aabd-b8d4f6461466-kube-api-access-kfz4j\") on node \"crc\" DevicePath \"\"" Jan 05 20:41:28 crc kubenswrapper[4754]: I0105 20:41:28.869451 4754 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3647d334-c168-474a-aabd-b8d4f6461466-inventory\") on node \"crc\" DevicePath \"\"" Jan 05 20:41:29 crc kubenswrapper[4754]: I0105 20:41:29.098988 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ngbz6" event={"ID":"3647d334-c168-474a-aabd-b8d4f6461466","Type":"ContainerDied","Data":"9c46ad8addb69604fddc3e8b61616d2b6c3ee2367f3608626a833d05af75dad6"} Jan 05 20:41:29 crc kubenswrapper[4754]: I0105 20:41:29.099046 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c46ad8addb69604fddc3e8b61616d2b6c3ee2367f3608626a833d05af75dad6" Jan 05 20:41:29 crc kubenswrapper[4754]: I0105 20:41:29.099054 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ngbz6" Jan 05 20:41:29 crc kubenswrapper[4754]: I0105 20:41:29.219145 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5qmpw"] Jan 05 20:41:29 crc kubenswrapper[4754]: E0105 20:41:29.219778 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9627687-9ef8-4195-b831-904880820edb" containerName="extract-content" Jan 05 20:41:29 crc kubenswrapper[4754]: I0105 20:41:29.219799 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9627687-9ef8-4195-b831-904880820edb" containerName="extract-content" Jan 05 20:41:29 crc kubenswrapper[4754]: E0105 20:41:29.219814 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9627687-9ef8-4195-b831-904880820edb" containerName="extract-utilities" Jan 05 20:41:29 crc kubenswrapper[4754]: I0105 20:41:29.219821 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9627687-9ef8-4195-b831-904880820edb" containerName="extract-utilities" Jan 05 20:41:29 crc kubenswrapper[4754]: E0105 20:41:29.219856 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3647d334-c168-474a-aabd-b8d4f6461466" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 05 20:41:29 crc kubenswrapper[4754]: I0105 20:41:29.219866 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="3647d334-c168-474a-aabd-b8d4f6461466" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 05 20:41:29 crc kubenswrapper[4754]: E0105 20:41:29.219877 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9627687-9ef8-4195-b831-904880820edb" containerName="registry-server" Jan 05 20:41:29 crc kubenswrapper[4754]: I0105 20:41:29.219884 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9627687-9ef8-4195-b831-904880820edb" containerName="registry-server" Jan 05 20:41:29 crc 
kubenswrapper[4754]: I0105 20:41:29.220148 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9627687-9ef8-4195-b831-904880820edb" containerName="registry-server" Jan 05 20:41:29 crc kubenswrapper[4754]: I0105 20:41:29.220182 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="3647d334-c168-474a-aabd-b8d4f6461466" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 05 20:41:29 crc kubenswrapper[4754]: I0105 20:41:29.221097 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5qmpw" Jan 05 20:41:29 crc kubenswrapper[4754]: I0105 20:41:29.224864 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 05 20:41:29 crc kubenswrapper[4754]: I0105 20:41:29.225824 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xdt6h" Jan 05 20:41:29 crc kubenswrapper[4754]: I0105 20:41:29.225838 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 05 20:41:29 crc kubenswrapper[4754]: I0105 20:41:29.232135 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 05 20:41:29 crc kubenswrapper[4754]: I0105 20:41:29.253083 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5qmpw"] Jan 05 20:41:29 crc kubenswrapper[4754]: I0105 20:41:29.279671 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c70de0fb-71a9-4970-a485-a3a1f3c18868-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5qmpw\" (UID: \"c70de0fb-71a9-4970-a485-a3a1f3c18868\") " 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5qmpw" Jan 05 20:41:29 crc kubenswrapper[4754]: I0105 20:41:29.279842 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4rf9\" (UniqueName: \"kubernetes.io/projected/c70de0fb-71a9-4970-a485-a3a1f3c18868-kube-api-access-j4rf9\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5qmpw\" (UID: \"c70de0fb-71a9-4970-a485-a3a1f3c18868\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5qmpw" Jan 05 20:41:29 crc kubenswrapper[4754]: I0105 20:41:29.280030 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c70de0fb-71a9-4970-a485-a3a1f3c18868-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5qmpw\" (UID: \"c70de0fb-71a9-4970-a485-a3a1f3c18868\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5qmpw" Jan 05 20:41:29 crc kubenswrapper[4754]: I0105 20:41:29.382685 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c70de0fb-71a9-4970-a485-a3a1f3c18868-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5qmpw\" (UID: \"c70de0fb-71a9-4970-a485-a3a1f3c18868\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5qmpw" Jan 05 20:41:29 crc kubenswrapper[4754]: I0105 20:41:29.382826 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4rf9\" (UniqueName: \"kubernetes.io/projected/c70de0fb-71a9-4970-a485-a3a1f3c18868-kube-api-access-j4rf9\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5qmpw\" (UID: \"c70de0fb-71a9-4970-a485-a3a1f3c18868\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5qmpw" Jan 05 20:41:29 crc kubenswrapper[4754]: I0105 20:41:29.382955 4754 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c70de0fb-71a9-4970-a485-a3a1f3c18868-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5qmpw\" (UID: \"c70de0fb-71a9-4970-a485-a3a1f3c18868\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5qmpw" Jan 05 20:41:29 crc kubenswrapper[4754]: I0105 20:41:29.387741 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c70de0fb-71a9-4970-a485-a3a1f3c18868-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5qmpw\" (UID: \"c70de0fb-71a9-4970-a485-a3a1f3c18868\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5qmpw" Jan 05 20:41:29 crc kubenswrapper[4754]: I0105 20:41:29.390612 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c70de0fb-71a9-4970-a485-a3a1f3c18868-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5qmpw\" (UID: \"c70de0fb-71a9-4970-a485-a3a1f3c18868\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5qmpw" Jan 05 20:41:29 crc kubenswrapper[4754]: I0105 20:41:29.401814 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4rf9\" (UniqueName: \"kubernetes.io/projected/c70de0fb-71a9-4970-a485-a3a1f3c18868-kube-api-access-j4rf9\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5qmpw\" (UID: \"c70de0fb-71a9-4970-a485-a3a1f3c18868\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5qmpw" Jan 05 20:41:29 crc kubenswrapper[4754]: I0105 20:41:29.551332 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5qmpw" Jan 05 20:41:30 crc kubenswrapper[4754]: I0105 20:41:30.162799 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5qmpw"] Jan 05 20:41:31 crc kubenswrapper[4754]: I0105 20:41:31.121167 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5qmpw" event={"ID":"c70de0fb-71a9-4970-a485-a3a1f3c18868","Type":"ContainerStarted","Data":"0a32573e535b4bf10f31a5afca182f06751415eb46c247e8899fbadd96a72c3d"} Jan 05 20:41:31 crc kubenswrapper[4754]: I0105 20:41:31.121646 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5qmpw" event={"ID":"c70de0fb-71a9-4970-a485-a3a1f3c18868","Type":"ContainerStarted","Data":"b3f70bf061e45302564b003b3857824d80ac6becdd24f24d8b9737d14cbd00ca"} Jan 05 20:41:31 crc kubenswrapper[4754]: I0105 20:41:31.150000 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5qmpw" podStartSLOduration=1.664774449 podStartE2EDuration="2.1499803s" podCreationTimestamp="2026-01-05 20:41:29 +0000 UTC" firstStartedPulling="2026-01-05 20:41:30.159232437 +0000 UTC m=+2176.868416311" lastFinishedPulling="2026-01-05 20:41:30.644438288 +0000 UTC m=+2177.353622162" observedRunningTime="2026-01-05 20:41:31.145531874 +0000 UTC m=+2177.854715758" watchObservedRunningTime="2026-01-05 20:41:31.1499803 +0000 UTC m=+2177.859164184" Jan 05 20:41:37 crc kubenswrapper[4754]: I0105 20:41:37.196050 4754 generic.go:334] "Generic (PLEG): container finished" podID="c70de0fb-71a9-4970-a485-a3a1f3c18868" containerID="0a32573e535b4bf10f31a5afca182f06751415eb46c247e8899fbadd96a72c3d" exitCode=0 Jan 05 20:41:37 crc kubenswrapper[4754]: I0105 20:41:37.196206 4754 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5qmpw" event={"ID":"c70de0fb-71a9-4970-a485-a3a1f3c18868","Type":"ContainerDied","Data":"0a32573e535b4bf10f31a5afca182f06751415eb46c247e8899fbadd96a72c3d"} Jan 05 20:41:38 crc kubenswrapper[4754]: I0105 20:41:38.833952 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5qmpw" Jan 05 20:41:38 crc kubenswrapper[4754]: I0105 20:41:38.965046 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c70de0fb-71a9-4970-a485-a3a1f3c18868-ssh-key\") pod \"c70de0fb-71a9-4970-a485-a3a1f3c18868\" (UID: \"c70de0fb-71a9-4970-a485-a3a1f3c18868\") " Jan 05 20:41:38 crc kubenswrapper[4754]: I0105 20:41:38.965380 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4rf9\" (UniqueName: \"kubernetes.io/projected/c70de0fb-71a9-4970-a485-a3a1f3c18868-kube-api-access-j4rf9\") pod \"c70de0fb-71a9-4970-a485-a3a1f3c18868\" (UID: \"c70de0fb-71a9-4970-a485-a3a1f3c18868\") " Jan 05 20:41:38 crc kubenswrapper[4754]: I0105 20:41:38.965426 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c70de0fb-71a9-4970-a485-a3a1f3c18868-inventory\") pod \"c70de0fb-71a9-4970-a485-a3a1f3c18868\" (UID: \"c70de0fb-71a9-4970-a485-a3a1f3c18868\") " Jan 05 20:41:38 crc kubenswrapper[4754]: I0105 20:41:38.970602 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c70de0fb-71a9-4970-a485-a3a1f3c18868-kube-api-access-j4rf9" (OuterVolumeSpecName: "kube-api-access-j4rf9") pod "c70de0fb-71a9-4970-a485-a3a1f3c18868" (UID: "c70de0fb-71a9-4970-a485-a3a1f3c18868"). InnerVolumeSpecName "kube-api-access-j4rf9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:41:38 crc kubenswrapper[4754]: I0105 20:41:38.997947 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c70de0fb-71a9-4970-a485-a3a1f3c18868-inventory" (OuterVolumeSpecName: "inventory") pod "c70de0fb-71a9-4970-a485-a3a1f3c18868" (UID: "c70de0fb-71a9-4970-a485-a3a1f3c18868"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:41:39 crc kubenswrapper[4754]: I0105 20:41:39.007158 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c70de0fb-71a9-4970-a485-a3a1f3c18868-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c70de0fb-71a9-4970-a485-a3a1f3c18868" (UID: "c70de0fb-71a9-4970-a485-a3a1f3c18868"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:41:39 crc kubenswrapper[4754]: I0105 20:41:39.068340 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4rf9\" (UniqueName: \"kubernetes.io/projected/c70de0fb-71a9-4970-a485-a3a1f3c18868-kube-api-access-j4rf9\") on node \"crc\" DevicePath \"\"" Jan 05 20:41:39 crc kubenswrapper[4754]: I0105 20:41:39.068378 4754 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c70de0fb-71a9-4970-a485-a3a1f3c18868-inventory\") on node \"crc\" DevicePath \"\"" Jan 05 20:41:39 crc kubenswrapper[4754]: I0105 20:41:39.068391 4754 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c70de0fb-71a9-4970-a485-a3a1f3c18868-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 05 20:41:39 crc kubenswrapper[4754]: I0105 20:41:39.234454 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5qmpw" 
event={"ID":"c70de0fb-71a9-4970-a485-a3a1f3c18868","Type":"ContainerDied","Data":"b3f70bf061e45302564b003b3857824d80ac6becdd24f24d8b9737d14cbd00ca"} Jan 05 20:41:39 crc kubenswrapper[4754]: I0105 20:41:39.234742 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3f70bf061e45302564b003b3857824d80ac6becdd24f24d8b9737d14cbd00ca" Jan 05 20:41:39 crc kubenswrapper[4754]: I0105 20:41:39.234552 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5qmpw" Jan 05 20:41:39 crc kubenswrapper[4754]: I0105 20:41:39.318309 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-zhgpk"] Jan 05 20:41:39 crc kubenswrapper[4754]: E0105 20:41:39.318952 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c70de0fb-71a9-4970-a485-a3a1f3c18868" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 05 20:41:39 crc kubenswrapper[4754]: I0105 20:41:39.318969 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="c70de0fb-71a9-4970-a485-a3a1f3c18868" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 05 20:41:39 crc kubenswrapper[4754]: I0105 20:41:39.319197 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="c70de0fb-71a9-4970-a485-a3a1f3c18868" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 05 20:41:39 crc kubenswrapper[4754]: I0105 20:41:39.320613 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zhgpk" Jan 05 20:41:39 crc kubenswrapper[4754]: I0105 20:41:39.323240 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 05 20:41:39 crc kubenswrapper[4754]: I0105 20:41:39.323413 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 05 20:41:39 crc kubenswrapper[4754]: I0105 20:41:39.323631 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 05 20:41:39 crc kubenswrapper[4754]: I0105 20:41:39.324792 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xdt6h" Jan 05 20:41:39 crc kubenswrapper[4754]: I0105 20:41:39.330642 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-zhgpk"] Jan 05 20:41:39 crc kubenswrapper[4754]: I0105 20:41:39.484043 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkgtl\" (UniqueName: \"kubernetes.io/projected/97b90149-7943-405c-ae5c-039a890b61a7-kube-api-access-fkgtl\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zhgpk\" (UID: \"97b90149-7943-405c-ae5c-039a890b61a7\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zhgpk" Jan 05 20:41:39 crc kubenswrapper[4754]: I0105 20:41:39.484130 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97b90149-7943-405c-ae5c-039a890b61a7-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zhgpk\" (UID: \"97b90149-7943-405c-ae5c-039a890b61a7\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zhgpk" Jan 05 20:41:39 crc kubenswrapper[4754]: I0105 20:41:39.484186 4754 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/97b90149-7943-405c-ae5c-039a890b61a7-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zhgpk\" (UID: \"97b90149-7943-405c-ae5c-039a890b61a7\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zhgpk" Jan 05 20:41:39 crc kubenswrapper[4754]: I0105 20:41:39.586281 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkgtl\" (UniqueName: \"kubernetes.io/projected/97b90149-7943-405c-ae5c-039a890b61a7-kube-api-access-fkgtl\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zhgpk\" (UID: \"97b90149-7943-405c-ae5c-039a890b61a7\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zhgpk" Jan 05 20:41:39 crc kubenswrapper[4754]: I0105 20:41:39.586403 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97b90149-7943-405c-ae5c-039a890b61a7-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zhgpk\" (UID: \"97b90149-7943-405c-ae5c-039a890b61a7\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zhgpk" Jan 05 20:41:39 crc kubenswrapper[4754]: I0105 20:41:39.586468 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/97b90149-7943-405c-ae5c-039a890b61a7-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zhgpk\" (UID: \"97b90149-7943-405c-ae5c-039a890b61a7\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zhgpk" Jan 05 20:41:39 crc kubenswrapper[4754]: I0105 20:41:39.592053 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/97b90149-7943-405c-ae5c-039a890b61a7-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zhgpk\" (UID: 
\"97b90149-7943-405c-ae5c-039a890b61a7\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zhgpk" Jan 05 20:41:39 crc kubenswrapper[4754]: I0105 20:41:39.612766 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97b90149-7943-405c-ae5c-039a890b61a7-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zhgpk\" (UID: \"97b90149-7943-405c-ae5c-039a890b61a7\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zhgpk" Jan 05 20:41:39 crc kubenswrapper[4754]: I0105 20:41:39.638117 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkgtl\" (UniqueName: \"kubernetes.io/projected/97b90149-7943-405c-ae5c-039a890b61a7-kube-api-access-fkgtl\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zhgpk\" (UID: \"97b90149-7943-405c-ae5c-039a890b61a7\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zhgpk" Jan 05 20:41:39 crc kubenswrapper[4754]: I0105 20:41:39.671882 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zhgpk" Jan 05 20:41:40 crc kubenswrapper[4754]: I0105 20:41:40.377553 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-zhgpk"] Jan 05 20:41:41 crc kubenswrapper[4754]: I0105 20:41:41.271716 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zhgpk" event={"ID":"97b90149-7943-405c-ae5c-039a890b61a7","Type":"ContainerStarted","Data":"bc5d94d78474bc0aeff445ea1ebd4ac904c9c44d09c905bca35c9baa20411c3c"} Jan 05 20:41:42 crc kubenswrapper[4754]: I0105 20:41:42.285500 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zhgpk" event={"ID":"97b90149-7943-405c-ae5c-039a890b61a7","Type":"ContainerStarted","Data":"d281fca186b58e4b09c3772422780e7ec07569216e34546224f578eed4b6e001"} Jan 05 20:41:42 crc kubenswrapper[4754]: I0105 20:41:42.314538 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zhgpk" podStartSLOduration=2.65425387 podStartE2EDuration="3.314516521s" podCreationTimestamp="2026-01-05 20:41:39 +0000 UTC" firstStartedPulling="2026-01-05 20:41:40.381285418 +0000 UTC m=+2187.090469292" lastFinishedPulling="2026-01-05 20:41:41.041548029 +0000 UTC m=+2187.750731943" observedRunningTime="2026-01-05 20:41:42.307069516 +0000 UTC m=+2189.016253400" watchObservedRunningTime="2026-01-05 20:41:42.314516521 +0000 UTC m=+2189.023700395" Jan 05 20:41:45 crc kubenswrapper[4754]: I0105 20:41:45.054659 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-jkkk2"] Jan 05 20:41:45 crc kubenswrapper[4754]: I0105 20:41:45.073009 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-jkkk2"] Jan 05 20:41:45 crc kubenswrapper[4754]: I0105 
20:41:45.602281 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="486e225f-fd48-4c4d-a277-62ae1886a9f5" path="/var/lib/kubelet/pods/486e225f-fd48-4c4d-a277-62ae1886a9f5/volumes" Jan 05 20:41:48 crc kubenswrapper[4754]: I0105 20:41:48.110044 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 20:41:48 crc kubenswrapper[4754]: I0105 20:41:48.110784 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 20:42:09 crc kubenswrapper[4754]: I0105 20:42:09.355061 4754 scope.go:117] "RemoveContainer" containerID="e821d89ef64b2a974f742d28d654296a0cf584d3e321b721ea1f772a216b8d41" Jan 05 20:42:18 crc kubenswrapper[4754]: I0105 20:42:18.109020 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 20:42:18 crc kubenswrapper[4754]: I0105 20:42:18.110586 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 20:42:18 crc kubenswrapper[4754]: I0105 20:42:18.110665 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" Jan 05 20:42:18 crc kubenswrapper[4754]: I0105 20:42:18.111771 4754 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bf00dfd553c16afd4905db93af460a6a83ec719b48885233b6e6607dece47e05"} pod="openshift-machine-config-operator/machine-config-daemon-pkzls" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 05 20:42:18 crc kubenswrapper[4754]: I0105 20:42:18.111854 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" containerID="cri-o://bf00dfd553c16afd4905db93af460a6a83ec719b48885233b6e6607dece47e05" gracePeriod=600 Jan 05 20:42:18 crc kubenswrapper[4754]: E0105 20:42:18.234410 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:42:18 crc kubenswrapper[4754]: I0105 20:42:18.737408 4754 generic.go:334] "Generic (PLEG): container finished" podID="1ce145f2-f010-4086-963c-23e68ff9e280" containerID="bf00dfd553c16afd4905db93af460a6a83ec719b48885233b6e6607dece47e05" exitCode=0 Jan 05 20:42:18 crc kubenswrapper[4754]: I0105 20:42:18.737971 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" event={"ID":"1ce145f2-f010-4086-963c-23e68ff9e280","Type":"ContainerDied","Data":"bf00dfd553c16afd4905db93af460a6a83ec719b48885233b6e6607dece47e05"} 
Jan 05 20:42:18 crc kubenswrapper[4754]: I0105 20:42:18.738033 4754 scope.go:117] "RemoveContainer" containerID="e40575342b315983c6e6787d98aec3d2cd7b920f88c62fc6f121b384bc5e6f28" Jan 05 20:42:18 crc kubenswrapper[4754]: I0105 20:42:18.738928 4754 scope.go:117] "RemoveContainer" containerID="bf00dfd553c16afd4905db93af460a6a83ec719b48885233b6e6607dece47e05" Jan 05 20:42:18 crc kubenswrapper[4754]: E0105 20:42:18.742836 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:42:27 crc kubenswrapper[4754]: I0105 20:42:27.840251 4754 generic.go:334] "Generic (PLEG): container finished" podID="97b90149-7943-405c-ae5c-039a890b61a7" containerID="d281fca186b58e4b09c3772422780e7ec07569216e34546224f578eed4b6e001" exitCode=0 Jan 05 20:42:27 crc kubenswrapper[4754]: I0105 20:42:27.840380 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zhgpk" event={"ID":"97b90149-7943-405c-ae5c-039a890b61a7","Type":"ContainerDied","Data":"d281fca186b58e4b09c3772422780e7ec07569216e34546224f578eed4b6e001"} Jan 05 20:42:29 crc kubenswrapper[4754]: I0105 20:42:29.371630 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zhgpk" Jan 05 20:42:29 crc kubenswrapper[4754]: I0105 20:42:29.474271 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/97b90149-7943-405c-ae5c-039a890b61a7-ssh-key\") pod \"97b90149-7943-405c-ae5c-039a890b61a7\" (UID: \"97b90149-7943-405c-ae5c-039a890b61a7\") " Jan 05 20:42:29 crc kubenswrapper[4754]: I0105 20:42:29.474605 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97b90149-7943-405c-ae5c-039a890b61a7-inventory\") pod \"97b90149-7943-405c-ae5c-039a890b61a7\" (UID: \"97b90149-7943-405c-ae5c-039a890b61a7\") " Jan 05 20:42:29 crc kubenswrapper[4754]: I0105 20:42:29.474703 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkgtl\" (UniqueName: \"kubernetes.io/projected/97b90149-7943-405c-ae5c-039a890b61a7-kube-api-access-fkgtl\") pod \"97b90149-7943-405c-ae5c-039a890b61a7\" (UID: \"97b90149-7943-405c-ae5c-039a890b61a7\") " Jan 05 20:42:29 crc kubenswrapper[4754]: I0105 20:42:29.481014 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97b90149-7943-405c-ae5c-039a890b61a7-kube-api-access-fkgtl" (OuterVolumeSpecName: "kube-api-access-fkgtl") pod "97b90149-7943-405c-ae5c-039a890b61a7" (UID: "97b90149-7943-405c-ae5c-039a890b61a7"). InnerVolumeSpecName "kube-api-access-fkgtl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:42:29 crc kubenswrapper[4754]: I0105 20:42:29.506158 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97b90149-7943-405c-ae5c-039a890b61a7-inventory" (OuterVolumeSpecName: "inventory") pod "97b90149-7943-405c-ae5c-039a890b61a7" (UID: "97b90149-7943-405c-ae5c-039a890b61a7"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:42:29 crc kubenswrapper[4754]: I0105 20:42:29.514781 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97b90149-7943-405c-ae5c-039a890b61a7-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "97b90149-7943-405c-ae5c-039a890b61a7" (UID: "97b90149-7943-405c-ae5c-039a890b61a7"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:42:29 crc kubenswrapper[4754]: I0105 20:42:29.577950 4754 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/97b90149-7943-405c-ae5c-039a890b61a7-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 05 20:42:29 crc kubenswrapper[4754]: I0105 20:42:29.577987 4754 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97b90149-7943-405c-ae5c-039a890b61a7-inventory\") on node \"crc\" DevicePath \"\"" Jan 05 20:42:29 crc kubenswrapper[4754]: I0105 20:42:29.578004 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkgtl\" (UniqueName: \"kubernetes.io/projected/97b90149-7943-405c-ae5c-039a890b61a7-kube-api-access-fkgtl\") on node \"crc\" DevicePath \"\"" Jan 05 20:42:29 crc kubenswrapper[4754]: I0105 20:42:29.864653 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zhgpk" event={"ID":"97b90149-7943-405c-ae5c-039a890b61a7","Type":"ContainerDied","Data":"bc5d94d78474bc0aeff445ea1ebd4ac904c9c44d09c905bca35c9baa20411c3c"} Jan 05 20:42:29 crc kubenswrapper[4754]: I0105 20:42:29.864699 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc5d94d78474bc0aeff445ea1ebd4ac904c9c44d09c905bca35c9baa20411c3c" Jan 05 20:42:29 crc kubenswrapper[4754]: I0105 20:42:29.864742 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zhgpk" Jan 05 20:42:30 crc kubenswrapper[4754]: I0105 20:42:30.009821 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-927hg"] Jan 05 20:42:30 crc kubenswrapper[4754]: E0105 20:42:30.010580 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97b90149-7943-405c-ae5c-039a890b61a7" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 05 20:42:30 crc kubenswrapper[4754]: I0105 20:42:30.010614 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="97b90149-7943-405c-ae5c-039a890b61a7" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 05 20:42:30 crc kubenswrapper[4754]: I0105 20:42:30.011023 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="97b90149-7943-405c-ae5c-039a890b61a7" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 05 20:42:30 crc kubenswrapper[4754]: I0105 20:42:30.012218 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-927hg" Jan 05 20:42:30 crc kubenswrapper[4754]: I0105 20:42:30.013892 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 05 20:42:30 crc kubenswrapper[4754]: I0105 20:42:30.014625 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 05 20:42:30 crc kubenswrapper[4754]: I0105 20:42:30.014868 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xdt6h" Jan 05 20:42:30 crc kubenswrapper[4754]: I0105 20:42:30.016074 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 05 20:42:30 crc kubenswrapper[4754]: I0105 20:42:30.027922 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-927hg"] Jan 05 20:42:30 crc kubenswrapper[4754]: I0105 20:42:30.092587 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsmp6\" (UniqueName: \"kubernetes.io/projected/127f9233-b76e-47f9-bf65-be5a29ed4c79-kube-api-access-hsmp6\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-927hg\" (UID: \"127f9233-b76e-47f9-bf65-be5a29ed4c79\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-927hg" Jan 05 20:42:30 crc kubenswrapper[4754]: I0105 20:42:30.092765 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/127f9233-b76e-47f9-bf65-be5a29ed4c79-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-927hg\" (UID: \"127f9233-b76e-47f9-bf65-be5a29ed4c79\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-927hg" Jan 05 20:42:30 crc kubenswrapper[4754]: I0105 20:42:30.092900 4754 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/127f9233-b76e-47f9-bf65-be5a29ed4c79-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-927hg\" (UID: \"127f9233-b76e-47f9-bf65-be5a29ed4c79\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-927hg" Jan 05 20:42:30 crc kubenswrapper[4754]: I0105 20:42:30.196514 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsmp6\" (UniqueName: \"kubernetes.io/projected/127f9233-b76e-47f9-bf65-be5a29ed4c79-kube-api-access-hsmp6\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-927hg\" (UID: \"127f9233-b76e-47f9-bf65-be5a29ed4c79\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-927hg" Jan 05 20:42:30 crc kubenswrapper[4754]: I0105 20:42:30.196688 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/127f9233-b76e-47f9-bf65-be5a29ed4c79-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-927hg\" (UID: \"127f9233-b76e-47f9-bf65-be5a29ed4c79\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-927hg" Jan 05 20:42:30 crc kubenswrapper[4754]: I0105 20:42:30.196825 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/127f9233-b76e-47f9-bf65-be5a29ed4c79-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-927hg\" (UID: \"127f9233-b76e-47f9-bf65-be5a29ed4c79\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-927hg" Jan 05 20:42:30 crc kubenswrapper[4754]: I0105 20:42:30.205861 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/127f9233-b76e-47f9-bf65-be5a29ed4c79-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-927hg\" (UID: 
\"127f9233-b76e-47f9-bf65-be5a29ed4c79\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-927hg" Jan 05 20:42:30 crc kubenswrapper[4754]: I0105 20:42:30.207779 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/127f9233-b76e-47f9-bf65-be5a29ed4c79-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-927hg\" (UID: \"127f9233-b76e-47f9-bf65-be5a29ed4c79\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-927hg" Jan 05 20:42:30 crc kubenswrapper[4754]: I0105 20:42:30.221764 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsmp6\" (UniqueName: \"kubernetes.io/projected/127f9233-b76e-47f9-bf65-be5a29ed4c79-kube-api-access-hsmp6\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-927hg\" (UID: \"127f9233-b76e-47f9-bf65-be5a29ed4c79\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-927hg" Jan 05 20:42:30 crc kubenswrapper[4754]: I0105 20:42:30.345344 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-927hg" Jan 05 20:42:31 crc kubenswrapper[4754]: I0105 20:42:31.046091 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-927hg"] Jan 05 20:42:31 crc kubenswrapper[4754]: I0105 20:42:31.916506 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-927hg" event={"ID":"127f9233-b76e-47f9-bf65-be5a29ed4c79","Type":"ContainerStarted","Data":"98ca9bc2ec3e150c1d7b27e5a19c21aa597b7e6cd5ff757037c48b102aa57dad"} Jan 05 20:42:31 crc kubenswrapper[4754]: I0105 20:42:31.916814 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-927hg" event={"ID":"127f9233-b76e-47f9-bf65-be5a29ed4c79","Type":"ContainerStarted","Data":"c790ffb7627a3c8e9c5e3f82942742bbdf23c88c606109f729eaca6f8327a8f2"} Jan 05 20:42:31 crc kubenswrapper[4754]: I0105 20:42:31.935541 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-927hg" podStartSLOduration=2.422792822 podStartE2EDuration="2.935509494s" podCreationTimestamp="2026-01-05 20:42:29 +0000 UTC" firstStartedPulling="2026-01-05 20:42:31.08013708 +0000 UTC m=+2237.789320954" lastFinishedPulling="2026-01-05 20:42:31.592853722 +0000 UTC m=+2238.302037626" observedRunningTime="2026-01-05 20:42:31.934067957 +0000 UTC m=+2238.643251841" watchObservedRunningTime="2026-01-05 20:42:31.935509494 +0000 UTC m=+2238.644693408" Jan 05 20:42:33 crc kubenswrapper[4754]: I0105 20:42:33.604025 4754 scope.go:117] "RemoveContainer" containerID="bf00dfd553c16afd4905db93af460a6a83ec719b48885233b6e6607dece47e05" Jan 05 20:42:33 crc kubenswrapper[4754]: E0105 20:42:33.605114 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:42:45 crc kubenswrapper[4754]: I0105 20:42:45.224986 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mwd56"] Jan 05 20:42:45 crc kubenswrapper[4754]: I0105 20:42:45.229455 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mwd56" Jan 05 20:42:45 crc kubenswrapper[4754]: I0105 20:42:45.244402 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mwd56"] Jan 05 20:42:45 crc kubenswrapper[4754]: I0105 20:42:45.310750 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02249eac-c3b0-4c40-a537-832337f4cff6-utilities\") pod \"redhat-marketplace-mwd56\" (UID: \"02249eac-c3b0-4c40-a537-832337f4cff6\") " pod="openshift-marketplace/redhat-marketplace-mwd56" Jan 05 20:42:45 crc kubenswrapper[4754]: I0105 20:42:45.310792 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jct6\" (UniqueName: \"kubernetes.io/projected/02249eac-c3b0-4c40-a537-832337f4cff6-kube-api-access-8jct6\") pod \"redhat-marketplace-mwd56\" (UID: \"02249eac-c3b0-4c40-a537-832337f4cff6\") " pod="openshift-marketplace/redhat-marketplace-mwd56" Jan 05 20:42:45 crc kubenswrapper[4754]: I0105 20:42:45.310841 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02249eac-c3b0-4c40-a537-832337f4cff6-catalog-content\") pod \"redhat-marketplace-mwd56\" (UID: 
\"02249eac-c3b0-4c40-a537-832337f4cff6\") " pod="openshift-marketplace/redhat-marketplace-mwd56" Jan 05 20:42:45 crc kubenswrapper[4754]: I0105 20:42:45.413889 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02249eac-c3b0-4c40-a537-832337f4cff6-utilities\") pod \"redhat-marketplace-mwd56\" (UID: \"02249eac-c3b0-4c40-a537-832337f4cff6\") " pod="openshift-marketplace/redhat-marketplace-mwd56" Jan 05 20:42:45 crc kubenswrapper[4754]: I0105 20:42:45.414235 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jct6\" (UniqueName: \"kubernetes.io/projected/02249eac-c3b0-4c40-a537-832337f4cff6-kube-api-access-8jct6\") pod \"redhat-marketplace-mwd56\" (UID: \"02249eac-c3b0-4c40-a537-832337f4cff6\") " pod="openshift-marketplace/redhat-marketplace-mwd56" Jan 05 20:42:45 crc kubenswrapper[4754]: I0105 20:42:45.414428 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02249eac-c3b0-4c40-a537-832337f4cff6-catalog-content\") pod \"redhat-marketplace-mwd56\" (UID: \"02249eac-c3b0-4c40-a537-832337f4cff6\") " pod="openshift-marketplace/redhat-marketplace-mwd56" Jan 05 20:42:45 crc kubenswrapper[4754]: I0105 20:42:45.415358 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02249eac-c3b0-4c40-a537-832337f4cff6-catalog-content\") pod \"redhat-marketplace-mwd56\" (UID: \"02249eac-c3b0-4c40-a537-832337f4cff6\") " pod="openshift-marketplace/redhat-marketplace-mwd56" Jan 05 20:42:45 crc kubenswrapper[4754]: I0105 20:42:45.415720 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02249eac-c3b0-4c40-a537-832337f4cff6-utilities\") pod \"redhat-marketplace-mwd56\" (UID: \"02249eac-c3b0-4c40-a537-832337f4cff6\") " 
pod="openshift-marketplace/redhat-marketplace-mwd56" Jan 05 20:42:45 crc kubenswrapper[4754]: I0105 20:42:45.433483 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jct6\" (UniqueName: \"kubernetes.io/projected/02249eac-c3b0-4c40-a537-832337f4cff6-kube-api-access-8jct6\") pod \"redhat-marketplace-mwd56\" (UID: \"02249eac-c3b0-4c40-a537-832337f4cff6\") " pod="openshift-marketplace/redhat-marketplace-mwd56" Jan 05 20:42:45 crc kubenswrapper[4754]: I0105 20:42:45.572424 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mwd56" Jan 05 20:42:46 crc kubenswrapper[4754]: I0105 20:42:46.175815 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mwd56"] Jan 05 20:42:47 crc kubenswrapper[4754]: I0105 20:42:47.107238 4754 generic.go:334] "Generic (PLEG): container finished" podID="02249eac-c3b0-4c40-a537-832337f4cff6" containerID="2d0357cf1398f8c6141ecc64ae1d6084e8fdbfcd9b96c73f6024cc19ab187132" exitCode=0 Jan 05 20:42:47 crc kubenswrapper[4754]: I0105 20:42:47.107358 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mwd56" event={"ID":"02249eac-c3b0-4c40-a537-832337f4cff6","Type":"ContainerDied","Data":"2d0357cf1398f8c6141ecc64ae1d6084e8fdbfcd9b96c73f6024cc19ab187132"} Jan 05 20:42:47 crc kubenswrapper[4754]: I0105 20:42:47.107704 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mwd56" event={"ID":"02249eac-c3b0-4c40-a537-832337f4cff6","Type":"ContainerStarted","Data":"30bf1cab14ab6403e5af4320c169c69de72de48624de9dda61e909c5c08979de"} Jan 05 20:42:47 crc kubenswrapper[4754]: I0105 20:42:47.110435 4754 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 05 20:42:48 crc kubenswrapper[4754]: I0105 20:42:48.589097 4754 scope.go:117] "RemoveContainer" 
containerID="bf00dfd553c16afd4905db93af460a6a83ec719b48885233b6e6607dece47e05" Jan 05 20:42:48 crc kubenswrapper[4754]: E0105 20:42:48.589917 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:42:49 crc kubenswrapper[4754]: I0105 20:42:49.150184 4754 generic.go:334] "Generic (PLEG): container finished" podID="02249eac-c3b0-4c40-a537-832337f4cff6" containerID="1c0c2f3221dd43b45b047076b86f97800cb9f91caf99a3be1de138aaf668e493" exitCode=0 Jan 05 20:42:49 crc kubenswrapper[4754]: I0105 20:42:49.150465 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mwd56" event={"ID":"02249eac-c3b0-4c40-a537-832337f4cff6","Type":"ContainerDied","Data":"1c0c2f3221dd43b45b047076b86f97800cb9f91caf99a3be1de138aaf668e493"} Jan 05 20:42:50 crc kubenswrapper[4754]: I0105 20:42:50.164815 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mwd56" event={"ID":"02249eac-c3b0-4c40-a537-832337f4cff6","Type":"ContainerStarted","Data":"19ac6a3323964f86699356bc73f761fde2369ad145a3466c525beb97635b36f8"} Jan 05 20:42:50 crc kubenswrapper[4754]: I0105 20:42:50.203077 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mwd56" podStartSLOduration=2.624723058 podStartE2EDuration="5.203051214s" podCreationTimestamp="2026-01-05 20:42:45 +0000 UTC" firstStartedPulling="2026-01-05 20:42:47.109910711 +0000 UTC m=+2253.819094635" lastFinishedPulling="2026-01-05 20:42:49.688238877 +0000 UTC m=+2256.397422791" observedRunningTime="2026-01-05 20:42:50.185490144 +0000 
UTC m=+2256.894674068" watchObservedRunningTime="2026-01-05 20:42:50.203051214 +0000 UTC m=+2256.912235098" Jan 05 20:42:55 crc kubenswrapper[4754]: I0105 20:42:55.573073 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mwd56" Jan 05 20:42:55 crc kubenswrapper[4754]: I0105 20:42:55.575751 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mwd56" Jan 05 20:42:55 crc kubenswrapper[4754]: I0105 20:42:55.658250 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mwd56" Jan 05 20:42:56 crc kubenswrapper[4754]: I0105 20:42:56.498338 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mwd56" Jan 05 20:42:56 crc kubenswrapper[4754]: I0105 20:42:56.547643 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mwd56"] Jan 05 20:42:58 crc kubenswrapper[4754]: I0105 20:42:58.465078 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mwd56" podUID="02249eac-c3b0-4c40-a537-832337f4cff6" containerName="registry-server" containerID="cri-o://19ac6a3323964f86699356bc73f761fde2369ad145a3466c525beb97635b36f8" gracePeriod=2 Jan 05 20:42:59 crc kubenswrapper[4754]: I0105 20:42:59.478325 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mwd56" event={"ID":"02249eac-c3b0-4c40-a537-832337f4cff6","Type":"ContainerDied","Data":"19ac6a3323964f86699356bc73f761fde2369ad145a3466c525beb97635b36f8"} Jan 05 20:42:59 crc kubenswrapper[4754]: I0105 20:42:59.478332 4754 generic.go:334] "Generic (PLEG): container finished" podID="02249eac-c3b0-4c40-a537-832337f4cff6" containerID="19ac6a3323964f86699356bc73f761fde2369ad145a3466c525beb97635b36f8" exitCode=0 Jan 05 20:42:59 
crc kubenswrapper[4754]: I0105 20:42:59.478657 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mwd56" event={"ID":"02249eac-c3b0-4c40-a537-832337f4cff6","Type":"ContainerDied","Data":"30bf1cab14ab6403e5af4320c169c69de72de48624de9dda61e909c5c08979de"} Jan 05 20:42:59 crc kubenswrapper[4754]: I0105 20:42:59.478670 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30bf1cab14ab6403e5af4320c169c69de72de48624de9dda61e909c5c08979de" Jan 05 20:42:59 crc kubenswrapper[4754]: I0105 20:42:59.520495 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mwd56" Jan 05 20:42:59 crc kubenswrapper[4754]: I0105 20:42:59.524955 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02249eac-c3b0-4c40-a537-832337f4cff6-catalog-content\") pod \"02249eac-c3b0-4c40-a537-832337f4cff6\" (UID: \"02249eac-c3b0-4c40-a537-832337f4cff6\") " Jan 05 20:42:59 crc kubenswrapper[4754]: I0105 20:42:59.525144 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02249eac-c3b0-4c40-a537-832337f4cff6-utilities\") pod \"02249eac-c3b0-4c40-a537-832337f4cff6\" (UID: \"02249eac-c3b0-4c40-a537-832337f4cff6\") " Jan 05 20:42:59 crc kubenswrapper[4754]: I0105 20:42:59.525437 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jct6\" (UniqueName: \"kubernetes.io/projected/02249eac-c3b0-4c40-a537-832337f4cff6-kube-api-access-8jct6\") pod \"02249eac-c3b0-4c40-a537-832337f4cff6\" (UID: \"02249eac-c3b0-4c40-a537-832337f4cff6\") " Jan 05 20:42:59 crc kubenswrapper[4754]: I0105 20:42:59.526818 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/02249eac-c3b0-4c40-a537-832337f4cff6-utilities" (OuterVolumeSpecName: "utilities") pod "02249eac-c3b0-4c40-a537-832337f4cff6" (UID: "02249eac-c3b0-4c40-a537-832337f4cff6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:42:59 crc kubenswrapper[4754]: I0105 20:42:59.532704 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02249eac-c3b0-4c40-a537-832337f4cff6-kube-api-access-8jct6" (OuterVolumeSpecName: "kube-api-access-8jct6") pod "02249eac-c3b0-4c40-a537-832337f4cff6" (UID: "02249eac-c3b0-4c40-a537-832337f4cff6"). InnerVolumeSpecName "kube-api-access-8jct6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:42:59 crc kubenswrapper[4754]: I0105 20:42:59.570955 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02249eac-c3b0-4c40-a537-832337f4cff6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "02249eac-c3b0-4c40-a537-832337f4cff6" (UID: "02249eac-c3b0-4c40-a537-832337f4cff6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:42:59 crc kubenswrapper[4754]: I0105 20:42:59.634015 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jct6\" (UniqueName: \"kubernetes.io/projected/02249eac-c3b0-4c40-a537-832337f4cff6-kube-api-access-8jct6\") on node \"crc\" DevicePath \"\"" Jan 05 20:42:59 crc kubenswrapper[4754]: I0105 20:42:59.634066 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02249eac-c3b0-4c40-a537-832337f4cff6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 20:42:59 crc kubenswrapper[4754]: I0105 20:42:59.634081 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02249eac-c3b0-4c40-a537-832337f4cff6-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 20:43:00 crc kubenswrapper[4754]: I0105 20:43:00.488814 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mwd56" Jan 05 20:43:00 crc kubenswrapper[4754]: I0105 20:43:00.518186 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mwd56"] Jan 05 20:43:00 crc kubenswrapper[4754]: I0105 20:43:00.534338 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mwd56"] Jan 05 20:43:01 crc kubenswrapper[4754]: I0105 20:43:01.589325 4754 scope.go:117] "RemoveContainer" containerID="bf00dfd553c16afd4905db93af460a6a83ec719b48885233b6e6607dece47e05" Jan 05 20:43:01 crc kubenswrapper[4754]: E0105 20:43:01.589746 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:43:01 crc kubenswrapper[4754]: I0105 20:43:01.610064 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02249eac-c3b0-4c40-a537-832337f4cff6" path="/var/lib/kubelet/pods/02249eac-c3b0-4c40-a537-832337f4cff6/volumes" Jan 05 20:43:07 crc kubenswrapper[4754]: I0105 20:43:07.051272 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-w78td"] Jan 05 20:43:07 crc kubenswrapper[4754]: I0105 20:43:07.065579 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-w78td"] Jan 05 20:43:07 crc kubenswrapper[4754]: I0105 20:43:07.634055 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bc297f2-024c-45f7-97f5-7c360061a2c2" path="/var/lib/kubelet/pods/4bc297f2-024c-45f7-97f5-7c360061a2c2/volumes" Jan 05 20:43:09 crc kubenswrapper[4754]: I0105 20:43:09.478891 4754 scope.go:117] "RemoveContainer" containerID="856a77bc69e2a0060d199de59976b99551fdf13ac1799133226c91ff4851cd00" Jan 05 20:43:16 crc kubenswrapper[4754]: I0105 20:43:16.588892 4754 scope.go:117] "RemoveContainer" containerID="bf00dfd553c16afd4905db93af460a6a83ec719b48885233b6e6607dece47e05" Jan 05 20:43:16 crc kubenswrapper[4754]: E0105 20:43:16.589819 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:43:30 crc kubenswrapper[4754]: I0105 20:43:30.589314 4754 scope.go:117] "RemoveContainer" containerID="bf00dfd553c16afd4905db93af460a6a83ec719b48885233b6e6607dece47e05" Jan 05 20:43:30 crc kubenswrapper[4754]: E0105 
20:43:30.590141 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:43:35 crc kubenswrapper[4754]: I0105 20:43:35.917809 4754 generic.go:334] "Generic (PLEG): container finished" podID="127f9233-b76e-47f9-bf65-be5a29ed4c79" containerID="98ca9bc2ec3e150c1d7b27e5a19c21aa597b7e6cd5ff757037c48b102aa57dad" exitCode=0 Jan 05 20:43:35 crc kubenswrapper[4754]: I0105 20:43:35.917925 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-927hg" event={"ID":"127f9233-b76e-47f9-bf65-be5a29ed4c79","Type":"ContainerDied","Data":"98ca9bc2ec3e150c1d7b27e5a19c21aa597b7e6cd5ff757037c48b102aa57dad"} Jan 05 20:43:37 crc kubenswrapper[4754]: I0105 20:43:37.473721 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-927hg" Jan 05 20:43:37 crc kubenswrapper[4754]: I0105 20:43:37.581481 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/127f9233-b76e-47f9-bf65-be5a29ed4c79-ssh-key\") pod \"127f9233-b76e-47f9-bf65-be5a29ed4c79\" (UID: \"127f9233-b76e-47f9-bf65-be5a29ed4c79\") " Jan 05 20:43:37 crc kubenswrapper[4754]: I0105 20:43:37.581591 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsmp6\" (UniqueName: \"kubernetes.io/projected/127f9233-b76e-47f9-bf65-be5a29ed4c79-kube-api-access-hsmp6\") pod \"127f9233-b76e-47f9-bf65-be5a29ed4c79\" (UID: \"127f9233-b76e-47f9-bf65-be5a29ed4c79\") " Jan 05 20:43:37 crc kubenswrapper[4754]: I0105 20:43:37.581739 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/127f9233-b76e-47f9-bf65-be5a29ed4c79-inventory\") pod \"127f9233-b76e-47f9-bf65-be5a29ed4c79\" (UID: \"127f9233-b76e-47f9-bf65-be5a29ed4c79\") " Jan 05 20:43:37 crc kubenswrapper[4754]: I0105 20:43:37.595305 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/127f9233-b76e-47f9-bf65-be5a29ed4c79-kube-api-access-hsmp6" (OuterVolumeSpecName: "kube-api-access-hsmp6") pod "127f9233-b76e-47f9-bf65-be5a29ed4c79" (UID: "127f9233-b76e-47f9-bf65-be5a29ed4c79"). InnerVolumeSpecName "kube-api-access-hsmp6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:43:37 crc kubenswrapper[4754]: I0105 20:43:37.639968 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/127f9233-b76e-47f9-bf65-be5a29ed4c79-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "127f9233-b76e-47f9-bf65-be5a29ed4c79" (UID: "127f9233-b76e-47f9-bf65-be5a29ed4c79"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:43:37 crc kubenswrapper[4754]: I0105 20:43:37.640578 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/127f9233-b76e-47f9-bf65-be5a29ed4c79-inventory" (OuterVolumeSpecName: "inventory") pod "127f9233-b76e-47f9-bf65-be5a29ed4c79" (UID: "127f9233-b76e-47f9-bf65-be5a29ed4c79"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:43:37 crc kubenswrapper[4754]: I0105 20:43:37.685028 4754 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/127f9233-b76e-47f9-bf65-be5a29ed4c79-inventory\") on node \"crc\" DevicePath \"\"" Jan 05 20:43:37 crc kubenswrapper[4754]: I0105 20:43:37.685084 4754 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/127f9233-b76e-47f9-bf65-be5a29ed4c79-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 05 20:43:37 crc kubenswrapper[4754]: I0105 20:43:37.685098 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsmp6\" (UniqueName: \"kubernetes.io/projected/127f9233-b76e-47f9-bf65-be5a29ed4c79-kube-api-access-hsmp6\") on node \"crc\" DevicePath \"\"" Jan 05 20:43:37 crc kubenswrapper[4754]: I0105 20:43:37.953560 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-927hg" event={"ID":"127f9233-b76e-47f9-bf65-be5a29ed4c79","Type":"ContainerDied","Data":"c790ffb7627a3c8e9c5e3f82942742bbdf23c88c606109f729eaca6f8327a8f2"} Jan 05 20:43:37 crc kubenswrapper[4754]: I0105 20:43:37.953602 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c790ffb7627a3c8e9c5e3f82942742bbdf23c88c606109f729eaca6f8327a8f2" Jan 05 20:43:37 crc kubenswrapper[4754]: I0105 20:43:37.953662 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-927hg" Jan 05 20:43:38 crc kubenswrapper[4754]: I0105 20:43:38.055509 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-vpqzr"] Jan 05 20:43:38 crc kubenswrapper[4754]: E0105 20:43:38.056109 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="127f9233-b76e-47f9-bf65-be5a29ed4c79" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 05 20:43:38 crc kubenswrapper[4754]: I0105 20:43:38.056145 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="127f9233-b76e-47f9-bf65-be5a29ed4c79" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 05 20:43:38 crc kubenswrapper[4754]: E0105 20:43:38.056194 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02249eac-c3b0-4c40-a537-832337f4cff6" containerName="extract-content" Jan 05 20:43:38 crc kubenswrapper[4754]: I0105 20:43:38.056235 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="02249eac-c3b0-4c40-a537-832337f4cff6" containerName="extract-content" Jan 05 20:43:38 crc kubenswrapper[4754]: E0105 20:43:38.056389 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02249eac-c3b0-4c40-a537-832337f4cff6" containerName="registry-server" Jan 05 20:43:38 crc kubenswrapper[4754]: I0105 20:43:38.056403 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="02249eac-c3b0-4c40-a537-832337f4cff6" containerName="registry-server" Jan 05 20:43:38 crc kubenswrapper[4754]: E0105 20:43:38.056435 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02249eac-c3b0-4c40-a537-832337f4cff6" containerName="extract-utilities" Jan 05 20:43:38 crc kubenswrapper[4754]: I0105 20:43:38.056446 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="02249eac-c3b0-4c40-a537-832337f4cff6" containerName="extract-utilities" Jan 05 20:43:38 crc kubenswrapper[4754]: I0105 20:43:38.056824 4754 
memory_manager.go:354] "RemoveStaleState removing state" podUID="127f9233-b76e-47f9-bf65-be5a29ed4c79" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 05 20:43:38 crc kubenswrapper[4754]: I0105 20:43:38.056858 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="02249eac-c3b0-4c40-a537-832337f4cff6" containerName="registry-server" Jan 05 20:43:38 crc kubenswrapper[4754]: I0105 20:43:38.060800 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-vpqzr" Jan 05 20:43:38 crc kubenswrapper[4754]: I0105 20:43:38.063123 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xdt6h" Jan 05 20:43:38 crc kubenswrapper[4754]: I0105 20:43:38.063978 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 05 20:43:38 crc kubenswrapper[4754]: I0105 20:43:38.064027 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 05 20:43:38 crc kubenswrapper[4754]: I0105 20:43:38.067655 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-vpqzr"] Jan 05 20:43:38 crc kubenswrapper[4754]: I0105 20:43:38.071687 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 05 20:43:38 crc kubenswrapper[4754]: I0105 20:43:38.198084 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pd65\" (UniqueName: \"kubernetes.io/projected/1b846782-fabe-45fc-94d9-f2d0f5a2c008-kube-api-access-2pd65\") pod \"ssh-known-hosts-edpm-deployment-vpqzr\" (UID: \"1b846782-fabe-45fc-94d9-f2d0f5a2c008\") " pod="openstack/ssh-known-hosts-edpm-deployment-vpqzr" Jan 05 20:43:38 crc kubenswrapper[4754]: I0105 20:43:38.198491 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1b846782-fabe-45fc-94d9-f2d0f5a2c008-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-vpqzr\" (UID: \"1b846782-fabe-45fc-94d9-f2d0f5a2c008\") " pod="openstack/ssh-known-hosts-edpm-deployment-vpqzr" Jan 05 20:43:38 crc kubenswrapper[4754]: I0105 20:43:38.198825 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1b846782-fabe-45fc-94d9-f2d0f5a2c008-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-vpqzr\" (UID: \"1b846782-fabe-45fc-94d9-f2d0f5a2c008\") " pod="openstack/ssh-known-hosts-edpm-deployment-vpqzr" Jan 05 20:43:38 crc kubenswrapper[4754]: I0105 20:43:38.302007 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1b846782-fabe-45fc-94d9-f2d0f5a2c008-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-vpqzr\" (UID: \"1b846782-fabe-45fc-94d9-f2d0f5a2c008\") " pod="openstack/ssh-known-hosts-edpm-deployment-vpqzr" Jan 05 20:43:38 crc kubenswrapper[4754]: I0105 20:43:38.302132 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pd65\" (UniqueName: \"kubernetes.io/projected/1b846782-fabe-45fc-94d9-f2d0f5a2c008-kube-api-access-2pd65\") pod \"ssh-known-hosts-edpm-deployment-vpqzr\" (UID: \"1b846782-fabe-45fc-94d9-f2d0f5a2c008\") " pod="openstack/ssh-known-hosts-edpm-deployment-vpqzr" Jan 05 20:43:38 crc kubenswrapper[4754]: I0105 20:43:38.302214 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1b846782-fabe-45fc-94d9-f2d0f5a2c008-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-vpqzr\" (UID: \"1b846782-fabe-45fc-94d9-f2d0f5a2c008\") " 
pod="openstack/ssh-known-hosts-edpm-deployment-vpqzr" Jan 05 20:43:38 crc kubenswrapper[4754]: I0105 20:43:38.310686 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1b846782-fabe-45fc-94d9-f2d0f5a2c008-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-vpqzr\" (UID: \"1b846782-fabe-45fc-94d9-f2d0f5a2c008\") " pod="openstack/ssh-known-hosts-edpm-deployment-vpqzr" Jan 05 20:43:38 crc kubenswrapper[4754]: I0105 20:43:38.311205 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1b846782-fabe-45fc-94d9-f2d0f5a2c008-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-vpqzr\" (UID: \"1b846782-fabe-45fc-94d9-f2d0f5a2c008\") " pod="openstack/ssh-known-hosts-edpm-deployment-vpqzr" Jan 05 20:43:38 crc kubenswrapper[4754]: I0105 20:43:38.320765 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pd65\" (UniqueName: \"kubernetes.io/projected/1b846782-fabe-45fc-94d9-f2d0f5a2c008-kube-api-access-2pd65\") pod \"ssh-known-hosts-edpm-deployment-vpqzr\" (UID: \"1b846782-fabe-45fc-94d9-f2d0f5a2c008\") " pod="openstack/ssh-known-hosts-edpm-deployment-vpqzr" Jan 05 20:43:38 crc kubenswrapper[4754]: I0105 20:43:38.384980 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-vpqzr" Jan 05 20:43:38 crc kubenswrapper[4754]: I0105 20:43:38.947505 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-vpqzr"] Jan 05 20:43:38 crc kubenswrapper[4754]: I0105 20:43:38.966427 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-vpqzr" event={"ID":"1b846782-fabe-45fc-94d9-f2d0f5a2c008","Type":"ContainerStarted","Data":"88cf05018339008ae0603e0c54551a0a21a65fa838ec1015b5660e62788acb71"} Jan 05 20:43:40 crc kubenswrapper[4754]: I0105 20:43:40.995818 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-vpqzr" event={"ID":"1b846782-fabe-45fc-94d9-f2d0f5a2c008","Type":"ContainerStarted","Data":"86f54361a07bcea264af03df5287305515087da41ccd95b872b168f81f78948c"} Jan 05 20:43:41 crc kubenswrapper[4754]: I0105 20:43:41.025875 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-vpqzr" podStartSLOduration=2.174767809 podStartE2EDuration="3.025853341s" podCreationTimestamp="2026-01-05 20:43:38 +0000 UTC" firstStartedPulling="2026-01-05 20:43:38.950064889 +0000 UTC m=+2305.659248763" lastFinishedPulling="2026-01-05 20:43:39.801150381 +0000 UTC m=+2306.510334295" observedRunningTime="2026-01-05 20:43:41.013815835 +0000 UTC m=+2307.722999699" watchObservedRunningTime="2026-01-05 20:43:41.025853341 +0000 UTC m=+2307.735037235" Jan 05 20:43:42 crc kubenswrapper[4754]: I0105 20:43:42.588654 4754 scope.go:117] "RemoveContainer" containerID="bf00dfd553c16afd4905db93af460a6a83ec719b48885233b6e6607dece47e05" Jan 05 20:43:42 crc kubenswrapper[4754]: E0105 20:43:42.589364 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:43:46 crc kubenswrapper[4754]: I0105 20:43:46.064805 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-48pbv"] Jan 05 20:43:46 crc kubenswrapper[4754]: I0105 20:43:46.079052 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-48pbv"] Jan 05 20:43:47 crc kubenswrapper[4754]: I0105 20:43:47.607372 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b32de6d-d71d-4c68-9078-32e6fbdb6f37" path="/var/lib/kubelet/pods/2b32de6d-d71d-4c68-9078-32e6fbdb6f37/volumes" Jan 05 20:43:48 crc kubenswrapper[4754]: I0105 20:43:48.087135 4754 generic.go:334] "Generic (PLEG): container finished" podID="1b846782-fabe-45fc-94d9-f2d0f5a2c008" containerID="86f54361a07bcea264af03df5287305515087da41ccd95b872b168f81f78948c" exitCode=0 Jan 05 20:43:48 crc kubenswrapper[4754]: I0105 20:43:48.087225 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-vpqzr" event={"ID":"1b846782-fabe-45fc-94d9-f2d0f5a2c008","Type":"ContainerDied","Data":"86f54361a07bcea264af03df5287305515087da41ccd95b872b168f81f78948c"} Jan 05 20:43:49 crc kubenswrapper[4754]: I0105 20:43:49.780731 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-vpqzr" Jan 05 20:43:49 crc kubenswrapper[4754]: I0105 20:43:49.794346 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1b846782-fabe-45fc-94d9-f2d0f5a2c008-inventory-0\") pod \"1b846782-fabe-45fc-94d9-f2d0f5a2c008\" (UID: \"1b846782-fabe-45fc-94d9-f2d0f5a2c008\") " Jan 05 20:43:49 crc kubenswrapper[4754]: I0105 20:43:49.797692 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pd65\" (UniqueName: \"kubernetes.io/projected/1b846782-fabe-45fc-94d9-f2d0f5a2c008-kube-api-access-2pd65\") pod \"1b846782-fabe-45fc-94d9-f2d0f5a2c008\" (UID: \"1b846782-fabe-45fc-94d9-f2d0f5a2c008\") " Jan 05 20:43:49 crc kubenswrapper[4754]: I0105 20:43:49.797820 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1b846782-fabe-45fc-94d9-f2d0f5a2c008-ssh-key-openstack-edpm-ipam\") pod \"1b846782-fabe-45fc-94d9-f2d0f5a2c008\" (UID: \"1b846782-fabe-45fc-94d9-f2d0f5a2c008\") " Jan 05 20:43:49 crc kubenswrapper[4754]: I0105 20:43:49.803189 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b846782-fabe-45fc-94d9-f2d0f5a2c008-kube-api-access-2pd65" (OuterVolumeSpecName: "kube-api-access-2pd65") pod "1b846782-fabe-45fc-94d9-f2d0f5a2c008" (UID: "1b846782-fabe-45fc-94d9-f2d0f5a2c008"). InnerVolumeSpecName "kube-api-access-2pd65". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:43:49 crc kubenswrapper[4754]: I0105 20:43:49.835685 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b846782-fabe-45fc-94d9-f2d0f5a2c008-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1b846782-fabe-45fc-94d9-f2d0f5a2c008" (UID: "1b846782-fabe-45fc-94d9-f2d0f5a2c008"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:43:49 crc kubenswrapper[4754]: I0105 20:43:49.852774 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b846782-fabe-45fc-94d9-f2d0f5a2c008-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "1b846782-fabe-45fc-94d9-f2d0f5a2c008" (UID: "1b846782-fabe-45fc-94d9-f2d0f5a2c008"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:43:49 crc kubenswrapper[4754]: I0105 20:43:49.902179 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pd65\" (UniqueName: \"kubernetes.io/projected/1b846782-fabe-45fc-94d9-f2d0f5a2c008-kube-api-access-2pd65\") on node \"crc\" DevicePath \"\"" Jan 05 20:43:49 crc kubenswrapper[4754]: I0105 20:43:49.902214 4754 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1b846782-fabe-45fc-94d9-f2d0f5a2c008-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 05 20:43:49 crc kubenswrapper[4754]: I0105 20:43:49.902228 4754 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1b846782-fabe-45fc-94d9-f2d0f5a2c008-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 05 20:43:50 crc kubenswrapper[4754]: I0105 20:43:50.153668 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-vpqzr" 
event={"ID":"1b846782-fabe-45fc-94d9-f2d0f5a2c008","Type":"ContainerDied","Data":"88cf05018339008ae0603e0c54551a0a21a65fa838ec1015b5660e62788acb71"} Jan 05 20:43:50 crc kubenswrapper[4754]: I0105 20:43:50.153741 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88cf05018339008ae0603e0c54551a0a21a65fa838ec1015b5660e62788acb71" Jan 05 20:43:50 crc kubenswrapper[4754]: I0105 20:43:50.153858 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-vpqzr" Jan 05 20:43:50 crc kubenswrapper[4754]: I0105 20:43:50.274085 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-k6k5r"] Jan 05 20:43:50 crc kubenswrapper[4754]: E0105 20:43:50.275223 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b846782-fabe-45fc-94d9-f2d0f5a2c008" containerName="ssh-known-hosts-edpm-deployment" Jan 05 20:43:50 crc kubenswrapper[4754]: I0105 20:43:50.275406 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b846782-fabe-45fc-94d9-f2d0f5a2c008" containerName="ssh-known-hosts-edpm-deployment" Jan 05 20:43:50 crc kubenswrapper[4754]: I0105 20:43:50.276058 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b846782-fabe-45fc-94d9-f2d0f5a2c008" containerName="ssh-known-hosts-edpm-deployment" Jan 05 20:43:50 crc kubenswrapper[4754]: I0105 20:43:50.277767 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-k6k5r" Jan 05 20:43:50 crc kubenswrapper[4754]: I0105 20:43:50.281217 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 05 20:43:50 crc kubenswrapper[4754]: I0105 20:43:50.281498 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 05 20:43:50 crc kubenswrapper[4754]: I0105 20:43:50.283470 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 05 20:43:50 crc kubenswrapper[4754]: I0105 20:43:50.284691 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xdt6h" Jan 05 20:43:50 crc kubenswrapper[4754]: I0105 20:43:50.287546 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-k6k5r"] Jan 05 20:43:50 crc kubenswrapper[4754]: I0105 20:43:50.312570 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f16a69e1-0859-46f4-97f9-a825b629df09-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-k6k5r\" (UID: \"f16a69e1-0859-46f4-97f9-a825b629df09\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-k6k5r" Jan 05 20:43:50 crc kubenswrapper[4754]: I0105 20:43:50.312664 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd6nh\" (UniqueName: \"kubernetes.io/projected/f16a69e1-0859-46f4-97f9-a825b629df09-kube-api-access-jd6nh\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-k6k5r\" (UID: \"f16a69e1-0859-46f4-97f9-a825b629df09\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-k6k5r" Jan 05 20:43:50 crc kubenswrapper[4754]: I0105 20:43:50.312717 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f16a69e1-0859-46f4-97f9-a825b629df09-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-k6k5r\" (UID: \"f16a69e1-0859-46f4-97f9-a825b629df09\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-k6k5r" Jan 05 20:43:50 crc kubenswrapper[4754]: I0105 20:43:50.413546 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f16a69e1-0859-46f4-97f9-a825b629df09-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-k6k5r\" (UID: \"f16a69e1-0859-46f4-97f9-a825b629df09\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-k6k5r" Jan 05 20:43:50 crc kubenswrapper[4754]: I0105 20:43:50.413737 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f16a69e1-0859-46f4-97f9-a825b629df09-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-k6k5r\" (UID: \"f16a69e1-0859-46f4-97f9-a825b629df09\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-k6k5r" Jan 05 20:43:50 crc kubenswrapper[4754]: I0105 20:43:50.413793 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jd6nh\" (UniqueName: \"kubernetes.io/projected/f16a69e1-0859-46f4-97f9-a825b629df09-kube-api-access-jd6nh\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-k6k5r\" (UID: \"f16a69e1-0859-46f4-97f9-a825b629df09\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-k6k5r" Jan 05 20:43:50 crc kubenswrapper[4754]: I0105 20:43:50.430869 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f16a69e1-0859-46f4-97f9-a825b629df09-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-k6k5r\" (UID: \"f16a69e1-0859-46f4-97f9-a825b629df09\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-k6k5r" Jan 05 20:43:50 crc kubenswrapper[4754]: I0105 20:43:50.431824 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f16a69e1-0859-46f4-97f9-a825b629df09-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-k6k5r\" (UID: \"f16a69e1-0859-46f4-97f9-a825b629df09\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-k6k5r" Jan 05 20:43:50 crc kubenswrapper[4754]: I0105 20:43:50.433088 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jd6nh\" (UniqueName: \"kubernetes.io/projected/f16a69e1-0859-46f4-97f9-a825b629df09-kube-api-access-jd6nh\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-k6k5r\" (UID: \"f16a69e1-0859-46f4-97f9-a825b629df09\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-k6k5r" Jan 05 20:43:50 crc kubenswrapper[4754]: I0105 20:43:50.613077 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-k6k5r" Jan 05 20:43:51 crc kubenswrapper[4754]: I0105 20:43:51.357977 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-k6k5r"] Jan 05 20:43:52 crc kubenswrapper[4754]: I0105 20:43:52.174553 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-k6k5r" event={"ID":"f16a69e1-0859-46f4-97f9-a825b629df09","Type":"ContainerStarted","Data":"d9d152035fbacee2f24c803ca6bd64437cbd8cc602a61b23e4e20e075b98ecdd"} Jan 05 20:43:53 crc kubenswrapper[4754]: I0105 20:43:53.184280 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-k6k5r" event={"ID":"f16a69e1-0859-46f4-97f9-a825b629df09","Type":"ContainerStarted","Data":"22fe161e54a84f1a272f3f1436c8de1f7d62b2177f4b50c955ff2cd576e13b6d"} Jan 05 20:43:53 crc kubenswrapper[4754]: I0105 20:43:53.204145 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-k6k5r" podStartSLOduration=2.518092887 podStartE2EDuration="3.204125673s" podCreationTimestamp="2026-01-05 20:43:50 +0000 UTC" firstStartedPulling="2026-01-05 20:43:51.364945686 +0000 UTC m=+2318.074129580" lastFinishedPulling="2026-01-05 20:43:52.050978492 +0000 UTC m=+2318.760162366" observedRunningTime="2026-01-05 20:43:53.198179817 +0000 UTC m=+2319.907363691" watchObservedRunningTime="2026-01-05 20:43:53.204125673 +0000 UTC m=+2319.913309547" Jan 05 20:43:54 crc kubenswrapper[4754]: I0105 20:43:54.589038 4754 scope.go:117] "RemoveContainer" containerID="bf00dfd553c16afd4905db93af460a6a83ec719b48885233b6e6607dece47e05" Jan 05 20:43:54 crc kubenswrapper[4754]: E0105 20:43:54.589563 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:44:03 crc kubenswrapper[4754]: I0105 20:44:03.309368 4754 generic.go:334] "Generic (PLEG): container finished" podID="f16a69e1-0859-46f4-97f9-a825b629df09" containerID="22fe161e54a84f1a272f3f1436c8de1f7d62b2177f4b50c955ff2cd576e13b6d" exitCode=0 Jan 05 20:44:03 crc kubenswrapper[4754]: I0105 20:44:03.310114 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-k6k5r" event={"ID":"f16a69e1-0859-46f4-97f9-a825b629df09","Type":"ContainerDied","Data":"22fe161e54a84f1a272f3f1436c8de1f7d62b2177f4b50c955ff2cd576e13b6d"} Jan 05 20:44:04 crc kubenswrapper[4754]: I0105 20:44:04.884982 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-k6k5r" Jan 05 20:44:04 crc kubenswrapper[4754]: I0105 20:44:04.947770 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f16a69e1-0859-46f4-97f9-a825b629df09-ssh-key\") pod \"f16a69e1-0859-46f4-97f9-a825b629df09\" (UID: \"f16a69e1-0859-46f4-97f9-a825b629df09\") " Jan 05 20:44:04 crc kubenswrapper[4754]: I0105 20:44:04.947935 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jd6nh\" (UniqueName: \"kubernetes.io/projected/f16a69e1-0859-46f4-97f9-a825b629df09-kube-api-access-jd6nh\") pod \"f16a69e1-0859-46f4-97f9-a825b629df09\" (UID: \"f16a69e1-0859-46f4-97f9-a825b629df09\") " Jan 05 20:44:04 crc kubenswrapper[4754]: I0105 20:44:04.948212 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/f16a69e1-0859-46f4-97f9-a825b629df09-inventory\") pod \"f16a69e1-0859-46f4-97f9-a825b629df09\" (UID: \"f16a69e1-0859-46f4-97f9-a825b629df09\") " Jan 05 20:44:04 crc kubenswrapper[4754]: I0105 20:44:04.957688 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f16a69e1-0859-46f4-97f9-a825b629df09-kube-api-access-jd6nh" (OuterVolumeSpecName: "kube-api-access-jd6nh") pod "f16a69e1-0859-46f4-97f9-a825b629df09" (UID: "f16a69e1-0859-46f4-97f9-a825b629df09"). InnerVolumeSpecName "kube-api-access-jd6nh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:44:04 crc kubenswrapper[4754]: I0105 20:44:04.987552 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f16a69e1-0859-46f4-97f9-a825b629df09-inventory" (OuterVolumeSpecName: "inventory") pod "f16a69e1-0859-46f4-97f9-a825b629df09" (UID: "f16a69e1-0859-46f4-97f9-a825b629df09"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:44:05 crc kubenswrapper[4754]: I0105 20:44:05.007322 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f16a69e1-0859-46f4-97f9-a825b629df09-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f16a69e1-0859-46f4-97f9-a825b629df09" (UID: "f16a69e1-0859-46f4-97f9-a825b629df09"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:44:05 crc kubenswrapper[4754]: I0105 20:44:05.051502 4754 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f16a69e1-0859-46f4-97f9-a825b629df09-inventory\") on node \"crc\" DevicePath \"\"" Jan 05 20:44:05 crc kubenswrapper[4754]: I0105 20:44:05.051540 4754 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f16a69e1-0859-46f4-97f9-a825b629df09-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 05 20:44:05 crc kubenswrapper[4754]: I0105 20:44:05.051554 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jd6nh\" (UniqueName: \"kubernetes.io/projected/f16a69e1-0859-46f4-97f9-a825b629df09-kube-api-access-jd6nh\") on node \"crc\" DevicePath \"\"" Jan 05 20:44:05 crc kubenswrapper[4754]: I0105 20:44:05.332869 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-k6k5r" event={"ID":"f16a69e1-0859-46f4-97f9-a825b629df09","Type":"ContainerDied","Data":"d9d152035fbacee2f24c803ca6bd64437cbd8cc602a61b23e4e20e075b98ecdd"} Jan 05 20:44:05 crc kubenswrapper[4754]: I0105 20:44:05.332922 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9d152035fbacee2f24c803ca6bd64437cbd8cc602a61b23e4e20e075b98ecdd" Jan 05 20:44:05 crc kubenswrapper[4754]: I0105 20:44:05.333155 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-k6k5r" Jan 05 20:44:05 crc kubenswrapper[4754]: I0105 20:44:05.425733 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zwtrj"] Jan 05 20:44:05 crc kubenswrapper[4754]: E0105 20:44:05.426477 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f16a69e1-0859-46f4-97f9-a825b629df09" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 05 20:44:05 crc kubenswrapper[4754]: I0105 20:44:05.426496 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="f16a69e1-0859-46f4-97f9-a825b629df09" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 05 20:44:05 crc kubenswrapper[4754]: I0105 20:44:05.426732 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="f16a69e1-0859-46f4-97f9-a825b629df09" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 05 20:44:05 crc kubenswrapper[4754]: I0105 20:44:05.427539 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zwtrj" Jan 05 20:44:05 crc kubenswrapper[4754]: I0105 20:44:05.430144 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 05 20:44:05 crc kubenswrapper[4754]: I0105 20:44:05.430699 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 05 20:44:05 crc kubenswrapper[4754]: I0105 20:44:05.431788 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xdt6h" Jan 05 20:44:05 crc kubenswrapper[4754]: I0105 20:44:05.437479 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 05 20:44:05 crc kubenswrapper[4754]: I0105 20:44:05.469440 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zwtrj"] Jan 05 20:44:05 crc kubenswrapper[4754]: I0105 20:44:05.563465 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngfld\" (UniqueName: \"kubernetes.io/projected/fe969a28-4f43-4936-9859-b73513ec8a50-kube-api-access-ngfld\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zwtrj\" (UID: \"fe969a28-4f43-4936-9859-b73513ec8a50\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zwtrj" Jan 05 20:44:05 crc kubenswrapper[4754]: I0105 20:44:05.563511 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fe969a28-4f43-4936-9859-b73513ec8a50-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zwtrj\" (UID: \"fe969a28-4f43-4936-9859-b73513ec8a50\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zwtrj" Jan 05 20:44:05 crc kubenswrapper[4754]: I0105 20:44:05.563834 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe969a28-4f43-4936-9859-b73513ec8a50-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zwtrj\" (UID: \"fe969a28-4f43-4936-9859-b73513ec8a50\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zwtrj" Jan 05 20:44:05 crc kubenswrapper[4754]: I0105 20:44:05.665849 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe969a28-4f43-4936-9859-b73513ec8a50-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zwtrj\" (UID: \"fe969a28-4f43-4936-9859-b73513ec8a50\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zwtrj" Jan 05 20:44:05 crc kubenswrapper[4754]: I0105 20:44:05.666204 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngfld\" (UniqueName: \"kubernetes.io/projected/fe969a28-4f43-4936-9859-b73513ec8a50-kube-api-access-ngfld\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zwtrj\" (UID: \"fe969a28-4f43-4936-9859-b73513ec8a50\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zwtrj" Jan 05 20:44:05 crc kubenswrapper[4754]: I0105 20:44:05.666261 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fe969a28-4f43-4936-9859-b73513ec8a50-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zwtrj\" (UID: \"fe969a28-4f43-4936-9859-b73513ec8a50\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zwtrj" Jan 05 20:44:05 crc kubenswrapper[4754]: I0105 20:44:05.672459 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe969a28-4f43-4936-9859-b73513ec8a50-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zwtrj\" (UID: \"fe969a28-4f43-4936-9859-b73513ec8a50\") " 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zwtrj" Jan 05 20:44:05 crc kubenswrapper[4754]: I0105 20:44:05.686033 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fe969a28-4f43-4936-9859-b73513ec8a50-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zwtrj\" (UID: \"fe969a28-4f43-4936-9859-b73513ec8a50\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zwtrj" Jan 05 20:44:05 crc kubenswrapper[4754]: I0105 20:44:05.689631 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngfld\" (UniqueName: \"kubernetes.io/projected/fe969a28-4f43-4936-9859-b73513ec8a50-kube-api-access-ngfld\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zwtrj\" (UID: \"fe969a28-4f43-4936-9859-b73513ec8a50\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zwtrj" Jan 05 20:44:05 crc kubenswrapper[4754]: I0105 20:44:05.756169 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zwtrj" Jan 05 20:44:06 crc kubenswrapper[4754]: I0105 20:44:06.322486 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zwtrj"] Jan 05 20:44:06 crc kubenswrapper[4754]: I0105 20:44:06.345183 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zwtrj" event={"ID":"fe969a28-4f43-4936-9859-b73513ec8a50","Type":"ContainerStarted","Data":"928dd98c10e69af0d11604de81f2f3e869cb372e79e7dfef661a09d9fa5325d2"} Jan 05 20:44:07 crc kubenswrapper[4754]: I0105 20:44:07.360772 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zwtrj" event={"ID":"fe969a28-4f43-4936-9859-b73513ec8a50","Type":"ContainerStarted","Data":"583137635d3dfa6b760d2df27ae4646b21f8174b25708d66ab78a9a667e4fd10"} Jan 05 20:44:07 crc kubenswrapper[4754]: I0105 20:44:07.400153 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zwtrj" podStartSLOduration=1.9674226620000002 podStartE2EDuration="2.400121601s" podCreationTimestamp="2026-01-05 20:44:05 +0000 UTC" firstStartedPulling="2026-01-05 20:44:06.322021533 +0000 UTC m=+2333.031205407" lastFinishedPulling="2026-01-05 20:44:06.754720482 +0000 UTC m=+2333.463904346" observedRunningTime="2026-01-05 20:44:07.391825423 +0000 UTC m=+2334.101009327" watchObservedRunningTime="2026-01-05 20:44:07.400121601 +0000 UTC m=+2334.109305545" Jan 05 20:44:08 crc kubenswrapper[4754]: I0105 20:44:08.588467 4754 scope.go:117] "RemoveContainer" containerID="bf00dfd553c16afd4905db93af460a6a83ec719b48885233b6e6607dece47e05" Jan 05 20:44:08 crc kubenswrapper[4754]: E0105 20:44:08.589171 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:44:09 crc kubenswrapper[4754]: I0105 20:44:09.584502 4754 scope.go:117] "RemoveContainer" containerID="65b730c4aa36d59540ec344235231163e28f5f2bacefed511ced75dd9f9d3131" Jan 05 20:44:17 crc kubenswrapper[4754]: I0105 20:44:17.467579 4754 generic.go:334] "Generic (PLEG): container finished" podID="fe969a28-4f43-4936-9859-b73513ec8a50" containerID="583137635d3dfa6b760d2df27ae4646b21f8174b25708d66ab78a9a667e4fd10" exitCode=0 Jan 05 20:44:17 crc kubenswrapper[4754]: I0105 20:44:17.467654 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zwtrj" event={"ID":"fe969a28-4f43-4936-9859-b73513ec8a50","Type":"ContainerDied","Data":"583137635d3dfa6b760d2df27ae4646b21f8174b25708d66ab78a9a667e4fd10"} Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.080923 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zwtrj" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.163202 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngfld\" (UniqueName: \"kubernetes.io/projected/fe969a28-4f43-4936-9859-b73513ec8a50-kube-api-access-ngfld\") pod \"fe969a28-4f43-4936-9859-b73513ec8a50\" (UID: \"fe969a28-4f43-4936-9859-b73513ec8a50\") " Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.163647 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fe969a28-4f43-4936-9859-b73513ec8a50-ssh-key\") pod \"fe969a28-4f43-4936-9859-b73513ec8a50\" (UID: \"fe969a28-4f43-4936-9859-b73513ec8a50\") " Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.163689 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe969a28-4f43-4936-9859-b73513ec8a50-inventory\") pod \"fe969a28-4f43-4936-9859-b73513ec8a50\" (UID: \"fe969a28-4f43-4936-9859-b73513ec8a50\") " Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.168487 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe969a28-4f43-4936-9859-b73513ec8a50-kube-api-access-ngfld" (OuterVolumeSpecName: "kube-api-access-ngfld") pod "fe969a28-4f43-4936-9859-b73513ec8a50" (UID: "fe969a28-4f43-4936-9859-b73513ec8a50"). InnerVolumeSpecName "kube-api-access-ngfld". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.193050 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe969a28-4f43-4936-9859-b73513ec8a50-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "fe969a28-4f43-4936-9859-b73513ec8a50" (UID: "fe969a28-4f43-4936-9859-b73513ec8a50"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.215584 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe969a28-4f43-4936-9859-b73513ec8a50-inventory" (OuterVolumeSpecName: "inventory") pod "fe969a28-4f43-4936-9859-b73513ec8a50" (UID: "fe969a28-4f43-4936-9859-b73513ec8a50"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.267444 4754 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fe969a28-4f43-4936-9859-b73513ec8a50-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.267475 4754 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe969a28-4f43-4936-9859-b73513ec8a50-inventory\") on node \"crc\" DevicePath \"\"" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.267485 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngfld\" (UniqueName: \"kubernetes.io/projected/fe969a28-4f43-4936-9859-b73513ec8a50-kube-api-access-ngfld\") on node \"crc\" DevicePath \"\"" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.493942 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zwtrj" event={"ID":"fe969a28-4f43-4936-9859-b73513ec8a50","Type":"ContainerDied","Data":"928dd98c10e69af0d11604de81f2f3e869cb372e79e7dfef661a09d9fa5325d2"} Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.493983 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="928dd98c10e69af0d11604de81f2f3e869cb372e79e7dfef661a09d9fa5325d2" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.493998 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zwtrj" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.665866 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn"] Jan 05 20:44:19 crc kubenswrapper[4754]: E0105 20:44:19.666552 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe969a28-4f43-4936-9859-b73513ec8a50" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.666579 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe969a28-4f43-4936-9859-b73513ec8a50" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.666872 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe969a28-4f43-4936-9859-b73513ec8a50" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.667929 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.674117 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.674311 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xdt6h" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.674468 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.674574 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.674740 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.675400 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.675503 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.675653 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.674234 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.695747 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn"] Jan 05 20:44:19 crc 
kubenswrapper[4754]: I0105 20:44:19.781745 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-stjtn\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.781885 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b2149289-5801-4a15-a8b8-4aedc5dd32ed-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-stjtn\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.781921 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-stjtn\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.782137 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-stjtn\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 
20:44:19.782208 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b2149289-5801-4a15-a8b8-4aedc5dd32ed-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-stjtn\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.782266 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-stjtn\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.782421 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-stjtn\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.782519 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-stjtn\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.782551 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4smpx\" (UniqueName: \"kubernetes.io/projected/b2149289-5801-4a15-a8b8-4aedc5dd32ed-kube-api-access-4smpx\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-stjtn\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.782599 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-stjtn\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.782711 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b2149289-5801-4a15-a8b8-4aedc5dd32ed-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-stjtn\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.782741 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-stjtn\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.782858 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b2149289-5801-4a15-a8b8-4aedc5dd32ed-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-stjtn\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.782902 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b2149289-5801-4a15-a8b8-4aedc5dd32ed-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-stjtn\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.782931 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-stjtn\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.782957 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-stjtn\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" Jan 05 20:44:19 crc 
kubenswrapper[4754]: I0105 20:44:19.885437 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-stjtn\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.885599 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-stjtn\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.885714 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-stjtn\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.885777 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4smpx\" (UniqueName: \"kubernetes.io/projected/b2149289-5801-4a15-a8b8-4aedc5dd32ed-kube-api-access-4smpx\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-stjtn\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.885875 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-stjtn\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.885970 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b2149289-5801-4a15-a8b8-4aedc5dd32ed-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-stjtn\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.886079 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-stjtn\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.887023 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b2149289-5801-4a15-a8b8-4aedc5dd32ed-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-stjtn\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.887210 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/b2149289-5801-4a15-a8b8-4aedc5dd32ed-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-stjtn\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.887271 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-stjtn\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.887369 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-stjtn\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.887448 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-stjtn\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.887613 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/b2149289-5801-4a15-a8b8-4aedc5dd32ed-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-stjtn\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.887670 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-stjtn\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.887922 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-stjtn\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.887995 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b2149289-5801-4a15-a8b8-4aedc5dd32ed-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-stjtn\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.889633 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-neutron-metadata-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-stjtn\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.889710 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-stjtn\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.891917 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b2149289-5801-4a15-a8b8-4aedc5dd32ed-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-stjtn\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.892150 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b2149289-5801-4a15-a8b8-4aedc5dd32ed-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-stjtn\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.892576 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-libvirt-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-stjtn\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.893726 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-stjtn\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.893792 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-stjtn\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.894824 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-stjtn\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.895959 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-stjtn\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" Jan 05 20:44:19 crc 
kubenswrapper[4754]: I0105 20:44:19.895973 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b2149289-5801-4a15-a8b8-4aedc5dd32ed-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-stjtn\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.896145 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b2149289-5801-4a15-a8b8-4aedc5dd32ed-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-stjtn\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.896176 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-stjtn\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.897323 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-stjtn\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.897732 4754 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b2149289-5801-4a15-a8b8-4aedc5dd32ed-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-stjtn\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.898826 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-stjtn\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.906154 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4smpx\" (UniqueName: \"kubernetes.io/projected/b2149289-5801-4a15-a8b8-4aedc5dd32ed-kube-api-access-4smpx\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-stjtn\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" Jan 05 20:44:19 crc kubenswrapper[4754]: I0105 20:44:19.990865 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" Jan 05 20:44:20 crc kubenswrapper[4754]: I0105 20:44:20.603663 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn"] Jan 05 20:44:21 crc kubenswrapper[4754]: I0105 20:44:21.514737 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" event={"ID":"b2149289-5801-4a15-a8b8-4aedc5dd32ed","Type":"ContainerStarted","Data":"961732b1a33b08be4bffeddcb28216db971ecaac7dfc8265e80472d1c8eb789d"} Jan 05 20:44:21 crc kubenswrapper[4754]: I0105 20:44:21.515027 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" event={"ID":"b2149289-5801-4a15-a8b8-4aedc5dd32ed","Type":"ContainerStarted","Data":"44cf4b2692ca47fef6a9ccec170f7d9e47fdc63474b44c87a1780ec05b0108b6"} Jan 05 20:44:21 crc kubenswrapper[4754]: I0105 20:44:21.539764 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" podStartSLOduration=2.052471684 podStartE2EDuration="2.539747767s" podCreationTimestamp="2026-01-05 20:44:19 +0000 UTC" firstStartedPulling="2026-01-05 20:44:20.597506639 +0000 UTC m=+2347.306690513" lastFinishedPulling="2026-01-05 20:44:21.084782722 +0000 UTC m=+2347.793966596" observedRunningTime="2026-01-05 20:44:21.536459321 +0000 UTC m=+2348.245643195" watchObservedRunningTime="2026-01-05 20:44:21.539747767 +0000 UTC m=+2348.248931641" Jan 05 20:44:22 crc kubenswrapper[4754]: I0105 20:44:22.589118 4754 scope.go:117] "RemoveContainer" containerID="bf00dfd553c16afd4905db93af460a6a83ec719b48885233b6e6607dece47e05" Jan 05 20:44:22 crc kubenswrapper[4754]: E0105 20:44:22.589619 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:44:36 crc kubenswrapper[4754]: I0105 20:44:36.588357 4754 scope.go:117] "RemoveContainer" containerID="bf00dfd553c16afd4905db93af460a6a83ec719b48885233b6e6607dece47e05" Jan 05 20:44:36 crc kubenswrapper[4754]: E0105 20:44:36.589150 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:44:48 crc kubenswrapper[4754]: I0105 20:44:48.589447 4754 scope.go:117] "RemoveContainer" containerID="bf00dfd553c16afd4905db93af460a6a83ec719b48885233b6e6607dece47e05" Jan 05 20:44:48 crc kubenswrapper[4754]: E0105 20:44:48.590095 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:45:00 crc kubenswrapper[4754]: I0105 20:45:00.175285 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460765-gxk2q"] Jan 05 20:45:00 crc kubenswrapper[4754]: I0105 20:45:00.179279 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460765-gxk2q" Jan 05 20:45:00 crc kubenswrapper[4754]: I0105 20:45:00.182556 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 05 20:45:00 crc kubenswrapper[4754]: I0105 20:45:00.183525 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 05 20:45:00 crc kubenswrapper[4754]: I0105 20:45:00.212870 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460765-gxk2q"] Jan 05 20:45:00 crc kubenswrapper[4754]: I0105 20:45:00.306672 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/589bc1bf-4794-43a6-b5a5-a005272ca784-config-volume\") pod \"collect-profiles-29460765-gxk2q\" (UID: \"589bc1bf-4794-43a6-b5a5-a005272ca784\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460765-gxk2q" Jan 05 20:45:00 crc kubenswrapper[4754]: I0105 20:45:00.306781 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/589bc1bf-4794-43a6-b5a5-a005272ca784-secret-volume\") pod \"collect-profiles-29460765-gxk2q\" (UID: \"589bc1bf-4794-43a6-b5a5-a005272ca784\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460765-gxk2q" Jan 05 20:45:00 crc kubenswrapper[4754]: I0105 20:45:00.307016 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvr2s\" (UniqueName: \"kubernetes.io/projected/589bc1bf-4794-43a6-b5a5-a005272ca784-kube-api-access-gvr2s\") pod \"collect-profiles-29460765-gxk2q\" (UID: \"589bc1bf-4794-43a6-b5a5-a005272ca784\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29460765-gxk2q" Jan 05 20:45:00 crc kubenswrapper[4754]: I0105 20:45:00.411121 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvr2s\" (UniqueName: \"kubernetes.io/projected/589bc1bf-4794-43a6-b5a5-a005272ca784-kube-api-access-gvr2s\") pod \"collect-profiles-29460765-gxk2q\" (UID: \"589bc1bf-4794-43a6-b5a5-a005272ca784\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460765-gxk2q" Jan 05 20:45:00 crc kubenswrapper[4754]: I0105 20:45:00.411976 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/589bc1bf-4794-43a6-b5a5-a005272ca784-config-volume\") pod \"collect-profiles-29460765-gxk2q\" (UID: \"589bc1bf-4794-43a6-b5a5-a005272ca784\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460765-gxk2q" Jan 05 20:45:00 crc kubenswrapper[4754]: I0105 20:45:00.412231 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/589bc1bf-4794-43a6-b5a5-a005272ca784-secret-volume\") pod \"collect-profiles-29460765-gxk2q\" (UID: \"589bc1bf-4794-43a6-b5a5-a005272ca784\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460765-gxk2q" Jan 05 20:45:00 crc kubenswrapper[4754]: I0105 20:45:00.412861 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/589bc1bf-4794-43a6-b5a5-a005272ca784-config-volume\") pod \"collect-profiles-29460765-gxk2q\" (UID: \"589bc1bf-4794-43a6-b5a5-a005272ca784\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460765-gxk2q" Jan 05 20:45:00 crc kubenswrapper[4754]: I0105 20:45:00.424134 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/589bc1bf-4794-43a6-b5a5-a005272ca784-secret-volume\") pod \"collect-profiles-29460765-gxk2q\" (UID: \"589bc1bf-4794-43a6-b5a5-a005272ca784\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460765-gxk2q" Jan 05 20:45:00 crc kubenswrapper[4754]: I0105 20:45:00.439229 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvr2s\" (UniqueName: \"kubernetes.io/projected/589bc1bf-4794-43a6-b5a5-a005272ca784-kube-api-access-gvr2s\") pod \"collect-profiles-29460765-gxk2q\" (UID: \"589bc1bf-4794-43a6-b5a5-a005272ca784\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460765-gxk2q" Jan 05 20:45:00 crc kubenswrapper[4754]: I0105 20:45:00.530225 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460765-gxk2q" Jan 05 20:45:01 crc kubenswrapper[4754]: I0105 20:45:01.063681 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460765-gxk2q"] Jan 05 20:45:01 crc kubenswrapper[4754]: I0105 20:45:01.991512 4754 generic.go:334] "Generic (PLEG): container finished" podID="589bc1bf-4794-43a6-b5a5-a005272ca784" containerID="f0c63e7bd8c035c7396b50e76a47af2188657d323ef8302cbd519808b7303a5e" exitCode=0 Jan 05 20:45:01 crc kubenswrapper[4754]: I0105 20:45:01.991663 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460765-gxk2q" event={"ID":"589bc1bf-4794-43a6-b5a5-a005272ca784","Type":"ContainerDied","Data":"f0c63e7bd8c035c7396b50e76a47af2188657d323ef8302cbd519808b7303a5e"} Jan 05 20:45:01 crc kubenswrapper[4754]: I0105 20:45:01.991842 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460765-gxk2q" 
event={"ID":"589bc1bf-4794-43a6-b5a5-a005272ca784","Type":"ContainerStarted","Data":"2745658dd6a65f4f033b414acae188e29aec348856aa5a12c1003b28249bd372"} Jan 05 20:45:02 crc kubenswrapper[4754]: I0105 20:45:02.598380 4754 scope.go:117] "RemoveContainer" containerID="bf00dfd553c16afd4905db93af460a6a83ec719b48885233b6e6607dece47e05" Jan 05 20:45:02 crc kubenswrapper[4754]: E0105 20:45:02.613085 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:45:03 crc kubenswrapper[4754]: I0105 20:45:03.479549 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460765-gxk2q" Jan 05 20:45:03 crc kubenswrapper[4754]: I0105 20:45:03.627053 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvr2s\" (UniqueName: \"kubernetes.io/projected/589bc1bf-4794-43a6-b5a5-a005272ca784-kube-api-access-gvr2s\") pod \"589bc1bf-4794-43a6-b5a5-a005272ca784\" (UID: \"589bc1bf-4794-43a6-b5a5-a005272ca784\") " Jan 05 20:45:03 crc kubenswrapper[4754]: I0105 20:45:03.627475 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/589bc1bf-4794-43a6-b5a5-a005272ca784-config-volume\") pod \"589bc1bf-4794-43a6-b5a5-a005272ca784\" (UID: \"589bc1bf-4794-43a6-b5a5-a005272ca784\") " Jan 05 20:45:03 crc kubenswrapper[4754]: I0105 20:45:03.627676 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/589bc1bf-4794-43a6-b5a5-a005272ca784-secret-volume\") pod \"589bc1bf-4794-43a6-b5a5-a005272ca784\" (UID: \"589bc1bf-4794-43a6-b5a5-a005272ca784\") " Jan 05 20:45:03 crc kubenswrapper[4754]: I0105 20:45:03.629228 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/589bc1bf-4794-43a6-b5a5-a005272ca784-config-volume" (OuterVolumeSpecName: "config-volume") pod "589bc1bf-4794-43a6-b5a5-a005272ca784" (UID: "589bc1bf-4794-43a6-b5a5-a005272ca784"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:45:03 crc kubenswrapper[4754]: I0105 20:45:03.636023 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/589bc1bf-4794-43a6-b5a5-a005272ca784-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "589bc1bf-4794-43a6-b5a5-a005272ca784" (UID: "589bc1bf-4794-43a6-b5a5-a005272ca784"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:45:03 crc kubenswrapper[4754]: I0105 20:45:03.636074 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/589bc1bf-4794-43a6-b5a5-a005272ca784-kube-api-access-gvr2s" (OuterVolumeSpecName: "kube-api-access-gvr2s") pod "589bc1bf-4794-43a6-b5a5-a005272ca784" (UID: "589bc1bf-4794-43a6-b5a5-a005272ca784"). InnerVolumeSpecName "kube-api-access-gvr2s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:45:03 crc kubenswrapper[4754]: I0105 20:45:03.733943 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvr2s\" (UniqueName: \"kubernetes.io/projected/589bc1bf-4794-43a6-b5a5-a005272ca784-kube-api-access-gvr2s\") on node \"crc\" DevicePath \"\"" Jan 05 20:45:03 crc kubenswrapper[4754]: I0105 20:45:03.734080 4754 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/589bc1bf-4794-43a6-b5a5-a005272ca784-config-volume\") on node \"crc\" DevicePath \"\"" Jan 05 20:45:03 crc kubenswrapper[4754]: I0105 20:45:03.734111 4754 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/589bc1bf-4794-43a6-b5a5-a005272ca784-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 05 20:45:04 crc kubenswrapper[4754]: I0105 20:45:04.022804 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460765-gxk2q" event={"ID":"589bc1bf-4794-43a6-b5a5-a005272ca784","Type":"ContainerDied","Data":"2745658dd6a65f4f033b414acae188e29aec348856aa5a12c1003b28249bd372"} Jan 05 20:45:04 crc kubenswrapper[4754]: I0105 20:45:04.022853 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460765-gxk2q" Jan 05 20:45:04 crc kubenswrapper[4754]: I0105 20:45:04.022862 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2745658dd6a65f4f033b414acae188e29aec348856aa5a12c1003b28249bd372" Jan 05 20:45:04 crc kubenswrapper[4754]: I0105 20:45:04.565412 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460720-k78dc"] Jan 05 20:45:04 crc kubenswrapper[4754]: I0105 20:45:04.575835 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460720-k78dc"] Jan 05 20:45:05 crc kubenswrapper[4754]: I0105 20:45:05.614813 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e4b1718-5732-4498-b355-25832e158871" path="/var/lib/kubelet/pods/7e4b1718-5732-4498-b355-25832e158871/volumes" Jan 05 20:45:09 crc kubenswrapper[4754]: I0105 20:45:09.663341 4754 scope.go:117] "RemoveContainer" containerID="76e29dd1ef0898e4411164913182422e55947e5cfa37f54bec2aa41091764da8" Jan 05 20:45:16 crc kubenswrapper[4754]: I0105 20:45:16.163040 4754 generic.go:334] "Generic (PLEG): container finished" podID="b2149289-5801-4a15-a8b8-4aedc5dd32ed" containerID="961732b1a33b08be4bffeddcb28216db971ecaac7dfc8265e80472d1c8eb789d" exitCode=0 Jan 05 20:45:16 crc kubenswrapper[4754]: I0105 20:45:16.163188 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" event={"ID":"b2149289-5801-4a15-a8b8-4aedc5dd32ed","Type":"ContainerDied","Data":"961732b1a33b08be4bffeddcb28216db971ecaac7dfc8265e80472d1c8eb789d"} Jan 05 20:45:16 crc kubenswrapper[4754]: I0105 20:45:16.589792 4754 scope.go:117] "RemoveContainer" containerID="bf00dfd553c16afd4905db93af460a6a83ec719b48885233b6e6607dece47e05" Jan 05 20:45:16 crc kubenswrapper[4754]: E0105 20:45:16.590510 4754 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:45:17 crc kubenswrapper[4754]: I0105 20:45:17.672978 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" Jan 05 20:45:17 crc kubenswrapper[4754]: I0105 20:45:17.743015 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4smpx\" (UniqueName: \"kubernetes.io/projected/b2149289-5801-4a15-a8b8-4aedc5dd32ed-kube-api-access-4smpx\") pod \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " Jan 05 20:45:17 crc kubenswrapper[4754]: I0105 20:45:17.743088 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-ssh-key\") pod \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " Jan 05 20:45:17 crc kubenswrapper[4754]: I0105 20:45:17.743165 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b2149289-5801-4a15-a8b8-4aedc5dd32ed-openstack-edpm-ipam-ovn-default-certs-0\") pod \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " Jan 05 20:45:17 crc kubenswrapper[4754]: I0105 20:45:17.743214 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-neutron-metadata-combined-ca-bundle\") pod \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " Jan 05 20:45:17 crc kubenswrapper[4754]: I0105 20:45:17.743276 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-ovn-combined-ca-bundle\") pod \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " Jan 05 20:45:17 crc kubenswrapper[4754]: I0105 20:45:17.743330 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-libvirt-combined-ca-bundle\") pod \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " Jan 05 20:45:17 crc kubenswrapper[4754]: I0105 20:45:17.743368 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-nova-combined-ca-bundle\") pod \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " Jan 05 20:45:17 crc kubenswrapper[4754]: I0105 20:45:17.743398 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b2149289-5801-4a15-a8b8-4aedc5dd32ed-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " Jan 05 20:45:17 crc kubenswrapper[4754]: I0105 20:45:17.743437 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/b2149289-5801-4a15-a8b8-4aedc5dd32ed-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " Jan 05 20:45:17 crc kubenswrapper[4754]: I0105 20:45:17.743473 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-repo-setup-combined-ca-bundle\") pod \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " Jan 05 20:45:17 crc kubenswrapper[4754]: I0105 20:45:17.743496 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b2149289-5801-4a15-a8b8-4aedc5dd32ed-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " Jan 05 20:45:17 crc kubenswrapper[4754]: I0105 20:45:17.743580 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-inventory\") pod \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " Jan 05 20:45:17 crc kubenswrapper[4754]: I0105 20:45:17.743644 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-telemetry-combined-ca-bundle\") pod \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " Jan 05 20:45:17 crc kubenswrapper[4754]: I0105 20:45:17.743697 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-telemetry-power-monitoring-combined-ca-bundle\") pod \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " Jan 05 20:45:17 crc kubenswrapper[4754]: I0105 20:45:17.743741 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-bootstrap-combined-ca-bundle\") pod \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " Jan 05 20:45:17 crc kubenswrapper[4754]: I0105 20:45:17.743773 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b2149289-5801-4a15-a8b8-4aedc5dd32ed-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\" (UID: \"b2149289-5801-4a15-a8b8-4aedc5dd32ed\") " Jan 05 20:45:17 crc kubenswrapper[4754]: I0105 20:45:17.753813 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "b2149289-5801-4a15-a8b8-4aedc5dd32ed" (UID: "b2149289-5801-4a15-a8b8-4aedc5dd32ed"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:45:17 crc kubenswrapper[4754]: I0105 20:45:17.754331 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2149289-5801-4a15-a8b8-4aedc5dd32ed-kube-api-access-4smpx" (OuterVolumeSpecName: "kube-api-access-4smpx") pod "b2149289-5801-4a15-a8b8-4aedc5dd32ed" (UID: "b2149289-5801-4a15-a8b8-4aedc5dd32ed"). InnerVolumeSpecName "kube-api-access-4smpx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:45:17 crc kubenswrapper[4754]: I0105 20:45:17.756145 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2149289-5801-4a15-a8b8-4aedc5dd32ed-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "b2149289-5801-4a15-a8b8-4aedc5dd32ed" (UID: "b2149289-5801-4a15-a8b8-4aedc5dd32ed"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:45:17 crc kubenswrapper[4754]: I0105 20:45:17.757620 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "b2149289-5801-4a15-a8b8-4aedc5dd32ed" (UID: "b2149289-5801-4a15-a8b8-4aedc5dd32ed"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:45:17 crc kubenswrapper[4754]: I0105 20:45:17.757964 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "b2149289-5801-4a15-a8b8-4aedc5dd32ed" (UID: "b2149289-5801-4a15-a8b8-4aedc5dd32ed"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:45:17 crc kubenswrapper[4754]: I0105 20:45:17.759752 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2149289-5801-4a15-a8b8-4aedc5dd32ed-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "b2149289-5801-4a15-a8b8-4aedc5dd32ed" (UID: "b2149289-5801-4a15-a8b8-4aedc5dd32ed"). 
InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:45:17 crc kubenswrapper[4754]: I0105 20:45:17.759865 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2149289-5801-4a15-a8b8-4aedc5dd32ed-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0") pod "b2149289-5801-4a15-a8b8-4aedc5dd32ed" (UID: "b2149289-5801-4a15-a8b8-4aedc5dd32ed"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:45:17 crc kubenswrapper[4754]: I0105 20:45:17.761112 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "b2149289-5801-4a15-a8b8-4aedc5dd32ed" (UID: "b2149289-5801-4a15-a8b8-4aedc5dd32ed"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:45:17 crc kubenswrapper[4754]: I0105 20:45:17.763962 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "b2149289-5801-4a15-a8b8-4aedc5dd32ed" (UID: "b2149289-5801-4a15-a8b8-4aedc5dd32ed"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:45:17 crc kubenswrapper[4754]: I0105 20:45:17.764349 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "b2149289-5801-4a15-a8b8-4aedc5dd32ed" (UID: "b2149289-5801-4a15-a8b8-4aedc5dd32ed"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:45:17 crc kubenswrapper[4754]: I0105 20:45:17.767580 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2149289-5801-4a15-a8b8-4aedc5dd32ed-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "b2149289-5801-4a15-a8b8-4aedc5dd32ed" (UID: "b2149289-5801-4a15-a8b8-4aedc5dd32ed"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:45:17 crc kubenswrapper[4754]: I0105 20:45:17.767740 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "b2149289-5801-4a15-a8b8-4aedc5dd32ed" (UID: "b2149289-5801-4a15-a8b8-4aedc5dd32ed"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:45:17 crc kubenswrapper[4754]: I0105 20:45:17.770072 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2149289-5801-4a15-a8b8-4aedc5dd32ed-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "b2149289-5801-4a15-a8b8-4aedc5dd32ed" (UID: "b2149289-5801-4a15-a8b8-4aedc5dd32ed"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:45:17 crc kubenswrapper[4754]: I0105 20:45:17.774199 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "b2149289-5801-4a15-a8b8-4aedc5dd32ed" (UID: "b2149289-5801-4a15-a8b8-4aedc5dd32ed"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:45:17 crc kubenswrapper[4754]: I0105 20:45:17.790975 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-inventory" (OuterVolumeSpecName: "inventory") pod "b2149289-5801-4a15-a8b8-4aedc5dd32ed" (UID: "b2149289-5801-4a15-a8b8-4aedc5dd32ed"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:45:17 crc kubenswrapper[4754]: I0105 20:45:17.812423 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b2149289-5801-4a15-a8b8-4aedc5dd32ed" (UID: "b2149289-5801-4a15-a8b8-4aedc5dd32ed"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:45:17 crc kubenswrapper[4754]: I0105 20:45:17.847505 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4smpx\" (UniqueName: \"kubernetes.io/projected/b2149289-5801-4a15-a8b8-4aedc5dd32ed-kube-api-access-4smpx\") on node \"crc\" DevicePath \"\"" Jan 05 20:45:17 crc kubenswrapper[4754]: I0105 20:45:17.847578 4754 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 05 20:45:17 crc kubenswrapper[4754]: I0105 20:45:17.847635 4754 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b2149289-5801-4a15-a8b8-4aedc5dd32ed-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 05 20:45:17 crc kubenswrapper[4754]: I0105 20:45:17.847649 4754 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:45:17 crc kubenswrapper[4754]: I0105 20:45:17.847663 4754 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:45:17 crc kubenswrapper[4754]: I0105 20:45:17.847710 4754 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:45:17 crc kubenswrapper[4754]: I0105 20:45:17.847722 4754 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:45:17 crc kubenswrapper[4754]: I0105 20:45:17.847734 4754 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b2149289-5801-4a15-a8b8-4aedc5dd32ed-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 05 20:45:17 crc kubenswrapper[4754]: I0105 20:45:17.847748 4754 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b2149289-5801-4a15-a8b8-4aedc5dd32ed-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 05 20:45:17 crc kubenswrapper[4754]: I0105 20:45:17.847785 4754 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:45:17 crc kubenswrapper[4754]: I0105 20:45:17.847818 4754 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b2149289-5801-4a15-a8b8-4aedc5dd32ed-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 05 20:45:17 crc kubenswrapper[4754]: I0105 20:45:17.847834 4754 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-inventory\") on node \"crc\" DevicePath \"\"" Jan 05 20:45:17 crc kubenswrapper[4754]: I0105 20:45:17.847912 4754 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:45:17 
crc kubenswrapper[4754]: I0105 20:45:17.847947 4754 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:45:17 crc kubenswrapper[4754]: I0105 20:45:17.847961 4754 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2149289-5801-4a15-a8b8-4aedc5dd32ed-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:45:17 crc kubenswrapper[4754]: I0105 20:45:17.847974 4754 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b2149289-5801-4a15-a8b8-4aedc5dd32ed-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 05 20:45:18 crc kubenswrapper[4754]: I0105 20:45:18.195081 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" event={"ID":"b2149289-5801-4a15-a8b8-4aedc5dd32ed","Type":"ContainerDied","Data":"44cf4b2692ca47fef6a9ccec170f7d9e47fdc63474b44c87a1780ec05b0108b6"} Jan 05 20:45:18 crc kubenswrapper[4754]: I0105 20:45:18.195128 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44cf4b2692ca47fef6a9ccec170f7d9e47fdc63474b44c87a1780ec05b0108b6" Jan 05 20:45:18 crc kubenswrapper[4754]: I0105 20:45:18.195143 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-stjtn" Jan 05 20:45:18 crc kubenswrapper[4754]: I0105 20:45:18.346422 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-cpvn7"] Jan 05 20:45:18 crc kubenswrapper[4754]: E0105 20:45:18.346926 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="589bc1bf-4794-43a6-b5a5-a005272ca784" containerName="collect-profiles" Jan 05 20:45:18 crc kubenswrapper[4754]: I0105 20:45:18.346947 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="589bc1bf-4794-43a6-b5a5-a005272ca784" containerName="collect-profiles" Jan 05 20:45:18 crc kubenswrapper[4754]: E0105 20:45:18.347005 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2149289-5801-4a15-a8b8-4aedc5dd32ed" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 05 20:45:18 crc kubenswrapper[4754]: I0105 20:45:18.347018 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2149289-5801-4a15-a8b8-4aedc5dd32ed" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 05 20:45:18 crc kubenswrapper[4754]: I0105 20:45:18.347309 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="589bc1bf-4794-43a6-b5a5-a005272ca784" containerName="collect-profiles" Jan 05 20:45:18 crc kubenswrapper[4754]: I0105 20:45:18.347347 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2149289-5801-4a15-a8b8-4aedc5dd32ed" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 05 20:45:18 crc kubenswrapper[4754]: I0105 20:45:18.348164 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cpvn7" Jan 05 20:45:18 crc kubenswrapper[4754]: I0105 20:45:18.352681 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 05 20:45:18 crc kubenswrapper[4754]: I0105 20:45:18.352924 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 05 20:45:18 crc kubenswrapper[4754]: I0105 20:45:18.354282 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 05 20:45:18 crc kubenswrapper[4754]: I0105 20:45:18.354907 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xdt6h" Jan 05 20:45:18 crc kubenswrapper[4754]: I0105 20:45:18.354966 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 05 20:45:18 crc kubenswrapper[4754]: I0105 20:45:18.378845 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-cpvn7"] Jan 05 20:45:18 crc kubenswrapper[4754]: I0105 20:45:18.441223 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-w4hff"] Jan 05 20:45:18 crc kubenswrapper[4754]: I0105 20:45:18.443636 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w4hff" Jan 05 20:45:18 crc kubenswrapper[4754]: I0105 20:45:18.474413 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aec60a10-f518-45a1-8b13-375943e8ff65-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cpvn7\" (UID: \"aec60a10-f518-45a1-8b13-375943e8ff65\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cpvn7" Jan 05 20:45:18 crc kubenswrapper[4754]: I0105 20:45:18.474486 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjj9h\" (UniqueName: \"kubernetes.io/projected/aec60a10-f518-45a1-8b13-375943e8ff65-kube-api-access-sjj9h\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cpvn7\" (UID: \"aec60a10-f518-45a1-8b13-375943e8ff65\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cpvn7" Jan 05 20:45:18 crc kubenswrapper[4754]: I0105 20:45:18.474517 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aec60a10-f518-45a1-8b13-375943e8ff65-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cpvn7\" (UID: \"aec60a10-f518-45a1-8b13-375943e8ff65\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cpvn7" Jan 05 20:45:18 crc kubenswrapper[4754]: I0105 20:45:18.474616 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/aec60a10-f518-45a1-8b13-375943e8ff65-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cpvn7\" (UID: \"aec60a10-f518-45a1-8b13-375943e8ff65\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cpvn7" Jan 05 20:45:18 crc kubenswrapper[4754]: I0105 20:45:18.474685 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aec60a10-f518-45a1-8b13-375943e8ff65-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cpvn7\" (UID: \"aec60a10-f518-45a1-8b13-375943e8ff65\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cpvn7" Jan 05 20:45:18 crc kubenswrapper[4754]: I0105 20:45:18.548353 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w4hff"] Jan 05 20:45:18 crc kubenswrapper[4754]: I0105 20:45:18.581848 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aec60a10-f518-45a1-8b13-375943e8ff65-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cpvn7\" (UID: \"aec60a10-f518-45a1-8b13-375943e8ff65\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cpvn7" Jan 05 20:45:18 crc kubenswrapper[4754]: I0105 20:45:18.582011 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aec60a10-f518-45a1-8b13-375943e8ff65-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cpvn7\" (UID: \"aec60a10-f518-45a1-8b13-375943e8ff65\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cpvn7" Jan 05 20:45:18 crc kubenswrapper[4754]: I0105 20:45:18.582056 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3bebe33-e2b3-4240-bdbb-5974c70fa62f-utilities\") pod \"certified-operators-w4hff\" (UID: \"b3bebe33-e2b3-4240-bdbb-5974c70fa62f\") " pod="openshift-marketplace/certified-operators-w4hff" Jan 05 20:45:18 crc kubenswrapper[4754]: I0105 20:45:18.582090 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjj9h\" (UniqueName: \"kubernetes.io/projected/aec60a10-f518-45a1-8b13-375943e8ff65-kube-api-access-sjj9h\") 
pod \"ovn-edpm-deployment-openstack-edpm-ipam-cpvn7\" (UID: \"aec60a10-f518-45a1-8b13-375943e8ff65\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cpvn7" Jan 05 20:45:18 crc kubenswrapper[4754]: I0105 20:45:18.582135 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aec60a10-f518-45a1-8b13-375943e8ff65-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cpvn7\" (UID: \"aec60a10-f518-45a1-8b13-375943e8ff65\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cpvn7" Jan 05 20:45:18 crc kubenswrapper[4754]: I0105 20:45:18.582235 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz5nv\" (UniqueName: \"kubernetes.io/projected/b3bebe33-e2b3-4240-bdbb-5974c70fa62f-kube-api-access-sz5nv\") pod \"certified-operators-w4hff\" (UID: \"b3bebe33-e2b3-4240-bdbb-5974c70fa62f\") " pod="openshift-marketplace/certified-operators-w4hff" Jan 05 20:45:18 crc kubenswrapper[4754]: I0105 20:45:18.582270 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3bebe33-e2b3-4240-bdbb-5974c70fa62f-catalog-content\") pod \"certified-operators-w4hff\" (UID: \"b3bebe33-e2b3-4240-bdbb-5974c70fa62f\") " pod="openshift-marketplace/certified-operators-w4hff" Jan 05 20:45:18 crc kubenswrapper[4754]: I0105 20:45:18.582318 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/aec60a10-f518-45a1-8b13-375943e8ff65-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cpvn7\" (UID: \"aec60a10-f518-45a1-8b13-375943e8ff65\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cpvn7" Jan 05 20:45:18 crc kubenswrapper[4754]: I0105 20:45:18.583230 4754 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/aec60a10-f518-45a1-8b13-375943e8ff65-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cpvn7\" (UID: \"aec60a10-f518-45a1-8b13-375943e8ff65\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cpvn7" Jan 05 20:45:18 crc kubenswrapper[4754]: I0105 20:45:18.616138 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aec60a10-f518-45a1-8b13-375943e8ff65-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cpvn7\" (UID: \"aec60a10-f518-45a1-8b13-375943e8ff65\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cpvn7" Jan 05 20:45:18 crc kubenswrapper[4754]: I0105 20:45:18.630871 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aec60a10-f518-45a1-8b13-375943e8ff65-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cpvn7\" (UID: \"aec60a10-f518-45a1-8b13-375943e8ff65\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cpvn7" Jan 05 20:45:18 crc kubenswrapper[4754]: I0105 20:45:18.631553 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aec60a10-f518-45a1-8b13-375943e8ff65-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cpvn7\" (UID: \"aec60a10-f518-45a1-8b13-375943e8ff65\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cpvn7" Jan 05 20:45:18 crc kubenswrapper[4754]: I0105 20:45:18.635039 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjj9h\" (UniqueName: \"kubernetes.io/projected/aec60a10-f518-45a1-8b13-375943e8ff65-kube-api-access-sjj9h\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cpvn7\" (UID: \"aec60a10-f518-45a1-8b13-375943e8ff65\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cpvn7" Jan 05 20:45:18 crc 
kubenswrapper[4754]: I0105 20:45:18.665964 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cpvn7" Jan 05 20:45:18 crc kubenswrapper[4754]: I0105 20:45:18.687676 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz5nv\" (UniqueName: \"kubernetes.io/projected/b3bebe33-e2b3-4240-bdbb-5974c70fa62f-kube-api-access-sz5nv\") pod \"certified-operators-w4hff\" (UID: \"b3bebe33-e2b3-4240-bdbb-5974c70fa62f\") " pod="openshift-marketplace/certified-operators-w4hff" Jan 05 20:45:18 crc kubenswrapper[4754]: I0105 20:45:18.687741 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3bebe33-e2b3-4240-bdbb-5974c70fa62f-catalog-content\") pod \"certified-operators-w4hff\" (UID: \"b3bebe33-e2b3-4240-bdbb-5974c70fa62f\") " pod="openshift-marketplace/certified-operators-w4hff" Jan 05 20:45:18 crc kubenswrapper[4754]: I0105 20:45:18.687943 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3bebe33-e2b3-4240-bdbb-5974c70fa62f-utilities\") pod \"certified-operators-w4hff\" (UID: \"b3bebe33-e2b3-4240-bdbb-5974c70fa62f\") " pod="openshift-marketplace/certified-operators-w4hff" Jan 05 20:45:18 crc kubenswrapper[4754]: I0105 20:45:18.688802 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3bebe33-e2b3-4240-bdbb-5974c70fa62f-utilities\") pod \"certified-operators-w4hff\" (UID: \"b3bebe33-e2b3-4240-bdbb-5974c70fa62f\") " pod="openshift-marketplace/certified-operators-w4hff" Jan 05 20:45:18 crc kubenswrapper[4754]: I0105 20:45:18.690248 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3bebe33-e2b3-4240-bdbb-5974c70fa62f-catalog-content\") 
pod \"certified-operators-w4hff\" (UID: \"b3bebe33-e2b3-4240-bdbb-5974c70fa62f\") " pod="openshift-marketplace/certified-operators-w4hff" Jan 05 20:45:18 crc kubenswrapper[4754]: I0105 20:45:18.719483 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz5nv\" (UniqueName: \"kubernetes.io/projected/b3bebe33-e2b3-4240-bdbb-5974c70fa62f-kube-api-access-sz5nv\") pod \"certified-operators-w4hff\" (UID: \"b3bebe33-e2b3-4240-bdbb-5974c70fa62f\") " pod="openshift-marketplace/certified-operators-w4hff" Jan 05 20:45:18 crc kubenswrapper[4754]: I0105 20:45:18.797215 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w4hff" Jan 05 20:45:19 crc kubenswrapper[4754]: I0105 20:45:19.394928 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w4hff"] Jan 05 20:45:19 crc kubenswrapper[4754]: W0105 20:45:19.520269 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaec60a10_f518_45a1_8b13_375943e8ff65.slice/crio-dd7a0a0c98800330148c92e759eaf0203e34f9394c978006c2cdc28b430a1a5e WatchSource:0}: Error finding container dd7a0a0c98800330148c92e759eaf0203e34f9394c978006c2cdc28b430a1a5e: Status 404 returned error can't find the container with id dd7a0a0c98800330148c92e759eaf0203e34f9394c978006c2cdc28b430a1a5e Jan 05 20:45:19 crc kubenswrapper[4754]: I0105 20:45:19.527411 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-cpvn7"] Jan 05 20:45:20 crc kubenswrapper[4754]: I0105 20:45:20.230882 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cpvn7" event={"ID":"aec60a10-f518-45a1-8b13-375943e8ff65","Type":"ContainerStarted","Data":"dd7a0a0c98800330148c92e759eaf0203e34f9394c978006c2cdc28b430a1a5e"} Jan 05 20:45:20 crc kubenswrapper[4754]: 
I0105 20:45:20.234681 4754 generic.go:334] "Generic (PLEG): container finished" podID="b3bebe33-e2b3-4240-bdbb-5974c70fa62f" containerID="89d5f68244c14d20e9407f5c8283ca13e62c09d3c502b1506a576bd4fc8bc9b5" exitCode=0 Jan 05 20:45:20 crc kubenswrapper[4754]: I0105 20:45:20.234728 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w4hff" event={"ID":"b3bebe33-e2b3-4240-bdbb-5974c70fa62f","Type":"ContainerDied","Data":"89d5f68244c14d20e9407f5c8283ca13e62c09d3c502b1506a576bd4fc8bc9b5"} Jan 05 20:45:20 crc kubenswrapper[4754]: I0105 20:45:20.234940 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w4hff" event={"ID":"b3bebe33-e2b3-4240-bdbb-5974c70fa62f","Type":"ContainerStarted","Data":"90793aa859ff2d11e04f3d255efbbb028e06c6f7c1eb03274e1c39285c3cfae8"} Jan 05 20:45:21 crc kubenswrapper[4754]: I0105 20:45:21.259342 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cpvn7" event={"ID":"aec60a10-f518-45a1-8b13-375943e8ff65","Type":"ContainerStarted","Data":"adb6c09e190ca41c3585c523af095ead71f6bff00edcd5c195d4532d6f5f87f3"} Jan 05 20:45:21 crc kubenswrapper[4754]: I0105 20:45:21.293158 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cpvn7" podStartSLOduration=2.680149113 podStartE2EDuration="3.293106189s" podCreationTimestamp="2026-01-05 20:45:18 +0000 UTC" firstStartedPulling="2026-01-05 20:45:19.522724651 +0000 UTC m=+2406.231908525" lastFinishedPulling="2026-01-05 20:45:20.135681687 +0000 UTC m=+2406.844865601" observedRunningTime="2026-01-05 20:45:21.281791841 +0000 UTC m=+2407.990975735" watchObservedRunningTime="2026-01-05 20:45:21.293106189 +0000 UTC m=+2408.002290073" Jan 05 20:45:22 crc kubenswrapper[4754]: I0105 20:45:22.276992 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-w4hff" event={"ID":"b3bebe33-e2b3-4240-bdbb-5974c70fa62f","Type":"ContainerStarted","Data":"6bac39f85db59148532152dfa177f54bf996d7450ba22e7601896c66271c79d3"} Jan 05 20:45:23 crc kubenswrapper[4754]: I0105 20:45:23.293745 4754 generic.go:334] "Generic (PLEG): container finished" podID="b3bebe33-e2b3-4240-bdbb-5974c70fa62f" containerID="6bac39f85db59148532152dfa177f54bf996d7450ba22e7601896c66271c79d3" exitCode=0 Jan 05 20:45:23 crc kubenswrapper[4754]: I0105 20:45:23.293809 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w4hff" event={"ID":"b3bebe33-e2b3-4240-bdbb-5974c70fa62f","Type":"ContainerDied","Data":"6bac39f85db59148532152dfa177f54bf996d7450ba22e7601896c66271c79d3"} Jan 05 20:45:24 crc kubenswrapper[4754]: I0105 20:45:24.306452 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w4hff" event={"ID":"b3bebe33-e2b3-4240-bdbb-5974c70fa62f","Type":"ContainerStarted","Data":"5a742f18aea7b64b4b235013d304c919639d0f6967bb552f81e812e021f7be4d"} Jan 05 20:45:24 crc kubenswrapper[4754]: I0105 20:45:24.342668 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-w4hff" podStartSLOduration=2.774298096 podStartE2EDuration="6.342645757s" podCreationTimestamp="2026-01-05 20:45:18 +0000 UTC" firstStartedPulling="2026-01-05 20:45:20.239005401 +0000 UTC m=+2406.948189275" lastFinishedPulling="2026-01-05 20:45:23.807353032 +0000 UTC m=+2410.516536936" observedRunningTime="2026-01-05 20:45:24.329721937 +0000 UTC m=+2411.038905821" watchObservedRunningTime="2026-01-05 20:45:24.342645757 +0000 UTC m=+2411.051829651" Jan 05 20:45:28 crc kubenswrapper[4754]: I0105 20:45:28.798428 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-w4hff" Jan 05 20:45:28 crc kubenswrapper[4754]: I0105 20:45:28.799204 4754 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w4hff" Jan 05 20:45:28 crc kubenswrapper[4754]: I0105 20:45:28.866023 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-w4hff" Jan 05 20:45:29 crc kubenswrapper[4754]: I0105 20:45:29.450611 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-w4hff" Jan 05 20:45:29 crc kubenswrapper[4754]: I0105 20:45:29.508716 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w4hff"] Jan 05 20:45:29 crc kubenswrapper[4754]: I0105 20:45:29.590364 4754 scope.go:117] "RemoveContainer" containerID="bf00dfd553c16afd4905db93af460a6a83ec719b48885233b6e6607dece47e05" Jan 05 20:45:29 crc kubenswrapper[4754]: E0105 20:45:29.592643 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:45:31 crc kubenswrapper[4754]: I0105 20:45:31.389147 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-w4hff" podUID="b3bebe33-e2b3-4240-bdbb-5974c70fa62f" containerName="registry-server" containerID="cri-o://5a742f18aea7b64b4b235013d304c919639d0f6967bb552f81e812e021f7be4d" gracePeriod=2 Jan 05 20:45:31 crc kubenswrapper[4754]: I0105 20:45:31.982982 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w4hff" Jan 05 20:45:32 crc kubenswrapper[4754]: I0105 20:45:32.069751 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sz5nv\" (UniqueName: \"kubernetes.io/projected/b3bebe33-e2b3-4240-bdbb-5974c70fa62f-kube-api-access-sz5nv\") pod \"b3bebe33-e2b3-4240-bdbb-5974c70fa62f\" (UID: \"b3bebe33-e2b3-4240-bdbb-5974c70fa62f\") " Jan 05 20:45:32 crc kubenswrapper[4754]: I0105 20:45:32.070224 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3bebe33-e2b3-4240-bdbb-5974c70fa62f-catalog-content\") pod \"b3bebe33-e2b3-4240-bdbb-5974c70fa62f\" (UID: \"b3bebe33-e2b3-4240-bdbb-5974c70fa62f\") " Jan 05 20:45:32 crc kubenswrapper[4754]: I0105 20:45:32.070334 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3bebe33-e2b3-4240-bdbb-5974c70fa62f-utilities\") pod \"b3bebe33-e2b3-4240-bdbb-5974c70fa62f\" (UID: \"b3bebe33-e2b3-4240-bdbb-5974c70fa62f\") " Jan 05 20:45:32 crc kubenswrapper[4754]: I0105 20:45:32.071430 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3bebe33-e2b3-4240-bdbb-5974c70fa62f-utilities" (OuterVolumeSpecName: "utilities") pod "b3bebe33-e2b3-4240-bdbb-5974c70fa62f" (UID: "b3bebe33-e2b3-4240-bdbb-5974c70fa62f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:45:32 crc kubenswrapper[4754]: I0105 20:45:32.077385 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3bebe33-e2b3-4240-bdbb-5974c70fa62f-kube-api-access-sz5nv" (OuterVolumeSpecName: "kube-api-access-sz5nv") pod "b3bebe33-e2b3-4240-bdbb-5974c70fa62f" (UID: "b3bebe33-e2b3-4240-bdbb-5974c70fa62f"). InnerVolumeSpecName "kube-api-access-sz5nv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:45:32 crc kubenswrapper[4754]: I0105 20:45:32.121001 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3bebe33-e2b3-4240-bdbb-5974c70fa62f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b3bebe33-e2b3-4240-bdbb-5974c70fa62f" (UID: "b3bebe33-e2b3-4240-bdbb-5974c70fa62f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:45:32 crc kubenswrapper[4754]: I0105 20:45:32.173896 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sz5nv\" (UniqueName: \"kubernetes.io/projected/b3bebe33-e2b3-4240-bdbb-5974c70fa62f-kube-api-access-sz5nv\") on node \"crc\" DevicePath \"\"" Jan 05 20:45:32 crc kubenswrapper[4754]: I0105 20:45:32.173935 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3bebe33-e2b3-4240-bdbb-5974c70fa62f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 20:45:32 crc kubenswrapper[4754]: I0105 20:45:32.173949 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3bebe33-e2b3-4240-bdbb-5974c70fa62f-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 20:45:32 crc kubenswrapper[4754]: I0105 20:45:32.401493 4754 generic.go:334] "Generic (PLEG): container finished" podID="b3bebe33-e2b3-4240-bdbb-5974c70fa62f" containerID="5a742f18aea7b64b4b235013d304c919639d0f6967bb552f81e812e021f7be4d" exitCode=0 Jan 05 20:45:32 crc kubenswrapper[4754]: I0105 20:45:32.401538 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w4hff" event={"ID":"b3bebe33-e2b3-4240-bdbb-5974c70fa62f","Type":"ContainerDied","Data":"5a742f18aea7b64b4b235013d304c919639d0f6967bb552f81e812e021f7be4d"} Jan 05 20:45:32 crc kubenswrapper[4754]: I0105 20:45:32.401572 4754 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-w4hff" event={"ID":"b3bebe33-e2b3-4240-bdbb-5974c70fa62f","Type":"ContainerDied","Data":"90793aa859ff2d11e04f3d255efbbb028e06c6f7c1eb03274e1c39285c3cfae8"} Jan 05 20:45:32 crc kubenswrapper[4754]: I0105 20:45:32.401593 4754 scope.go:117] "RemoveContainer" containerID="5a742f18aea7b64b4b235013d304c919639d0f6967bb552f81e812e021f7be4d" Jan 05 20:45:32 crc kubenswrapper[4754]: I0105 20:45:32.401721 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w4hff" Jan 05 20:45:32 crc kubenswrapper[4754]: I0105 20:45:32.428227 4754 scope.go:117] "RemoveContainer" containerID="6bac39f85db59148532152dfa177f54bf996d7450ba22e7601896c66271c79d3" Jan 05 20:45:32 crc kubenswrapper[4754]: I0105 20:45:32.437215 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w4hff"] Jan 05 20:45:32 crc kubenswrapper[4754]: I0105 20:45:32.446876 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-w4hff"] Jan 05 20:45:32 crc kubenswrapper[4754]: I0105 20:45:32.460638 4754 scope.go:117] "RemoveContainer" containerID="89d5f68244c14d20e9407f5c8283ca13e62c09d3c502b1506a576bd4fc8bc9b5" Jan 05 20:45:32 crc kubenswrapper[4754]: I0105 20:45:32.500110 4754 scope.go:117] "RemoveContainer" containerID="5a742f18aea7b64b4b235013d304c919639d0f6967bb552f81e812e021f7be4d" Jan 05 20:45:32 crc kubenswrapper[4754]: E0105 20:45:32.503862 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a742f18aea7b64b4b235013d304c919639d0f6967bb552f81e812e021f7be4d\": container with ID starting with 5a742f18aea7b64b4b235013d304c919639d0f6967bb552f81e812e021f7be4d not found: ID does not exist" containerID="5a742f18aea7b64b4b235013d304c919639d0f6967bb552f81e812e021f7be4d" Jan 05 20:45:32 crc kubenswrapper[4754]: I0105 
20:45:32.503920 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a742f18aea7b64b4b235013d304c919639d0f6967bb552f81e812e021f7be4d"} err="failed to get container status \"5a742f18aea7b64b4b235013d304c919639d0f6967bb552f81e812e021f7be4d\": rpc error: code = NotFound desc = could not find container \"5a742f18aea7b64b4b235013d304c919639d0f6967bb552f81e812e021f7be4d\": container with ID starting with 5a742f18aea7b64b4b235013d304c919639d0f6967bb552f81e812e021f7be4d not found: ID does not exist" Jan 05 20:45:32 crc kubenswrapper[4754]: I0105 20:45:32.503946 4754 scope.go:117] "RemoveContainer" containerID="6bac39f85db59148532152dfa177f54bf996d7450ba22e7601896c66271c79d3" Jan 05 20:45:32 crc kubenswrapper[4754]: E0105 20:45:32.504255 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bac39f85db59148532152dfa177f54bf996d7450ba22e7601896c66271c79d3\": container with ID starting with 6bac39f85db59148532152dfa177f54bf996d7450ba22e7601896c66271c79d3 not found: ID does not exist" containerID="6bac39f85db59148532152dfa177f54bf996d7450ba22e7601896c66271c79d3" Jan 05 20:45:32 crc kubenswrapper[4754]: I0105 20:45:32.504373 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bac39f85db59148532152dfa177f54bf996d7450ba22e7601896c66271c79d3"} err="failed to get container status \"6bac39f85db59148532152dfa177f54bf996d7450ba22e7601896c66271c79d3\": rpc error: code = NotFound desc = could not find container \"6bac39f85db59148532152dfa177f54bf996d7450ba22e7601896c66271c79d3\": container with ID starting with 6bac39f85db59148532152dfa177f54bf996d7450ba22e7601896c66271c79d3 not found: ID does not exist" Jan 05 20:45:32 crc kubenswrapper[4754]: I0105 20:45:32.504574 4754 scope.go:117] "RemoveContainer" containerID="89d5f68244c14d20e9407f5c8283ca13e62c09d3c502b1506a576bd4fc8bc9b5" Jan 05 20:45:32 crc 
kubenswrapper[4754]: E0105 20:45:32.506439 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89d5f68244c14d20e9407f5c8283ca13e62c09d3c502b1506a576bd4fc8bc9b5\": container with ID starting with 89d5f68244c14d20e9407f5c8283ca13e62c09d3c502b1506a576bd4fc8bc9b5 not found: ID does not exist" containerID="89d5f68244c14d20e9407f5c8283ca13e62c09d3c502b1506a576bd4fc8bc9b5" Jan 05 20:45:32 crc kubenswrapper[4754]: I0105 20:45:32.506495 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89d5f68244c14d20e9407f5c8283ca13e62c09d3c502b1506a576bd4fc8bc9b5"} err="failed to get container status \"89d5f68244c14d20e9407f5c8283ca13e62c09d3c502b1506a576bd4fc8bc9b5\": rpc error: code = NotFound desc = could not find container \"89d5f68244c14d20e9407f5c8283ca13e62c09d3c502b1506a576bd4fc8bc9b5\": container with ID starting with 89d5f68244c14d20e9407f5c8283ca13e62c09d3c502b1506a576bd4fc8bc9b5 not found: ID does not exist" Jan 05 20:45:33 crc kubenswrapper[4754]: I0105 20:45:33.606016 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3bebe33-e2b3-4240-bdbb-5974c70fa62f" path="/var/lib/kubelet/pods/b3bebe33-e2b3-4240-bdbb-5974c70fa62f/volumes" Jan 05 20:45:41 crc kubenswrapper[4754]: I0105 20:45:41.588437 4754 scope.go:117] "RemoveContainer" containerID="bf00dfd553c16afd4905db93af460a6a83ec719b48885233b6e6607dece47e05" Jan 05 20:45:41 crc kubenswrapper[4754]: E0105 20:45:41.589233 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:45:52 crc 
kubenswrapper[4754]: I0105 20:45:52.589749 4754 scope.go:117] "RemoveContainer" containerID="bf00dfd553c16afd4905db93af460a6a83ec719b48885233b6e6607dece47e05" Jan 05 20:45:52 crc kubenswrapper[4754]: E0105 20:45:52.591236 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:46:06 crc kubenswrapper[4754]: I0105 20:46:06.588889 4754 scope.go:117] "RemoveContainer" containerID="bf00dfd553c16afd4905db93af460a6a83ec719b48885233b6e6607dece47e05" Jan 05 20:46:06 crc kubenswrapper[4754]: E0105 20:46:06.589708 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:46:21 crc kubenswrapper[4754]: I0105 20:46:21.589821 4754 scope.go:117] "RemoveContainer" containerID="bf00dfd553c16afd4905db93af460a6a83ec719b48885233b6e6607dece47e05" Jan 05 20:46:21 crc kubenswrapper[4754]: E0105 20:46:21.591089 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 
05 20:46:33 crc kubenswrapper[4754]: I0105 20:46:33.599154 4754 scope.go:117] "RemoveContainer" containerID="bf00dfd553c16afd4905db93af460a6a83ec719b48885233b6e6607dece47e05" Jan 05 20:46:33 crc kubenswrapper[4754]: E0105 20:46:33.600127 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:46:39 crc kubenswrapper[4754]: I0105 20:46:39.255030 4754 generic.go:334] "Generic (PLEG): container finished" podID="aec60a10-f518-45a1-8b13-375943e8ff65" containerID="adb6c09e190ca41c3585c523af095ead71f6bff00edcd5c195d4532d6f5f87f3" exitCode=0 Jan 05 20:46:39 crc kubenswrapper[4754]: I0105 20:46:39.255123 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cpvn7" event={"ID":"aec60a10-f518-45a1-8b13-375943e8ff65","Type":"ContainerDied","Data":"adb6c09e190ca41c3585c523af095ead71f6bff00edcd5c195d4532d6f5f87f3"} Jan 05 20:46:40 crc kubenswrapper[4754]: I0105 20:46:40.807168 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cpvn7" Jan 05 20:46:40 crc kubenswrapper[4754]: I0105 20:46:40.950003 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aec60a10-f518-45a1-8b13-375943e8ff65-ovn-combined-ca-bundle\") pod \"aec60a10-f518-45a1-8b13-375943e8ff65\" (UID: \"aec60a10-f518-45a1-8b13-375943e8ff65\") " Jan 05 20:46:40 crc kubenswrapper[4754]: I0105 20:46:40.950142 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjj9h\" (UniqueName: \"kubernetes.io/projected/aec60a10-f518-45a1-8b13-375943e8ff65-kube-api-access-sjj9h\") pod \"aec60a10-f518-45a1-8b13-375943e8ff65\" (UID: \"aec60a10-f518-45a1-8b13-375943e8ff65\") " Jan 05 20:46:40 crc kubenswrapper[4754]: I0105 20:46:40.950505 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aec60a10-f518-45a1-8b13-375943e8ff65-ssh-key\") pod \"aec60a10-f518-45a1-8b13-375943e8ff65\" (UID: \"aec60a10-f518-45a1-8b13-375943e8ff65\") " Jan 05 20:46:40 crc kubenswrapper[4754]: I0105 20:46:40.950585 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aec60a10-f518-45a1-8b13-375943e8ff65-inventory\") pod \"aec60a10-f518-45a1-8b13-375943e8ff65\" (UID: \"aec60a10-f518-45a1-8b13-375943e8ff65\") " Jan 05 20:46:40 crc kubenswrapper[4754]: I0105 20:46:40.950677 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/aec60a10-f518-45a1-8b13-375943e8ff65-ovncontroller-config-0\") pod \"aec60a10-f518-45a1-8b13-375943e8ff65\" (UID: \"aec60a10-f518-45a1-8b13-375943e8ff65\") " Jan 05 20:46:40 crc kubenswrapper[4754]: I0105 20:46:40.957260 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/projected/aec60a10-f518-45a1-8b13-375943e8ff65-kube-api-access-sjj9h" (OuterVolumeSpecName: "kube-api-access-sjj9h") pod "aec60a10-f518-45a1-8b13-375943e8ff65" (UID: "aec60a10-f518-45a1-8b13-375943e8ff65"). InnerVolumeSpecName "kube-api-access-sjj9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:46:40 crc kubenswrapper[4754]: I0105 20:46:40.957438 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aec60a10-f518-45a1-8b13-375943e8ff65-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "aec60a10-f518-45a1-8b13-375943e8ff65" (UID: "aec60a10-f518-45a1-8b13-375943e8ff65"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:46:40 crc kubenswrapper[4754]: I0105 20:46:40.981922 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aec60a10-f518-45a1-8b13-375943e8ff65-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "aec60a10-f518-45a1-8b13-375943e8ff65" (UID: "aec60a10-f518-45a1-8b13-375943e8ff65"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:46:40 crc kubenswrapper[4754]: I0105 20:46:40.983953 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aec60a10-f518-45a1-8b13-375943e8ff65-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "aec60a10-f518-45a1-8b13-375943e8ff65" (UID: "aec60a10-f518-45a1-8b13-375943e8ff65"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:46:40 crc kubenswrapper[4754]: I0105 20:46:40.989236 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aec60a10-f518-45a1-8b13-375943e8ff65-inventory" (OuterVolumeSpecName: "inventory") pod "aec60a10-f518-45a1-8b13-375943e8ff65" (UID: "aec60a10-f518-45a1-8b13-375943e8ff65"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:46:41 crc kubenswrapper[4754]: I0105 20:46:41.053030 4754 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aec60a10-f518-45a1-8b13-375943e8ff65-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 05 20:46:41 crc kubenswrapper[4754]: I0105 20:46:41.053067 4754 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aec60a10-f518-45a1-8b13-375943e8ff65-inventory\") on node \"crc\" DevicePath \"\"" Jan 05 20:46:41 crc kubenswrapper[4754]: I0105 20:46:41.053081 4754 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/aec60a10-f518-45a1-8b13-375943e8ff65-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 05 20:46:41 crc kubenswrapper[4754]: I0105 20:46:41.053093 4754 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aec60a10-f518-45a1-8b13-375943e8ff65-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:46:41 crc kubenswrapper[4754]: I0105 20:46:41.053102 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjj9h\" (UniqueName: \"kubernetes.io/projected/aec60a10-f518-45a1-8b13-375943e8ff65-kube-api-access-sjj9h\") on node \"crc\" DevicePath \"\"" Jan 05 20:46:41 crc kubenswrapper[4754]: I0105 20:46:41.284451 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cpvn7" event={"ID":"aec60a10-f518-45a1-8b13-375943e8ff65","Type":"ContainerDied","Data":"dd7a0a0c98800330148c92e759eaf0203e34f9394c978006c2cdc28b430a1a5e"} Jan 05 20:46:41 crc kubenswrapper[4754]: I0105 20:46:41.284834 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd7a0a0c98800330148c92e759eaf0203e34f9394c978006c2cdc28b430a1a5e" Jan 05 20:46:41 crc kubenswrapper[4754]: I0105 20:46:41.284510 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cpvn7" Jan 05 20:46:41 crc kubenswrapper[4754]: I0105 20:46:41.400733 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvt4b"] Jan 05 20:46:41 crc kubenswrapper[4754]: E0105 20:46:41.401434 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3bebe33-e2b3-4240-bdbb-5974c70fa62f" containerName="extract-utilities" Jan 05 20:46:41 crc kubenswrapper[4754]: I0105 20:46:41.401463 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3bebe33-e2b3-4240-bdbb-5974c70fa62f" containerName="extract-utilities" Jan 05 20:46:41 crc kubenswrapper[4754]: E0105 20:46:41.401486 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3bebe33-e2b3-4240-bdbb-5974c70fa62f" containerName="extract-content" Jan 05 20:46:41 crc kubenswrapper[4754]: I0105 20:46:41.401497 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3bebe33-e2b3-4240-bdbb-5974c70fa62f" containerName="extract-content" Jan 05 20:46:41 crc kubenswrapper[4754]: E0105 20:46:41.401568 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aec60a10-f518-45a1-8b13-375943e8ff65" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 05 20:46:41 crc kubenswrapper[4754]: I0105 20:46:41.401581 4754 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="aec60a10-f518-45a1-8b13-375943e8ff65" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 05 20:46:41 crc kubenswrapper[4754]: E0105 20:46:41.401601 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3bebe33-e2b3-4240-bdbb-5974c70fa62f" containerName="registry-server" Jan 05 20:46:41 crc kubenswrapper[4754]: I0105 20:46:41.401613 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3bebe33-e2b3-4240-bdbb-5974c70fa62f" containerName="registry-server" Jan 05 20:46:41 crc kubenswrapper[4754]: I0105 20:46:41.401976 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="aec60a10-f518-45a1-8b13-375943e8ff65" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 05 20:46:41 crc kubenswrapper[4754]: I0105 20:46:41.402043 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3bebe33-e2b3-4240-bdbb-5974c70fa62f" containerName="registry-server" Jan 05 20:46:41 crc kubenswrapper[4754]: I0105 20:46:41.406903 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvt4b" Jan 05 20:46:41 crc kubenswrapper[4754]: I0105 20:46:41.409577 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 05 20:46:41 crc kubenswrapper[4754]: I0105 20:46:41.413208 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xdt6h" Jan 05 20:46:41 crc kubenswrapper[4754]: I0105 20:46:41.413834 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 05 20:46:41 crc kubenswrapper[4754]: I0105 20:46:41.415028 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 05 20:46:41 crc kubenswrapper[4754]: I0105 20:46:41.417328 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 05 20:46:41 crc kubenswrapper[4754]: I0105 20:46:41.417647 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 05 20:46:41 crc kubenswrapper[4754]: I0105 20:46:41.439032 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvt4b"] Jan 05 20:46:41 crc kubenswrapper[4754]: I0105 20:46:41.464512 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/429c47f2-c321-450e-a0ce-2fc72a26f9e3-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvt4b\" (UID: \"429c47f2-c321-450e-a0ce-2fc72a26f9e3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvt4b" Jan 05 20:46:41 crc kubenswrapper[4754]: I0105 20:46:41.464643 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/429c47f2-c321-450e-a0ce-2fc72a26f9e3-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvt4b\" (UID: \"429c47f2-c321-450e-a0ce-2fc72a26f9e3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvt4b" Jan 05 20:46:41 crc kubenswrapper[4754]: I0105 20:46:41.464682 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/429c47f2-c321-450e-a0ce-2fc72a26f9e3-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvt4b\" (UID: \"429c47f2-c321-450e-a0ce-2fc72a26f9e3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvt4b" Jan 05 20:46:41 crc kubenswrapper[4754]: I0105 20:46:41.464728 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/429c47f2-c321-450e-a0ce-2fc72a26f9e3-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvt4b\" (UID: \"429c47f2-c321-450e-a0ce-2fc72a26f9e3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvt4b" Jan 05 20:46:41 crc kubenswrapper[4754]: I0105 20:46:41.464788 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/429c47f2-c321-450e-a0ce-2fc72a26f9e3-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvt4b\" (UID: \"429c47f2-c321-450e-a0ce-2fc72a26f9e3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvt4b" Jan 05 20:46:41 crc kubenswrapper[4754]: I0105 20:46:41.464833 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-mjxvx\" (UniqueName: \"kubernetes.io/projected/429c47f2-c321-450e-a0ce-2fc72a26f9e3-kube-api-access-mjxvx\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvt4b\" (UID: \"429c47f2-c321-450e-a0ce-2fc72a26f9e3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvt4b" Jan 05 20:46:41 crc kubenswrapper[4754]: I0105 20:46:41.567512 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/429c47f2-c321-450e-a0ce-2fc72a26f9e3-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvt4b\" (UID: \"429c47f2-c321-450e-a0ce-2fc72a26f9e3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvt4b" Jan 05 20:46:41 crc kubenswrapper[4754]: I0105 20:46:41.567615 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjxvx\" (UniqueName: \"kubernetes.io/projected/429c47f2-c321-450e-a0ce-2fc72a26f9e3-kube-api-access-mjxvx\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvt4b\" (UID: \"429c47f2-c321-450e-a0ce-2fc72a26f9e3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvt4b" Jan 05 20:46:41 crc kubenswrapper[4754]: I0105 20:46:41.567844 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/429c47f2-c321-450e-a0ce-2fc72a26f9e3-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvt4b\" (UID: \"429c47f2-c321-450e-a0ce-2fc72a26f9e3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvt4b" Jan 05 20:46:41 crc kubenswrapper[4754]: I0105 20:46:41.567990 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/429c47f2-c321-450e-a0ce-2fc72a26f9e3-inventory\") 
pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvt4b\" (UID: \"429c47f2-c321-450e-a0ce-2fc72a26f9e3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvt4b" Jan 05 20:46:41 crc kubenswrapper[4754]: I0105 20:46:41.568045 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/429c47f2-c321-450e-a0ce-2fc72a26f9e3-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvt4b\" (UID: \"429c47f2-c321-450e-a0ce-2fc72a26f9e3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvt4b" Jan 05 20:46:41 crc kubenswrapper[4754]: I0105 20:46:41.568103 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/429c47f2-c321-450e-a0ce-2fc72a26f9e3-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvt4b\" (UID: \"429c47f2-c321-450e-a0ce-2fc72a26f9e3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvt4b" Jan 05 20:46:41 crc kubenswrapper[4754]: I0105 20:46:41.574978 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/429c47f2-c321-450e-a0ce-2fc72a26f9e3-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvt4b\" (UID: \"429c47f2-c321-450e-a0ce-2fc72a26f9e3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvt4b" Jan 05 20:46:41 crc kubenswrapper[4754]: I0105 20:46:41.575260 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/429c47f2-c321-450e-a0ce-2fc72a26f9e3-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvt4b\" (UID: \"429c47f2-c321-450e-a0ce-2fc72a26f9e3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvt4b" Jan 05 20:46:41 crc 
kubenswrapper[4754]: I0105 20:46:41.576417 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/429c47f2-c321-450e-a0ce-2fc72a26f9e3-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvt4b\" (UID: \"429c47f2-c321-450e-a0ce-2fc72a26f9e3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvt4b" Jan 05 20:46:41 crc kubenswrapper[4754]: I0105 20:46:41.576751 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/429c47f2-c321-450e-a0ce-2fc72a26f9e3-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvt4b\" (UID: \"429c47f2-c321-450e-a0ce-2fc72a26f9e3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvt4b" Jan 05 20:46:41 crc kubenswrapper[4754]: I0105 20:46:41.579710 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/429c47f2-c321-450e-a0ce-2fc72a26f9e3-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvt4b\" (UID: \"429c47f2-c321-450e-a0ce-2fc72a26f9e3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvt4b" Jan 05 20:46:41 crc kubenswrapper[4754]: I0105 20:46:41.599844 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjxvx\" (UniqueName: \"kubernetes.io/projected/429c47f2-c321-450e-a0ce-2fc72a26f9e3-kube-api-access-mjxvx\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvt4b\" (UID: \"429c47f2-c321-450e-a0ce-2fc72a26f9e3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvt4b" Jan 05 20:46:41 crc kubenswrapper[4754]: I0105 20:46:41.742814 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvt4b" Jan 05 20:46:42 crc kubenswrapper[4754]: I0105 20:46:42.158251 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvt4b"] Jan 05 20:46:42 crc kubenswrapper[4754]: I0105 20:46:42.295860 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvt4b" event={"ID":"429c47f2-c321-450e-a0ce-2fc72a26f9e3","Type":"ContainerStarted","Data":"0b5c6962d7f2e083d19a4f5ab935bb19788817fd16dee45190ce045b54407bbe"} Jan 05 20:46:43 crc kubenswrapper[4754]: I0105 20:46:43.310603 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvt4b" event={"ID":"429c47f2-c321-450e-a0ce-2fc72a26f9e3","Type":"ContainerStarted","Data":"13355fb47288d09a396b8c2083ee5ebe106bf4bf34ea2131212e2ab4a57163d1"} Jan 05 20:46:43 crc kubenswrapper[4754]: I0105 20:46:43.348259 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvt4b" podStartSLOduration=1.938909829 podStartE2EDuration="2.348232243s" podCreationTimestamp="2026-01-05 20:46:41 +0000 UTC" firstStartedPulling="2026-01-05 20:46:42.165194279 +0000 UTC m=+2488.874378153" lastFinishedPulling="2026-01-05 20:46:42.574516653 +0000 UTC m=+2489.283700567" observedRunningTime="2026-01-05 20:46:43.335340625 +0000 UTC m=+2490.044524549" watchObservedRunningTime="2026-01-05 20:46:43.348232243 +0000 UTC m=+2490.057416147" Jan 05 20:46:48 crc kubenswrapper[4754]: I0105 20:46:48.589782 4754 scope.go:117] "RemoveContainer" containerID="bf00dfd553c16afd4905db93af460a6a83ec719b48885233b6e6607dece47e05" Jan 05 20:46:48 crc kubenswrapper[4754]: E0105 20:46:48.591427 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:47:01 crc kubenswrapper[4754]: I0105 20:47:01.588812 4754 scope.go:117] "RemoveContainer" containerID="bf00dfd553c16afd4905db93af460a6a83ec719b48885233b6e6607dece47e05" Jan 05 20:47:01 crc kubenswrapper[4754]: E0105 20:47:01.589749 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:47:16 crc kubenswrapper[4754]: I0105 20:47:16.588990 4754 scope.go:117] "RemoveContainer" containerID="bf00dfd553c16afd4905db93af460a6a83ec719b48885233b6e6607dece47e05" Jan 05 20:47:16 crc kubenswrapper[4754]: E0105 20:47:16.590253 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:47:20 crc kubenswrapper[4754]: I0105 20:47:20.066958 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4mhwx"] Jan 05 20:47:20 crc kubenswrapper[4754]: I0105 20:47:20.070532 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4mhwx" Jan 05 20:47:20 crc kubenswrapper[4754]: I0105 20:47:20.104170 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4mhwx"] Jan 05 20:47:20 crc kubenswrapper[4754]: I0105 20:47:20.242184 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgk22\" (UniqueName: \"kubernetes.io/projected/9d76a46e-21dd-48bc-a91d-85f07441ade8-kube-api-access-lgk22\") pod \"redhat-operators-4mhwx\" (UID: \"9d76a46e-21dd-48bc-a91d-85f07441ade8\") " pod="openshift-marketplace/redhat-operators-4mhwx" Jan 05 20:47:20 crc kubenswrapper[4754]: I0105 20:47:20.242567 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d76a46e-21dd-48bc-a91d-85f07441ade8-utilities\") pod \"redhat-operators-4mhwx\" (UID: \"9d76a46e-21dd-48bc-a91d-85f07441ade8\") " pod="openshift-marketplace/redhat-operators-4mhwx" Jan 05 20:47:20 crc kubenswrapper[4754]: I0105 20:47:20.242895 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d76a46e-21dd-48bc-a91d-85f07441ade8-catalog-content\") pod \"redhat-operators-4mhwx\" (UID: \"9d76a46e-21dd-48bc-a91d-85f07441ade8\") " pod="openshift-marketplace/redhat-operators-4mhwx" Jan 05 20:47:20 crc kubenswrapper[4754]: I0105 20:47:20.344726 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d76a46e-21dd-48bc-a91d-85f07441ade8-utilities\") pod \"redhat-operators-4mhwx\" (UID: \"9d76a46e-21dd-48bc-a91d-85f07441ade8\") " pod="openshift-marketplace/redhat-operators-4mhwx" Jan 05 20:47:20 crc kubenswrapper[4754]: I0105 20:47:20.345132 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d76a46e-21dd-48bc-a91d-85f07441ade8-catalog-content\") pod \"redhat-operators-4mhwx\" (UID: \"9d76a46e-21dd-48bc-a91d-85f07441ade8\") " pod="openshift-marketplace/redhat-operators-4mhwx" Jan 05 20:47:20 crc kubenswrapper[4754]: I0105 20:47:20.345217 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgk22\" (UniqueName: \"kubernetes.io/projected/9d76a46e-21dd-48bc-a91d-85f07441ade8-kube-api-access-lgk22\") pod \"redhat-operators-4mhwx\" (UID: \"9d76a46e-21dd-48bc-a91d-85f07441ade8\") " pod="openshift-marketplace/redhat-operators-4mhwx" Jan 05 20:47:20 crc kubenswrapper[4754]: I0105 20:47:20.345312 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d76a46e-21dd-48bc-a91d-85f07441ade8-utilities\") pod \"redhat-operators-4mhwx\" (UID: \"9d76a46e-21dd-48bc-a91d-85f07441ade8\") " pod="openshift-marketplace/redhat-operators-4mhwx" Jan 05 20:47:20 crc kubenswrapper[4754]: I0105 20:47:20.345690 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d76a46e-21dd-48bc-a91d-85f07441ade8-catalog-content\") pod \"redhat-operators-4mhwx\" (UID: \"9d76a46e-21dd-48bc-a91d-85f07441ade8\") " pod="openshift-marketplace/redhat-operators-4mhwx" Jan 05 20:47:20 crc kubenswrapper[4754]: I0105 20:47:20.363744 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgk22\" (UniqueName: \"kubernetes.io/projected/9d76a46e-21dd-48bc-a91d-85f07441ade8-kube-api-access-lgk22\") pod \"redhat-operators-4mhwx\" (UID: \"9d76a46e-21dd-48bc-a91d-85f07441ade8\") " pod="openshift-marketplace/redhat-operators-4mhwx" Jan 05 20:47:20 crc kubenswrapper[4754]: I0105 20:47:20.410783 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4mhwx" Jan 05 20:47:20 crc kubenswrapper[4754]: W0105 20:47:20.917475 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d76a46e_21dd_48bc_a91d_85f07441ade8.slice/crio-9168dddc57a46b2802129e2394919399b1de8cbf4f5fa283fe0a79edc2c92df0 WatchSource:0}: Error finding container 9168dddc57a46b2802129e2394919399b1de8cbf4f5fa283fe0a79edc2c92df0: Status 404 returned error can't find the container with id 9168dddc57a46b2802129e2394919399b1de8cbf4f5fa283fe0a79edc2c92df0 Jan 05 20:47:20 crc kubenswrapper[4754]: I0105 20:47:20.926018 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4mhwx"] Jan 05 20:47:21 crc kubenswrapper[4754]: I0105 20:47:21.448794 4754 generic.go:334] "Generic (PLEG): container finished" podID="9d76a46e-21dd-48bc-a91d-85f07441ade8" containerID="e022dca3f675e05686786ded4d8befbd35b1d326a03174d22ea99c96200f2960" exitCode=0 Jan 05 20:47:21 crc kubenswrapper[4754]: I0105 20:47:21.449093 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4mhwx" event={"ID":"9d76a46e-21dd-48bc-a91d-85f07441ade8","Type":"ContainerDied","Data":"e022dca3f675e05686786ded4d8befbd35b1d326a03174d22ea99c96200f2960"} Jan 05 20:47:21 crc kubenswrapper[4754]: I0105 20:47:21.449126 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4mhwx" event={"ID":"9d76a46e-21dd-48bc-a91d-85f07441ade8","Type":"ContainerStarted","Data":"9168dddc57a46b2802129e2394919399b1de8cbf4f5fa283fe0a79edc2c92df0"} Jan 05 20:47:23 crc kubenswrapper[4754]: I0105 20:47:23.472657 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4mhwx" 
event={"ID":"9d76a46e-21dd-48bc-a91d-85f07441ade8","Type":"ContainerStarted","Data":"77e71ef2a83a00ad16e94f34ed07a984701ada7fdddf45216441a12f7472b594"} Jan 05 20:47:26 crc kubenswrapper[4754]: I0105 20:47:26.510994 4754 generic.go:334] "Generic (PLEG): container finished" podID="9d76a46e-21dd-48bc-a91d-85f07441ade8" containerID="77e71ef2a83a00ad16e94f34ed07a984701ada7fdddf45216441a12f7472b594" exitCode=0 Jan 05 20:47:26 crc kubenswrapper[4754]: I0105 20:47:26.511075 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4mhwx" event={"ID":"9d76a46e-21dd-48bc-a91d-85f07441ade8","Type":"ContainerDied","Data":"77e71ef2a83a00ad16e94f34ed07a984701ada7fdddf45216441a12f7472b594"} Jan 05 20:47:27 crc kubenswrapper[4754]: I0105 20:47:27.523656 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4mhwx" event={"ID":"9d76a46e-21dd-48bc-a91d-85f07441ade8","Type":"ContainerStarted","Data":"f200694a66011df627f5218a1323817ba93fc886768102503e8ce5886a035dc6"} Jan 05 20:47:27 crc kubenswrapper[4754]: I0105 20:47:27.562266 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4mhwx" podStartSLOduration=1.928801719 podStartE2EDuration="7.5622445s" podCreationTimestamp="2026-01-05 20:47:20 +0000 UTC" firstStartedPulling="2026-01-05 20:47:21.451536548 +0000 UTC m=+2528.160720422" lastFinishedPulling="2026-01-05 20:47:27.084979309 +0000 UTC m=+2533.794163203" observedRunningTime="2026-01-05 20:47:27.548649513 +0000 UTC m=+2534.257833397" watchObservedRunningTime="2026-01-05 20:47:27.5622445 +0000 UTC m=+2534.271428374" Jan 05 20:47:27 crc kubenswrapper[4754]: I0105 20:47:27.589660 4754 scope.go:117] "RemoveContainer" containerID="bf00dfd553c16afd4905db93af460a6a83ec719b48885233b6e6607dece47e05" Jan 05 20:47:28 crc kubenswrapper[4754]: I0105 20:47:28.544779 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-pkzls" event={"ID":"1ce145f2-f010-4086-963c-23e68ff9e280","Type":"ContainerStarted","Data":"379e38a6e538bdb337f18c8539963e93491cdbc565bbafbe7ab4a0cef313f65d"} Jan 05 20:47:30 crc kubenswrapper[4754]: I0105 20:47:30.412193 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4mhwx" Jan 05 20:47:30 crc kubenswrapper[4754]: I0105 20:47:30.412786 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4mhwx" Jan 05 20:47:31 crc kubenswrapper[4754]: I0105 20:47:31.475298 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4mhwx" podUID="9d76a46e-21dd-48bc-a91d-85f07441ade8" containerName="registry-server" probeResult="failure" output=< Jan 05 20:47:31 crc kubenswrapper[4754]: timeout: failed to connect service ":50051" within 1s Jan 05 20:47:31 crc kubenswrapper[4754]: > Jan 05 20:47:40 crc kubenswrapper[4754]: I0105 20:47:40.471143 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4mhwx" Jan 05 20:47:40 crc kubenswrapper[4754]: I0105 20:47:40.538999 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4mhwx" Jan 05 20:47:40 crc kubenswrapper[4754]: I0105 20:47:40.938928 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4mhwx"] Jan 05 20:47:41 crc kubenswrapper[4754]: I0105 20:47:41.710284 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4mhwx" podUID="9d76a46e-21dd-48bc-a91d-85f07441ade8" containerName="registry-server" containerID="cri-o://f200694a66011df627f5218a1323817ba93fc886768102503e8ce5886a035dc6" gracePeriod=2 Jan 05 20:47:42 crc kubenswrapper[4754]: I0105 20:47:42.252114 4754 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4mhwx" Jan 05 20:47:42 crc kubenswrapper[4754]: I0105 20:47:42.327197 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d76a46e-21dd-48bc-a91d-85f07441ade8-utilities\") pod \"9d76a46e-21dd-48bc-a91d-85f07441ade8\" (UID: \"9d76a46e-21dd-48bc-a91d-85f07441ade8\") " Jan 05 20:47:42 crc kubenswrapper[4754]: I0105 20:47:42.327401 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgk22\" (UniqueName: \"kubernetes.io/projected/9d76a46e-21dd-48bc-a91d-85f07441ade8-kube-api-access-lgk22\") pod \"9d76a46e-21dd-48bc-a91d-85f07441ade8\" (UID: \"9d76a46e-21dd-48bc-a91d-85f07441ade8\") " Jan 05 20:47:42 crc kubenswrapper[4754]: I0105 20:47:42.327746 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d76a46e-21dd-48bc-a91d-85f07441ade8-catalog-content\") pod \"9d76a46e-21dd-48bc-a91d-85f07441ade8\" (UID: \"9d76a46e-21dd-48bc-a91d-85f07441ade8\") " Jan 05 20:47:42 crc kubenswrapper[4754]: I0105 20:47:42.328235 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d76a46e-21dd-48bc-a91d-85f07441ade8-utilities" (OuterVolumeSpecName: "utilities") pod "9d76a46e-21dd-48bc-a91d-85f07441ade8" (UID: "9d76a46e-21dd-48bc-a91d-85f07441ade8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:47:42 crc kubenswrapper[4754]: I0105 20:47:42.329619 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d76a46e-21dd-48bc-a91d-85f07441ade8-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 20:47:42 crc kubenswrapper[4754]: I0105 20:47:42.333491 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d76a46e-21dd-48bc-a91d-85f07441ade8-kube-api-access-lgk22" (OuterVolumeSpecName: "kube-api-access-lgk22") pod "9d76a46e-21dd-48bc-a91d-85f07441ade8" (UID: "9d76a46e-21dd-48bc-a91d-85f07441ade8"). InnerVolumeSpecName "kube-api-access-lgk22". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:47:42 crc kubenswrapper[4754]: I0105 20:47:42.432868 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgk22\" (UniqueName: \"kubernetes.io/projected/9d76a46e-21dd-48bc-a91d-85f07441ade8-kube-api-access-lgk22\") on node \"crc\" DevicePath \"\"" Jan 05 20:47:42 crc kubenswrapper[4754]: I0105 20:47:42.467754 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d76a46e-21dd-48bc-a91d-85f07441ade8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9d76a46e-21dd-48bc-a91d-85f07441ade8" (UID: "9d76a46e-21dd-48bc-a91d-85f07441ade8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:47:42 crc kubenswrapper[4754]: I0105 20:47:42.535199 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d76a46e-21dd-48bc-a91d-85f07441ade8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 20:47:42 crc kubenswrapper[4754]: I0105 20:47:42.728261 4754 generic.go:334] "Generic (PLEG): container finished" podID="9d76a46e-21dd-48bc-a91d-85f07441ade8" containerID="f200694a66011df627f5218a1323817ba93fc886768102503e8ce5886a035dc6" exitCode=0 Jan 05 20:47:42 crc kubenswrapper[4754]: I0105 20:47:42.728567 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4mhwx" Jan 05 20:47:42 crc kubenswrapper[4754]: I0105 20:47:42.728478 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4mhwx" event={"ID":"9d76a46e-21dd-48bc-a91d-85f07441ade8","Type":"ContainerDied","Data":"f200694a66011df627f5218a1323817ba93fc886768102503e8ce5886a035dc6"} Jan 05 20:47:42 crc kubenswrapper[4754]: I0105 20:47:42.728711 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4mhwx" event={"ID":"9d76a46e-21dd-48bc-a91d-85f07441ade8","Type":"ContainerDied","Data":"9168dddc57a46b2802129e2394919399b1de8cbf4f5fa283fe0a79edc2c92df0"} Jan 05 20:47:42 crc kubenswrapper[4754]: I0105 20:47:42.728742 4754 scope.go:117] "RemoveContainer" containerID="f200694a66011df627f5218a1323817ba93fc886768102503e8ce5886a035dc6" Jan 05 20:47:42 crc kubenswrapper[4754]: I0105 20:47:42.774788 4754 scope.go:117] "RemoveContainer" containerID="77e71ef2a83a00ad16e94f34ed07a984701ada7fdddf45216441a12f7472b594" Jan 05 20:47:42 crc kubenswrapper[4754]: I0105 20:47:42.777202 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4mhwx"] Jan 05 20:47:42 crc kubenswrapper[4754]: I0105 
20:47:42.804526 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4mhwx"] Jan 05 20:47:42 crc kubenswrapper[4754]: I0105 20:47:42.815082 4754 scope.go:117] "RemoveContainer" containerID="e022dca3f675e05686786ded4d8befbd35b1d326a03174d22ea99c96200f2960" Jan 05 20:47:42 crc kubenswrapper[4754]: I0105 20:47:42.867741 4754 scope.go:117] "RemoveContainer" containerID="f200694a66011df627f5218a1323817ba93fc886768102503e8ce5886a035dc6" Jan 05 20:47:42 crc kubenswrapper[4754]: E0105 20:47:42.868226 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f200694a66011df627f5218a1323817ba93fc886768102503e8ce5886a035dc6\": container with ID starting with f200694a66011df627f5218a1323817ba93fc886768102503e8ce5886a035dc6 not found: ID does not exist" containerID="f200694a66011df627f5218a1323817ba93fc886768102503e8ce5886a035dc6" Jan 05 20:47:42 crc kubenswrapper[4754]: I0105 20:47:42.868264 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f200694a66011df627f5218a1323817ba93fc886768102503e8ce5886a035dc6"} err="failed to get container status \"f200694a66011df627f5218a1323817ba93fc886768102503e8ce5886a035dc6\": rpc error: code = NotFound desc = could not find container \"f200694a66011df627f5218a1323817ba93fc886768102503e8ce5886a035dc6\": container with ID starting with f200694a66011df627f5218a1323817ba93fc886768102503e8ce5886a035dc6 not found: ID does not exist" Jan 05 20:47:42 crc kubenswrapper[4754]: I0105 20:47:42.868321 4754 scope.go:117] "RemoveContainer" containerID="77e71ef2a83a00ad16e94f34ed07a984701ada7fdddf45216441a12f7472b594" Jan 05 20:47:42 crc kubenswrapper[4754]: E0105 20:47:42.868846 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77e71ef2a83a00ad16e94f34ed07a984701ada7fdddf45216441a12f7472b594\": container with ID 
starting with 77e71ef2a83a00ad16e94f34ed07a984701ada7fdddf45216441a12f7472b594 not found: ID does not exist" containerID="77e71ef2a83a00ad16e94f34ed07a984701ada7fdddf45216441a12f7472b594" Jan 05 20:47:42 crc kubenswrapper[4754]: I0105 20:47:42.868875 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77e71ef2a83a00ad16e94f34ed07a984701ada7fdddf45216441a12f7472b594"} err="failed to get container status \"77e71ef2a83a00ad16e94f34ed07a984701ada7fdddf45216441a12f7472b594\": rpc error: code = NotFound desc = could not find container \"77e71ef2a83a00ad16e94f34ed07a984701ada7fdddf45216441a12f7472b594\": container with ID starting with 77e71ef2a83a00ad16e94f34ed07a984701ada7fdddf45216441a12f7472b594 not found: ID does not exist" Jan 05 20:47:42 crc kubenswrapper[4754]: I0105 20:47:42.868899 4754 scope.go:117] "RemoveContainer" containerID="e022dca3f675e05686786ded4d8befbd35b1d326a03174d22ea99c96200f2960" Jan 05 20:47:42 crc kubenswrapper[4754]: E0105 20:47:42.869185 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e022dca3f675e05686786ded4d8befbd35b1d326a03174d22ea99c96200f2960\": container with ID starting with e022dca3f675e05686786ded4d8befbd35b1d326a03174d22ea99c96200f2960 not found: ID does not exist" containerID="e022dca3f675e05686786ded4d8befbd35b1d326a03174d22ea99c96200f2960" Jan 05 20:47:42 crc kubenswrapper[4754]: I0105 20:47:42.869226 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e022dca3f675e05686786ded4d8befbd35b1d326a03174d22ea99c96200f2960"} err="failed to get container status \"e022dca3f675e05686786ded4d8befbd35b1d326a03174d22ea99c96200f2960\": rpc error: code = NotFound desc = could not find container \"e022dca3f675e05686786ded4d8befbd35b1d326a03174d22ea99c96200f2960\": container with ID starting with e022dca3f675e05686786ded4d8befbd35b1d326a03174d22ea99c96200f2960 not found: 
ID does not exist" Jan 05 20:47:43 crc kubenswrapper[4754]: I0105 20:47:43.607333 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d76a46e-21dd-48bc-a91d-85f07441ade8" path="/var/lib/kubelet/pods/9d76a46e-21dd-48bc-a91d-85f07441ade8/volumes" Jan 05 20:47:44 crc kubenswrapper[4754]: I0105 20:47:44.770180 4754 generic.go:334] "Generic (PLEG): container finished" podID="429c47f2-c321-450e-a0ce-2fc72a26f9e3" containerID="13355fb47288d09a396b8c2083ee5ebe106bf4bf34ea2131212e2ab4a57163d1" exitCode=0 Jan 05 20:47:44 crc kubenswrapper[4754]: I0105 20:47:44.770396 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvt4b" event={"ID":"429c47f2-c321-450e-a0ce-2fc72a26f9e3","Type":"ContainerDied","Data":"13355fb47288d09a396b8c2083ee5ebe106bf4bf34ea2131212e2ab4a57163d1"} Jan 05 20:47:46 crc kubenswrapper[4754]: I0105 20:47:46.248431 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvt4b" Jan 05 20:47:46 crc kubenswrapper[4754]: I0105 20:47:46.432417 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/429c47f2-c321-450e-a0ce-2fc72a26f9e3-neutron-metadata-combined-ca-bundle\") pod \"429c47f2-c321-450e-a0ce-2fc72a26f9e3\" (UID: \"429c47f2-c321-450e-a0ce-2fc72a26f9e3\") " Jan 05 20:47:46 crc kubenswrapper[4754]: I0105 20:47:46.432511 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjxvx\" (UniqueName: \"kubernetes.io/projected/429c47f2-c321-450e-a0ce-2fc72a26f9e3-kube-api-access-mjxvx\") pod \"429c47f2-c321-450e-a0ce-2fc72a26f9e3\" (UID: \"429c47f2-c321-450e-a0ce-2fc72a26f9e3\") " Jan 05 20:47:46 crc kubenswrapper[4754]: I0105 20:47:46.432570 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/429c47f2-c321-450e-a0ce-2fc72a26f9e3-inventory\") pod \"429c47f2-c321-450e-a0ce-2fc72a26f9e3\" (UID: \"429c47f2-c321-450e-a0ce-2fc72a26f9e3\") " Jan 05 20:47:46 crc kubenswrapper[4754]: I0105 20:47:46.432653 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/429c47f2-c321-450e-a0ce-2fc72a26f9e3-ssh-key\") pod \"429c47f2-c321-450e-a0ce-2fc72a26f9e3\" (UID: \"429c47f2-c321-450e-a0ce-2fc72a26f9e3\") " Jan 05 20:47:46 crc kubenswrapper[4754]: I0105 20:47:46.432794 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/429c47f2-c321-450e-a0ce-2fc72a26f9e3-neutron-ovn-metadata-agent-neutron-config-0\") pod \"429c47f2-c321-450e-a0ce-2fc72a26f9e3\" (UID: \"429c47f2-c321-450e-a0ce-2fc72a26f9e3\") " Jan 05 20:47:46 crc kubenswrapper[4754]: I0105 20:47:46.432923 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/429c47f2-c321-450e-a0ce-2fc72a26f9e3-nova-metadata-neutron-config-0\") pod \"429c47f2-c321-450e-a0ce-2fc72a26f9e3\" (UID: \"429c47f2-c321-450e-a0ce-2fc72a26f9e3\") " Jan 05 20:47:46 crc kubenswrapper[4754]: I0105 20:47:46.443456 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/429c47f2-c321-450e-a0ce-2fc72a26f9e3-kube-api-access-mjxvx" (OuterVolumeSpecName: "kube-api-access-mjxvx") pod "429c47f2-c321-450e-a0ce-2fc72a26f9e3" (UID: "429c47f2-c321-450e-a0ce-2fc72a26f9e3"). InnerVolumeSpecName "kube-api-access-mjxvx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:47:46 crc kubenswrapper[4754]: I0105 20:47:46.458544 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/429c47f2-c321-450e-a0ce-2fc72a26f9e3-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "429c47f2-c321-450e-a0ce-2fc72a26f9e3" (UID: "429c47f2-c321-450e-a0ce-2fc72a26f9e3"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:47:46 crc kubenswrapper[4754]: I0105 20:47:46.476518 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/429c47f2-c321-450e-a0ce-2fc72a26f9e3-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "429c47f2-c321-450e-a0ce-2fc72a26f9e3" (UID: "429c47f2-c321-450e-a0ce-2fc72a26f9e3"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:47:46 crc kubenswrapper[4754]: I0105 20:47:46.481054 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/429c47f2-c321-450e-a0ce-2fc72a26f9e3-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "429c47f2-c321-450e-a0ce-2fc72a26f9e3" (UID: "429c47f2-c321-450e-a0ce-2fc72a26f9e3"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:47:46 crc kubenswrapper[4754]: I0105 20:47:46.485248 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/429c47f2-c321-450e-a0ce-2fc72a26f9e3-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "429c47f2-c321-450e-a0ce-2fc72a26f9e3" (UID: "429c47f2-c321-450e-a0ce-2fc72a26f9e3"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:47:46 crc kubenswrapper[4754]: I0105 20:47:46.490491 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/429c47f2-c321-450e-a0ce-2fc72a26f9e3-inventory" (OuterVolumeSpecName: "inventory") pod "429c47f2-c321-450e-a0ce-2fc72a26f9e3" (UID: "429c47f2-c321-450e-a0ce-2fc72a26f9e3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:47:46 crc kubenswrapper[4754]: I0105 20:47:46.537312 4754 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/429c47f2-c321-450e-a0ce-2fc72a26f9e3-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:47:46 crc kubenswrapper[4754]: I0105 20:47:46.537793 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjxvx\" (UniqueName: \"kubernetes.io/projected/429c47f2-c321-450e-a0ce-2fc72a26f9e3-kube-api-access-mjxvx\") on node \"crc\" DevicePath \"\"" Jan 05 20:47:46 crc kubenswrapper[4754]: I0105 20:47:46.537807 4754 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/429c47f2-c321-450e-a0ce-2fc72a26f9e3-inventory\") on node \"crc\" DevicePath \"\"" Jan 05 20:47:46 crc kubenswrapper[4754]: I0105 20:47:46.537818 4754 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/429c47f2-c321-450e-a0ce-2fc72a26f9e3-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 05 20:47:46 crc kubenswrapper[4754]: I0105 20:47:46.537831 4754 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/429c47f2-c321-450e-a0ce-2fc72a26f9e3-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 05 20:47:46 crc kubenswrapper[4754]: I0105 20:47:46.537842 4754 reconciler_common.go:293] "Volume 
detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/429c47f2-c321-450e-a0ce-2fc72a26f9e3-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 05 20:47:46 crc kubenswrapper[4754]: I0105 20:47:46.799147 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvt4b" event={"ID":"429c47f2-c321-450e-a0ce-2fc72a26f9e3","Type":"ContainerDied","Data":"0b5c6962d7f2e083d19a4f5ab935bb19788817fd16dee45190ce045b54407bbe"} Jan 05 20:47:46 crc kubenswrapper[4754]: I0105 20:47:46.799195 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b5c6962d7f2e083d19a4f5ab935bb19788817fd16dee45190ce045b54407bbe" Jan 05 20:47:46 crc kubenswrapper[4754]: I0105 20:47:46.799286 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvt4b" Jan 05 20:47:46 crc kubenswrapper[4754]: I0105 20:47:46.928185 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j7ddw"] Jan 05 20:47:46 crc kubenswrapper[4754]: E0105 20:47:46.928821 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d76a46e-21dd-48bc-a91d-85f07441ade8" containerName="extract-utilities" Jan 05 20:47:46 crc kubenswrapper[4754]: I0105 20:47:46.928842 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d76a46e-21dd-48bc-a91d-85f07441ade8" containerName="extract-utilities" Jan 05 20:47:46 crc kubenswrapper[4754]: E0105 20:47:46.928866 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d76a46e-21dd-48bc-a91d-85f07441ade8" containerName="registry-server" Jan 05 20:47:46 crc kubenswrapper[4754]: I0105 20:47:46.928875 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d76a46e-21dd-48bc-a91d-85f07441ade8" containerName="registry-server" Jan 05 20:47:46 crc kubenswrapper[4754]: 
E0105 20:47:46.928902 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d76a46e-21dd-48bc-a91d-85f07441ade8" containerName="extract-content" Jan 05 20:47:46 crc kubenswrapper[4754]: I0105 20:47:46.928912 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d76a46e-21dd-48bc-a91d-85f07441ade8" containerName="extract-content" Jan 05 20:47:46 crc kubenswrapper[4754]: E0105 20:47:46.928942 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="429c47f2-c321-450e-a0ce-2fc72a26f9e3" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 05 20:47:46 crc kubenswrapper[4754]: I0105 20:47:46.928951 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="429c47f2-c321-450e-a0ce-2fc72a26f9e3" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 05 20:47:46 crc kubenswrapper[4754]: I0105 20:47:46.929217 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d76a46e-21dd-48bc-a91d-85f07441ade8" containerName="registry-server" Jan 05 20:47:46 crc kubenswrapper[4754]: I0105 20:47:46.929265 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="429c47f2-c321-450e-a0ce-2fc72a26f9e3" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 05 20:47:46 crc kubenswrapper[4754]: I0105 20:47:46.930271 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j7ddw" Jan 05 20:47:46 crc kubenswrapper[4754]: I0105 20:47:46.940912 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j7ddw"] Jan 05 20:47:46 crc kubenswrapper[4754]: I0105 20:47:46.973720 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 05 20:47:46 crc kubenswrapper[4754]: I0105 20:47:46.973827 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 05 20:47:46 crc kubenswrapper[4754]: I0105 20:47:46.974050 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xdt6h" Jan 05 20:47:46 crc kubenswrapper[4754]: I0105 20:47:46.974094 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 05 20:47:46 crc kubenswrapper[4754]: I0105 20:47:46.974474 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 05 20:47:46 crc kubenswrapper[4754]: I0105 20:47:46.975778 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/a4f1c57e-be1c-4790-a806-2e23b4324280-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-j7ddw\" (UID: \"a4f1c57e-be1c-4790-a806-2e23b4324280\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j7ddw" Jan 05 20:47:46 crc kubenswrapper[4754]: I0105 20:47:46.975893 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v6ck\" (UniqueName: \"kubernetes.io/projected/a4f1c57e-be1c-4790-a806-2e23b4324280-kube-api-access-8v6ck\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-j7ddw\" (UID: 
\"a4f1c57e-be1c-4790-a806-2e23b4324280\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j7ddw" Jan 05 20:47:46 crc kubenswrapper[4754]: I0105 20:47:46.975979 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a4f1c57e-be1c-4790-a806-2e23b4324280-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-j7ddw\" (UID: \"a4f1c57e-be1c-4790-a806-2e23b4324280\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j7ddw" Jan 05 20:47:46 crc kubenswrapper[4754]: I0105 20:47:46.976191 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f1c57e-be1c-4790-a806-2e23b4324280-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-j7ddw\" (UID: \"a4f1c57e-be1c-4790-a806-2e23b4324280\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j7ddw" Jan 05 20:47:46 crc kubenswrapper[4754]: I0105 20:47:46.976268 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4f1c57e-be1c-4790-a806-2e23b4324280-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-j7ddw\" (UID: \"a4f1c57e-be1c-4790-a806-2e23b4324280\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j7ddw" Jan 05 20:47:47 crc kubenswrapper[4754]: I0105 20:47:47.078547 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/a4f1c57e-be1c-4790-a806-2e23b4324280-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-j7ddw\" (UID: \"a4f1c57e-be1c-4790-a806-2e23b4324280\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j7ddw" Jan 05 20:47:47 crc kubenswrapper[4754]: I0105 20:47:47.078841 4754 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-8v6ck\" (UniqueName: \"kubernetes.io/projected/a4f1c57e-be1c-4790-a806-2e23b4324280-kube-api-access-8v6ck\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-j7ddw\" (UID: \"a4f1c57e-be1c-4790-a806-2e23b4324280\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j7ddw" Jan 05 20:47:47 crc kubenswrapper[4754]: I0105 20:47:47.078958 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a4f1c57e-be1c-4790-a806-2e23b4324280-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-j7ddw\" (UID: \"a4f1c57e-be1c-4790-a806-2e23b4324280\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j7ddw" Jan 05 20:47:47 crc kubenswrapper[4754]: I0105 20:47:47.079116 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f1c57e-be1c-4790-a806-2e23b4324280-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-j7ddw\" (UID: \"a4f1c57e-be1c-4790-a806-2e23b4324280\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j7ddw" Jan 05 20:47:47 crc kubenswrapper[4754]: I0105 20:47:47.079228 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4f1c57e-be1c-4790-a806-2e23b4324280-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-j7ddw\" (UID: \"a4f1c57e-be1c-4790-a806-2e23b4324280\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j7ddw" Jan 05 20:47:47 crc kubenswrapper[4754]: I0105 20:47:47.083057 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/a4f1c57e-be1c-4790-a806-2e23b4324280-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-j7ddw\" (UID: \"a4f1c57e-be1c-4790-a806-2e23b4324280\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j7ddw" Jan 05 20:47:47 crc kubenswrapper[4754]: I0105 20:47:47.083791 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f1c57e-be1c-4790-a806-2e23b4324280-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-j7ddw\" (UID: \"a4f1c57e-be1c-4790-a806-2e23b4324280\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j7ddw" Jan 05 20:47:47 crc kubenswrapper[4754]: I0105 20:47:47.084195 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4f1c57e-be1c-4790-a806-2e23b4324280-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-j7ddw\" (UID: \"a4f1c57e-be1c-4790-a806-2e23b4324280\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j7ddw" Jan 05 20:47:47 crc kubenswrapper[4754]: I0105 20:47:47.086765 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a4f1c57e-be1c-4790-a806-2e23b4324280-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-j7ddw\" (UID: \"a4f1c57e-be1c-4790-a806-2e23b4324280\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j7ddw" Jan 05 20:47:47 crc kubenswrapper[4754]: I0105 20:47:47.096563 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8v6ck\" (UniqueName: \"kubernetes.io/projected/a4f1c57e-be1c-4790-a806-2e23b4324280-kube-api-access-8v6ck\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-j7ddw\" (UID: \"a4f1c57e-be1c-4790-a806-2e23b4324280\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j7ddw" Jan 05 20:47:47 crc kubenswrapper[4754]: I0105 20:47:47.292648 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j7ddw" Jan 05 20:47:47 crc kubenswrapper[4754]: I0105 20:47:47.840766 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j7ddw"] Jan 05 20:47:47 crc kubenswrapper[4754]: I0105 20:47:47.844727 4754 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 05 20:47:48 crc kubenswrapper[4754]: I0105 20:47:48.859427 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j7ddw" event={"ID":"a4f1c57e-be1c-4790-a806-2e23b4324280","Type":"ContainerStarted","Data":"854f7f8bd836caea49f67f13cd4e528f5c855fcd62c734c0f6349eb6ead9a0f1"} Jan 05 20:47:48 crc kubenswrapper[4754]: I0105 20:47:48.859723 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j7ddw" event={"ID":"a4f1c57e-be1c-4790-a806-2e23b4324280","Type":"ContainerStarted","Data":"3b4c96c54dde224815608bcf7085cc29d18f3d1fcd00729d486ea382c46f2802"} Jan 05 20:47:48 crc kubenswrapper[4754]: I0105 20:47:48.913694 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j7ddw" podStartSLOduration=2.39103665 podStartE2EDuration="2.913675482s" podCreationTimestamp="2026-01-05 20:47:46 +0000 UTC" firstStartedPulling="2026-01-05 20:47:47.8445296 +0000 UTC m=+2554.553713474" lastFinishedPulling="2026-01-05 20:47:48.367168432 +0000 UTC m=+2555.076352306" observedRunningTime="2026-01-05 20:47:48.907720296 +0000 UTC m=+2555.616904170" watchObservedRunningTime="2026-01-05 20:47:48.913675482 +0000 UTC m=+2555.622859356" Jan 05 20:49:09 crc kubenswrapper[4754]: I0105 20:49:09.864858 4754 scope.go:117] "RemoveContainer" containerID="2d0357cf1398f8c6141ecc64ae1d6084e8fdbfcd9b96c73f6024cc19ab187132" Jan 05 20:49:09 crc kubenswrapper[4754]: I0105 20:49:09.910223 
4754 scope.go:117] "RemoveContainer" containerID="19ac6a3323964f86699356bc73f761fde2369ad145a3466c525beb97635b36f8" Jan 05 20:49:09 crc kubenswrapper[4754]: I0105 20:49:09.985844 4754 scope.go:117] "RemoveContainer" containerID="1c0c2f3221dd43b45b047076b86f97800cb9f91caf99a3be1de138aaf668e493" Jan 05 20:49:48 crc kubenswrapper[4754]: I0105 20:49:48.109472 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 20:49:48 crc kubenswrapper[4754]: I0105 20:49:48.110462 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 20:50:18 crc kubenswrapper[4754]: I0105 20:50:18.109154 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 20:50:18 crc kubenswrapper[4754]: I0105 20:50:18.109878 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 20:50:48 crc kubenswrapper[4754]: I0105 20:50:48.109679 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 20:50:48 crc kubenswrapper[4754]: I0105 20:50:48.110702 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 20:50:48 crc kubenswrapper[4754]: I0105 20:50:48.110791 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" Jan 05 20:50:48 crc kubenswrapper[4754]: I0105 20:50:48.112240 4754 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"379e38a6e538bdb337f18c8539963e93491cdbc565bbafbe7ab4a0cef313f65d"} pod="openshift-machine-config-operator/machine-config-daemon-pkzls" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 05 20:50:48 crc kubenswrapper[4754]: I0105 20:50:48.112394 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" containerID="cri-o://379e38a6e538bdb337f18c8539963e93491cdbc565bbafbe7ab4a0cef313f65d" gracePeriod=600 Jan 05 20:50:48 crc kubenswrapper[4754]: I0105 20:50:48.532858 4754 generic.go:334] "Generic (PLEG): container finished" podID="1ce145f2-f010-4086-963c-23e68ff9e280" containerID="379e38a6e538bdb337f18c8539963e93491cdbc565bbafbe7ab4a0cef313f65d" exitCode=0 Jan 05 20:50:48 crc kubenswrapper[4754]: I0105 20:50:48.532934 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-pkzls" event={"ID":"1ce145f2-f010-4086-963c-23e68ff9e280","Type":"ContainerDied","Data":"379e38a6e538bdb337f18c8539963e93491cdbc565bbafbe7ab4a0cef313f65d"} Jan 05 20:50:48 crc kubenswrapper[4754]: I0105 20:50:48.533348 4754 scope.go:117] "RemoveContainer" containerID="bf00dfd553c16afd4905db93af460a6a83ec719b48885233b6e6607dece47e05" Jan 05 20:50:49 crc kubenswrapper[4754]: I0105 20:50:49.546584 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" event={"ID":"1ce145f2-f010-4086-963c-23e68ff9e280","Type":"ContainerStarted","Data":"526b0024b17a667d6699c7772274dcfecf339561520552eaa447d0a886490a5d"} Jan 05 20:51:03 crc kubenswrapper[4754]: I0105 20:51:03.400676 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-g2gxn"] Jan 05 20:51:03 crc kubenswrapper[4754]: I0105 20:51:03.405635 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g2gxn" Jan 05 20:51:03 crc kubenswrapper[4754]: I0105 20:51:03.445827 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g2gxn"] Jan 05 20:51:03 crc kubenswrapper[4754]: I0105 20:51:03.521829 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftg88\" (UniqueName: \"kubernetes.io/projected/bff7e7c1-475d-4462-bcc0-6b4d540dfb4b-kube-api-access-ftg88\") pod \"community-operators-g2gxn\" (UID: \"bff7e7c1-475d-4462-bcc0-6b4d540dfb4b\") " pod="openshift-marketplace/community-operators-g2gxn" Jan 05 20:51:03 crc kubenswrapper[4754]: I0105 20:51:03.521973 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bff7e7c1-475d-4462-bcc0-6b4d540dfb4b-utilities\") pod \"community-operators-g2gxn\" (UID: \"bff7e7c1-475d-4462-bcc0-6b4d540dfb4b\") " pod="openshift-marketplace/community-operators-g2gxn" Jan 05 20:51:03 crc kubenswrapper[4754]: I0105 20:51:03.522025 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bff7e7c1-475d-4462-bcc0-6b4d540dfb4b-catalog-content\") pod \"community-operators-g2gxn\" (UID: \"bff7e7c1-475d-4462-bcc0-6b4d540dfb4b\") " pod="openshift-marketplace/community-operators-g2gxn" Jan 05 20:51:03 crc kubenswrapper[4754]: I0105 20:51:03.625143 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bff7e7c1-475d-4462-bcc0-6b4d540dfb4b-utilities\") pod \"community-operators-g2gxn\" (UID: \"bff7e7c1-475d-4462-bcc0-6b4d540dfb4b\") " pod="openshift-marketplace/community-operators-g2gxn" Jan 05 20:51:03 crc kubenswrapper[4754]: I0105 20:51:03.625242 4754 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bff7e7c1-475d-4462-bcc0-6b4d540dfb4b-catalog-content\") pod \"community-operators-g2gxn\" (UID: \"bff7e7c1-475d-4462-bcc0-6b4d540dfb4b\") " pod="openshift-marketplace/community-operators-g2gxn" Jan 05 20:51:03 crc kubenswrapper[4754]: I0105 20:51:03.625454 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftg88\" (UniqueName: \"kubernetes.io/projected/bff7e7c1-475d-4462-bcc0-6b4d540dfb4b-kube-api-access-ftg88\") pod \"community-operators-g2gxn\" (UID: \"bff7e7c1-475d-4462-bcc0-6b4d540dfb4b\") " pod="openshift-marketplace/community-operators-g2gxn" Jan 05 20:51:03 crc kubenswrapper[4754]: I0105 20:51:03.625616 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bff7e7c1-475d-4462-bcc0-6b4d540dfb4b-utilities\") pod \"community-operators-g2gxn\" (UID: \"bff7e7c1-475d-4462-bcc0-6b4d540dfb4b\") " pod="openshift-marketplace/community-operators-g2gxn" Jan 05 20:51:03 crc kubenswrapper[4754]: I0105 20:51:03.625865 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bff7e7c1-475d-4462-bcc0-6b4d540dfb4b-catalog-content\") pod \"community-operators-g2gxn\" (UID: \"bff7e7c1-475d-4462-bcc0-6b4d540dfb4b\") " pod="openshift-marketplace/community-operators-g2gxn" Jan 05 20:51:03 crc kubenswrapper[4754]: I0105 20:51:03.651158 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftg88\" (UniqueName: \"kubernetes.io/projected/bff7e7c1-475d-4462-bcc0-6b4d540dfb4b-kube-api-access-ftg88\") pod \"community-operators-g2gxn\" (UID: \"bff7e7c1-475d-4462-bcc0-6b4d540dfb4b\") " pod="openshift-marketplace/community-operators-g2gxn" Jan 05 20:51:03 crc kubenswrapper[4754]: I0105 20:51:03.748659 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g2gxn" Jan 05 20:51:04 crc kubenswrapper[4754]: I0105 20:51:04.390703 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g2gxn"] Jan 05 20:51:04 crc kubenswrapper[4754]: I0105 20:51:04.762332 4754 generic.go:334] "Generic (PLEG): container finished" podID="bff7e7c1-475d-4462-bcc0-6b4d540dfb4b" containerID="1862269d39181163b3a588316b489be2b36167f33f10cad1f1dfef5c515c1cd5" exitCode=0 Jan 05 20:51:04 crc kubenswrapper[4754]: I0105 20:51:04.762472 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g2gxn" event={"ID":"bff7e7c1-475d-4462-bcc0-6b4d540dfb4b","Type":"ContainerDied","Data":"1862269d39181163b3a588316b489be2b36167f33f10cad1f1dfef5c515c1cd5"} Jan 05 20:51:04 crc kubenswrapper[4754]: I0105 20:51:04.762752 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g2gxn" event={"ID":"bff7e7c1-475d-4462-bcc0-6b4d540dfb4b","Type":"ContainerStarted","Data":"64244cc5325529174386d9eb5bd6cfd6f9bff3ed2164675be4b1a78ec8872fdd"} Jan 05 20:51:06 crc kubenswrapper[4754]: I0105 20:51:06.789676 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g2gxn" event={"ID":"bff7e7c1-475d-4462-bcc0-6b4d540dfb4b","Type":"ContainerStarted","Data":"8c54e0254d3d02762770c85f7599d3533f4cad9c6538b6c067d3480e30ed8ade"} Jan 05 20:51:07 crc kubenswrapper[4754]: I0105 20:51:07.804634 4754 generic.go:334] "Generic (PLEG): container finished" podID="bff7e7c1-475d-4462-bcc0-6b4d540dfb4b" containerID="8c54e0254d3d02762770c85f7599d3533f4cad9c6538b6c067d3480e30ed8ade" exitCode=0 Jan 05 20:51:07 crc kubenswrapper[4754]: I0105 20:51:07.804690 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g2gxn" 
event={"ID":"bff7e7c1-475d-4462-bcc0-6b4d540dfb4b","Type":"ContainerDied","Data":"8c54e0254d3d02762770c85f7599d3533f4cad9c6538b6c067d3480e30ed8ade"} Jan 05 20:51:08 crc kubenswrapper[4754]: I0105 20:51:08.822468 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g2gxn" event={"ID":"bff7e7c1-475d-4462-bcc0-6b4d540dfb4b","Type":"ContainerStarted","Data":"a94334a3324eaf0f1dc30e7f631607aab3724f9ff066a897b1b277b72a3a0663"} Jan 05 20:51:08 crc kubenswrapper[4754]: I0105 20:51:08.844815 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-g2gxn" podStartSLOduration=2.338003284 podStartE2EDuration="5.844795727s" podCreationTimestamp="2026-01-05 20:51:03 +0000 UTC" firstStartedPulling="2026-01-05 20:51:04.766395914 +0000 UTC m=+2751.475579798" lastFinishedPulling="2026-01-05 20:51:08.273188357 +0000 UTC m=+2754.982372241" observedRunningTime="2026-01-05 20:51:08.839063336 +0000 UTC m=+2755.548247220" watchObservedRunningTime="2026-01-05 20:51:08.844795727 +0000 UTC m=+2755.553979601" Jan 05 20:51:13 crc kubenswrapper[4754]: I0105 20:51:13.749621 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-g2gxn" Jan 05 20:51:13 crc kubenswrapper[4754]: I0105 20:51:13.750406 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-g2gxn" Jan 05 20:51:13 crc kubenswrapper[4754]: I0105 20:51:13.837321 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-g2gxn" Jan 05 20:51:13 crc kubenswrapper[4754]: I0105 20:51:13.966686 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-g2gxn" Jan 05 20:51:14 crc kubenswrapper[4754]: I0105 20:51:14.088919 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-g2gxn"] Jan 05 20:51:15 crc kubenswrapper[4754]: I0105 20:51:15.903779 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-g2gxn" podUID="bff7e7c1-475d-4462-bcc0-6b4d540dfb4b" containerName="registry-server" containerID="cri-o://a94334a3324eaf0f1dc30e7f631607aab3724f9ff066a897b1b277b72a3a0663" gracePeriod=2 Jan 05 20:51:16 crc kubenswrapper[4754]: I0105 20:51:16.475753 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g2gxn" Jan 05 20:51:16 crc kubenswrapper[4754]: I0105 20:51:16.588855 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bff7e7c1-475d-4462-bcc0-6b4d540dfb4b-utilities\") pod \"bff7e7c1-475d-4462-bcc0-6b4d540dfb4b\" (UID: \"bff7e7c1-475d-4462-bcc0-6b4d540dfb4b\") " Jan 05 20:51:16 crc kubenswrapper[4754]: I0105 20:51:16.588952 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bff7e7c1-475d-4462-bcc0-6b4d540dfb4b-catalog-content\") pod \"bff7e7c1-475d-4462-bcc0-6b4d540dfb4b\" (UID: \"bff7e7c1-475d-4462-bcc0-6b4d540dfb4b\") " Jan 05 20:51:16 crc kubenswrapper[4754]: I0105 20:51:16.589031 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftg88\" (UniqueName: \"kubernetes.io/projected/bff7e7c1-475d-4462-bcc0-6b4d540dfb4b-kube-api-access-ftg88\") pod \"bff7e7c1-475d-4462-bcc0-6b4d540dfb4b\" (UID: \"bff7e7c1-475d-4462-bcc0-6b4d540dfb4b\") " Jan 05 20:51:16 crc kubenswrapper[4754]: I0105 20:51:16.589885 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bff7e7c1-475d-4462-bcc0-6b4d540dfb4b-utilities" (OuterVolumeSpecName: "utilities") pod "bff7e7c1-475d-4462-bcc0-6b4d540dfb4b" (UID: 
"bff7e7c1-475d-4462-bcc0-6b4d540dfb4b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:51:16 crc kubenswrapper[4754]: I0105 20:51:16.594613 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bff7e7c1-475d-4462-bcc0-6b4d540dfb4b-kube-api-access-ftg88" (OuterVolumeSpecName: "kube-api-access-ftg88") pod "bff7e7c1-475d-4462-bcc0-6b4d540dfb4b" (UID: "bff7e7c1-475d-4462-bcc0-6b4d540dfb4b"). InnerVolumeSpecName "kube-api-access-ftg88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:51:16 crc kubenswrapper[4754]: I0105 20:51:16.681924 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bff7e7c1-475d-4462-bcc0-6b4d540dfb4b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bff7e7c1-475d-4462-bcc0-6b4d540dfb4b" (UID: "bff7e7c1-475d-4462-bcc0-6b4d540dfb4b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:51:16 crc kubenswrapper[4754]: I0105 20:51:16.692728 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bff7e7c1-475d-4462-bcc0-6b4d540dfb4b-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 20:51:16 crc kubenswrapper[4754]: I0105 20:51:16.692763 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bff7e7c1-475d-4462-bcc0-6b4d540dfb4b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 20:51:16 crc kubenswrapper[4754]: I0105 20:51:16.692774 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftg88\" (UniqueName: \"kubernetes.io/projected/bff7e7c1-475d-4462-bcc0-6b4d540dfb4b-kube-api-access-ftg88\") on node \"crc\" DevicePath \"\"" Jan 05 20:51:16 crc kubenswrapper[4754]: I0105 20:51:16.919796 4754 generic.go:334] "Generic (PLEG): container finished" 
podID="bff7e7c1-475d-4462-bcc0-6b4d540dfb4b" containerID="a94334a3324eaf0f1dc30e7f631607aab3724f9ff066a897b1b277b72a3a0663" exitCode=0 Jan 05 20:51:16 crc kubenswrapper[4754]: I0105 20:51:16.919836 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g2gxn" event={"ID":"bff7e7c1-475d-4462-bcc0-6b4d540dfb4b","Type":"ContainerDied","Data":"a94334a3324eaf0f1dc30e7f631607aab3724f9ff066a897b1b277b72a3a0663"} Jan 05 20:51:16 crc kubenswrapper[4754]: I0105 20:51:16.919871 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g2gxn" event={"ID":"bff7e7c1-475d-4462-bcc0-6b4d540dfb4b","Type":"ContainerDied","Data":"64244cc5325529174386d9eb5bd6cfd6f9bff3ed2164675be4b1a78ec8872fdd"} Jan 05 20:51:16 crc kubenswrapper[4754]: I0105 20:51:16.919890 4754 scope.go:117] "RemoveContainer" containerID="a94334a3324eaf0f1dc30e7f631607aab3724f9ff066a897b1b277b72a3a0663" Jan 05 20:51:16 crc kubenswrapper[4754]: I0105 20:51:16.919925 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g2gxn" Jan 05 20:51:16 crc kubenswrapper[4754]: I0105 20:51:16.948980 4754 scope.go:117] "RemoveContainer" containerID="8c54e0254d3d02762770c85f7599d3533f4cad9c6538b6c067d3480e30ed8ade" Jan 05 20:51:16 crc kubenswrapper[4754]: I0105 20:51:16.981501 4754 scope.go:117] "RemoveContainer" containerID="1862269d39181163b3a588316b489be2b36167f33f10cad1f1dfef5c515c1cd5" Jan 05 20:51:16 crc kubenswrapper[4754]: I0105 20:51:16.985447 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g2gxn"] Jan 05 20:51:16 crc kubenswrapper[4754]: I0105 20:51:16.996546 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-g2gxn"] Jan 05 20:51:17 crc kubenswrapper[4754]: I0105 20:51:17.082409 4754 scope.go:117] "RemoveContainer" containerID="a94334a3324eaf0f1dc30e7f631607aab3724f9ff066a897b1b277b72a3a0663" Jan 05 20:51:17 crc kubenswrapper[4754]: E0105 20:51:17.083090 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a94334a3324eaf0f1dc30e7f631607aab3724f9ff066a897b1b277b72a3a0663\": container with ID starting with a94334a3324eaf0f1dc30e7f631607aab3724f9ff066a897b1b277b72a3a0663 not found: ID does not exist" containerID="a94334a3324eaf0f1dc30e7f631607aab3724f9ff066a897b1b277b72a3a0663" Jan 05 20:51:17 crc kubenswrapper[4754]: I0105 20:51:17.083115 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a94334a3324eaf0f1dc30e7f631607aab3724f9ff066a897b1b277b72a3a0663"} err="failed to get container status \"a94334a3324eaf0f1dc30e7f631607aab3724f9ff066a897b1b277b72a3a0663\": rpc error: code = NotFound desc = could not find container \"a94334a3324eaf0f1dc30e7f631607aab3724f9ff066a897b1b277b72a3a0663\": container with ID starting with a94334a3324eaf0f1dc30e7f631607aab3724f9ff066a897b1b277b72a3a0663 not 
found: ID does not exist" Jan 05 20:51:17 crc kubenswrapper[4754]: I0105 20:51:17.083140 4754 scope.go:117] "RemoveContainer" containerID="8c54e0254d3d02762770c85f7599d3533f4cad9c6538b6c067d3480e30ed8ade" Jan 05 20:51:17 crc kubenswrapper[4754]: E0105 20:51:17.083678 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c54e0254d3d02762770c85f7599d3533f4cad9c6538b6c067d3480e30ed8ade\": container with ID starting with 8c54e0254d3d02762770c85f7599d3533f4cad9c6538b6c067d3480e30ed8ade not found: ID does not exist" containerID="8c54e0254d3d02762770c85f7599d3533f4cad9c6538b6c067d3480e30ed8ade" Jan 05 20:51:17 crc kubenswrapper[4754]: I0105 20:51:17.083739 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c54e0254d3d02762770c85f7599d3533f4cad9c6538b6c067d3480e30ed8ade"} err="failed to get container status \"8c54e0254d3d02762770c85f7599d3533f4cad9c6538b6c067d3480e30ed8ade\": rpc error: code = NotFound desc = could not find container \"8c54e0254d3d02762770c85f7599d3533f4cad9c6538b6c067d3480e30ed8ade\": container with ID starting with 8c54e0254d3d02762770c85f7599d3533f4cad9c6538b6c067d3480e30ed8ade not found: ID does not exist" Jan 05 20:51:17 crc kubenswrapper[4754]: I0105 20:51:17.083785 4754 scope.go:117] "RemoveContainer" containerID="1862269d39181163b3a588316b489be2b36167f33f10cad1f1dfef5c515c1cd5" Jan 05 20:51:17 crc kubenswrapper[4754]: E0105 20:51:17.084172 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1862269d39181163b3a588316b489be2b36167f33f10cad1f1dfef5c515c1cd5\": container with ID starting with 1862269d39181163b3a588316b489be2b36167f33f10cad1f1dfef5c515c1cd5 not found: ID does not exist" containerID="1862269d39181163b3a588316b489be2b36167f33f10cad1f1dfef5c515c1cd5" Jan 05 20:51:17 crc kubenswrapper[4754]: I0105 20:51:17.084224 4754 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1862269d39181163b3a588316b489be2b36167f33f10cad1f1dfef5c515c1cd5"} err="failed to get container status \"1862269d39181163b3a588316b489be2b36167f33f10cad1f1dfef5c515c1cd5\": rpc error: code = NotFound desc = could not find container \"1862269d39181163b3a588316b489be2b36167f33f10cad1f1dfef5c515c1cd5\": container with ID starting with 1862269d39181163b3a588316b489be2b36167f33f10cad1f1dfef5c515c1cd5 not found: ID does not exist" Jan 05 20:51:17 crc kubenswrapper[4754]: I0105 20:51:17.607260 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bff7e7c1-475d-4462-bcc0-6b4d540dfb4b" path="/var/lib/kubelet/pods/bff7e7c1-475d-4462-bcc0-6b4d540dfb4b/volumes" Jan 05 20:52:48 crc kubenswrapper[4754]: I0105 20:52:48.111618 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 20:52:48 crc kubenswrapper[4754]: I0105 20:52:48.112560 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 20:52:54 crc kubenswrapper[4754]: I0105 20:52:54.247913 4754 generic.go:334] "Generic (PLEG): container finished" podID="a4f1c57e-be1c-4790-a806-2e23b4324280" containerID="854f7f8bd836caea49f67f13cd4e528f5c855fcd62c734c0f6349eb6ead9a0f1" exitCode=0 Jan 05 20:52:54 crc kubenswrapper[4754]: I0105 20:52:54.248814 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j7ddw" 
event={"ID":"a4f1c57e-be1c-4790-a806-2e23b4324280","Type":"ContainerDied","Data":"854f7f8bd836caea49f67f13cd4e528f5c855fcd62c734c0f6349eb6ead9a0f1"} Jan 05 20:52:55 crc kubenswrapper[4754]: I0105 20:52:55.829338 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j7ddw" Jan 05 20:52:55 crc kubenswrapper[4754]: I0105 20:52:55.994455 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a4f1c57e-be1c-4790-a806-2e23b4324280-ssh-key\") pod \"a4f1c57e-be1c-4790-a806-2e23b4324280\" (UID: \"a4f1c57e-be1c-4790-a806-2e23b4324280\") " Jan 05 20:52:55 crc kubenswrapper[4754]: I0105 20:52:55.995288 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4f1c57e-be1c-4790-a806-2e23b4324280-inventory\") pod \"a4f1c57e-be1c-4790-a806-2e23b4324280\" (UID: \"a4f1c57e-be1c-4790-a806-2e23b4324280\") " Jan 05 20:52:55 crc kubenswrapper[4754]: I0105 20:52:55.995527 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/a4f1c57e-be1c-4790-a806-2e23b4324280-libvirt-secret-0\") pod \"a4f1c57e-be1c-4790-a806-2e23b4324280\" (UID: \"a4f1c57e-be1c-4790-a806-2e23b4324280\") " Jan 05 20:52:55 crc kubenswrapper[4754]: I0105 20:52:55.995696 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8v6ck\" (UniqueName: \"kubernetes.io/projected/a4f1c57e-be1c-4790-a806-2e23b4324280-kube-api-access-8v6ck\") pod \"a4f1c57e-be1c-4790-a806-2e23b4324280\" (UID: \"a4f1c57e-be1c-4790-a806-2e23b4324280\") " Jan 05 20:52:55 crc kubenswrapper[4754]: I0105 20:52:55.995838 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a4f1c57e-be1c-4790-a806-2e23b4324280-libvirt-combined-ca-bundle\") pod \"a4f1c57e-be1c-4790-a806-2e23b4324280\" (UID: \"a4f1c57e-be1c-4790-a806-2e23b4324280\") " Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 20:52:56.002446 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4f1c57e-be1c-4790-a806-2e23b4324280-kube-api-access-8v6ck" (OuterVolumeSpecName: "kube-api-access-8v6ck") pod "a4f1c57e-be1c-4790-a806-2e23b4324280" (UID: "a4f1c57e-be1c-4790-a806-2e23b4324280"). InnerVolumeSpecName "kube-api-access-8v6ck". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 20:52:56.002457 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4f1c57e-be1c-4790-a806-2e23b4324280-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "a4f1c57e-be1c-4790-a806-2e23b4324280" (UID: "a4f1c57e-be1c-4790-a806-2e23b4324280"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 20:52:56.029369 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4f1c57e-be1c-4790-a806-2e23b4324280-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a4f1c57e-be1c-4790-a806-2e23b4324280" (UID: "a4f1c57e-be1c-4790-a806-2e23b4324280"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 20:52:56.044687 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4f1c57e-be1c-4790-a806-2e23b4324280-inventory" (OuterVolumeSpecName: "inventory") pod "a4f1c57e-be1c-4790-a806-2e23b4324280" (UID: "a4f1c57e-be1c-4790-a806-2e23b4324280"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 20:52:56.068568 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4f1c57e-be1c-4790-a806-2e23b4324280-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "a4f1c57e-be1c-4790-a806-2e23b4324280" (UID: "a4f1c57e-be1c-4790-a806-2e23b4324280"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 20:52:56.101286 4754 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f1c57e-be1c-4790-a806-2e23b4324280-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 20:52:56.101357 4754 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a4f1c57e-be1c-4790-a806-2e23b4324280-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 20:52:56.101371 4754 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4f1c57e-be1c-4790-a806-2e23b4324280-inventory\") on node \"crc\" DevicePath \"\"" Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 20:52:56.101383 4754 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/a4f1c57e-be1c-4790-a806-2e23b4324280-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 20:52:56.101396 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8v6ck\" (UniqueName: \"kubernetes.io/projected/a4f1c57e-be1c-4790-a806-2e23b4324280-kube-api-access-8v6ck\") on node \"crc\" DevicePath \"\"" Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 20:52:56.271540 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j7ddw" event={"ID":"a4f1c57e-be1c-4790-a806-2e23b4324280","Type":"ContainerDied","Data":"3b4c96c54dde224815608bcf7085cc29d18f3d1fcd00729d486ea382c46f2802"} Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 20:52:56.271579 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b4c96c54dde224815608bcf7085cc29d18f3d1fcd00729d486ea382c46f2802" Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 20:52:56.271624 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j7ddw" Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 20:52:56.401273 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-cbc6l"] Jan 05 20:52:56 crc kubenswrapper[4754]: E0105 20:52:56.402794 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bff7e7c1-475d-4462-bcc0-6b4d540dfb4b" containerName="registry-server" Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 20:52:56.402816 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="bff7e7c1-475d-4462-bcc0-6b4d540dfb4b" containerName="registry-server" Jan 05 20:52:56 crc kubenswrapper[4754]: E0105 20:52:56.402852 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4f1c57e-be1c-4790-a806-2e23b4324280" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 20:52:56.402861 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4f1c57e-be1c-4790-a806-2e23b4324280" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 05 20:52:56 crc kubenswrapper[4754]: E0105 20:52:56.402895 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bff7e7c1-475d-4462-bcc0-6b4d540dfb4b" containerName="extract-utilities" Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 20:52:56.402905 4754 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="bff7e7c1-475d-4462-bcc0-6b4d540dfb4b" containerName="extract-utilities" Jan 05 20:52:56 crc kubenswrapper[4754]: E0105 20:52:56.402918 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bff7e7c1-475d-4462-bcc0-6b4d540dfb4b" containerName="extract-content" Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 20:52:56.402924 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="bff7e7c1-475d-4462-bcc0-6b4d540dfb4b" containerName="extract-content" Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 20:52:56.403206 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="bff7e7c1-475d-4462-bcc0-6b4d540dfb4b" containerName="registry-server" Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 20:52:56.403252 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4f1c57e-be1c-4790-a806-2e23b4324280" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 20:52:56.404549 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cbc6l" Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 20:52:56.407867 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 20:52:56.408184 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xdt6h" Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 20:52:56.408551 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 20:52:56.408729 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 20:52:56.408985 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 20:52:56.409164 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 20:52:56.409347 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 20:52:56.417687 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-cbc6l"] Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 20:52:56.511872 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/ed63fc99-0c8f-47ce-8d5d-71b98edf3a25-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cbc6l\" (UID: \"ed63fc99-0c8f-47ce-8d5d-71b98edf3a25\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cbc6l" Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 
20:52:56.511931 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ed63fc99-0c8f-47ce-8d5d-71b98edf3a25-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cbc6l\" (UID: \"ed63fc99-0c8f-47ce-8d5d-71b98edf3a25\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cbc6l" Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 20:52:56.511972 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ed63fc99-0c8f-47ce-8d5d-71b98edf3a25-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cbc6l\" (UID: \"ed63fc99-0c8f-47ce-8d5d-71b98edf3a25\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cbc6l" Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 20:52:56.512169 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed63fc99-0c8f-47ce-8d5d-71b98edf3a25-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cbc6l\" (UID: \"ed63fc99-0c8f-47ce-8d5d-71b98edf3a25\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cbc6l" Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 20:52:56.512651 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ed63fc99-0c8f-47ce-8d5d-71b98edf3a25-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cbc6l\" (UID: \"ed63fc99-0c8f-47ce-8d5d-71b98edf3a25\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cbc6l" Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 20:52:56.512760 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/ed63fc99-0c8f-47ce-8d5d-71b98edf3a25-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cbc6l\" (UID: \"ed63fc99-0c8f-47ce-8d5d-71b98edf3a25\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cbc6l" Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 20:52:56.512873 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sn4l\" (UniqueName: \"kubernetes.io/projected/ed63fc99-0c8f-47ce-8d5d-71b98edf3a25-kube-api-access-9sn4l\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cbc6l\" (UID: \"ed63fc99-0c8f-47ce-8d5d-71b98edf3a25\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cbc6l" Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 20:52:56.513124 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ed63fc99-0c8f-47ce-8d5d-71b98edf3a25-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cbc6l\" (UID: \"ed63fc99-0c8f-47ce-8d5d-71b98edf3a25\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cbc6l" Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 20:52:56.513379 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed63fc99-0c8f-47ce-8d5d-71b98edf3a25-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cbc6l\" (UID: \"ed63fc99-0c8f-47ce-8d5d-71b98edf3a25\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cbc6l" Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 20:52:56.616666 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ed63fc99-0c8f-47ce-8d5d-71b98edf3a25-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cbc6l\" (UID: \"ed63fc99-0c8f-47ce-8d5d-71b98edf3a25\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cbc6l" Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 20:52:56.616794 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed63fc99-0c8f-47ce-8d5d-71b98edf3a25-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cbc6l\" (UID: \"ed63fc99-0c8f-47ce-8d5d-71b98edf3a25\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cbc6l" Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 20:52:56.616836 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ed63fc99-0c8f-47ce-8d5d-71b98edf3a25-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cbc6l\" (UID: \"ed63fc99-0c8f-47ce-8d5d-71b98edf3a25\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cbc6l" Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 20:52:56.616900 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ed63fc99-0c8f-47ce-8d5d-71b98edf3a25-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cbc6l\" (UID: \"ed63fc99-0c8f-47ce-8d5d-71b98edf3a25\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cbc6l" Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 20:52:56.616973 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sn4l\" (UniqueName: \"kubernetes.io/projected/ed63fc99-0c8f-47ce-8d5d-71b98edf3a25-kube-api-access-9sn4l\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cbc6l\" (UID: \"ed63fc99-0c8f-47ce-8d5d-71b98edf3a25\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cbc6l" Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 20:52:56.617112 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/ed63fc99-0c8f-47ce-8d5d-71b98edf3a25-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cbc6l\" (UID: \"ed63fc99-0c8f-47ce-8d5d-71b98edf3a25\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cbc6l" Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 20:52:56.617260 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed63fc99-0c8f-47ce-8d5d-71b98edf3a25-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cbc6l\" (UID: \"ed63fc99-0c8f-47ce-8d5d-71b98edf3a25\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cbc6l" Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 20:52:56.617406 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/ed63fc99-0c8f-47ce-8d5d-71b98edf3a25-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cbc6l\" (UID: \"ed63fc99-0c8f-47ce-8d5d-71b98edf3a25\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cbc6l" Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 20:52:56.617478 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ed63fc99-0c8f-47ce-8d5d-71b98edf3a25-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cbc6l\" (UID: \"ed63fc99-0c8f-47ce-8d5d-71b98edf3a25\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cbc6l" Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 20:52:56.618399 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/ed63fc99-0c8f-47ce-8d5d-71b98edf3a25-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cbc6l\" (UID: \"ed63fc99-0c8f-47ce-8d5d-71b98edf3a25\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cbc6l" Jan 05 20:52:56 
crc kubenswrapper[4754]: I0105 20:52:56.622112 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ed63fc99-0c8f-47ce-8d5d-71b98edf3a25-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cbc6l\" (UID: \"ed63fc99-0c8f-47ce-8d5d-71b98edf3a25\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cbc6l" Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 20:52:56.622494 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ed63fc99-0c8f-47ce-8d5d-71b98edf3a25-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cbc6l\" (UID: \"ed63fc99-0c8f-47ce-8d5d-71b98edf3a25\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cbc6l" Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 20:52:56.622536 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ed63fc99-0c8f-47ce-8d5d-71b98edf3a25-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cbc6l\" (UID: \"ed63fc99-0c8f-47ce-8d5d-71b98edf3a25\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cbc6l" Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 20:52:56.623591 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ed63fc99-0c8f-47ce-8d5d-71b98edf3a25-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cbc6l\" (UID: \"ed63fc99-0c8f-47ce-8d5d-71b98edf3a25\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cbc6l" Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 20:52:56.623779 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ed63fc99-0c8f-47ce-8d5d-71b98edf3a25-nova-migration-ssh-key-0\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-cbc6l\" (UID: \"ed63fc99-0c8f-47ce-8d5d-71b98edf3a25\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cbc6l" Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 20:52:56.634157 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed63fc99-0c8f-47ce-8d5d-71b98edf3a25-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cbc6l\" (UID: \"ed63fc99-0c8f-47ce-8d5d-71b98edf3a25\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cbc6l" Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 20:52:56.634198 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed63fc99-0c8f-47ce-8d5d-71b98edf3a25-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cbc6l\" (UID: \"ed63fc99-0c8f-47ce-8d5d-71b98edf3a25\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cbc6l" Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 20:52:56.646422 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sn4l\" (UniqueName: \"kubernetes.io/projected/ed63fc99-0c8f-47ce-8d5d-71b98edf3a25-kube-api-access-9sn4l\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cbc6l\" (UID: \"ed63fc99-0c8f-47ce-8d5d-71b98edf3a25\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cbc6l" Jan 05 20:52:56 crc kubenswrapper[4754]: I0105 20:52:56.741844 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cbc6l" Jan 05 20:52:57 crc kubenswrapper[4754]: I0105 20:52:57.391720 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-cbc6l"] Jan 05 20:52:57 crc kubenswrapper[4754]: I0105 20:52:57.402008 4754 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 05 20:52:58 crc kubenswrapper[4754]: I0105 20:52:58.303207 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cbc6l" event={"ID":"ed63fc99-0c8f-47ce-8d5d-71b98edf3a25","Type":"ContainerStarted","Data":"e6848f705077fbde19836a02138d14420674d6261f7239dcd65b68e8e680cd2c"} Jan 05 20:52:59 crc kubenswrapper[4754]: I0105 20:52:59.319925 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cbc6l" event={"ID":"ed63fc99-0c8f-47ce-8d5d-71b98edf3a25","Type":"ContainerStarted","Data":"e7993d2ea8dca3fa32be9ee44fc32046187638a1c10c18ca6bb60fd32efdeb38"} Jan 05 20:52:59 crc kubenswrapper[4754]: I0105 20:52:59.353448 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cbc6l" podStartSLOduration=2.788151362 podStartE2EDuration="3.353428495s" podCreationTimestamp="2026-01-05 20:52:56 +0000 UTC" firstStartedPulling="2026-01-05 20:52:57.401711973 +0000 UTC m=+2864.110895857" lastFinishedPulling="2026-01-05 20:52:57.966989086 +0000 UTC m=+2864.676172990" observedRunningTime="2026-01-05 20:52:59.348581078 +0000 UTC m=+2866.057764962" watchObservedRunningTime="2026-01-05 20:52:59.353428495 +0000 UTC m=+2866.062612379" Jan 05 20:53:08 crc kubenswrapper[4754]: I0105 20:53:08.376281 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-d68tn"] Jan 05 20:53:08 crc kubenswrapper[4754]: I0105 20:53:08.381929 4754 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d68tn" Jan 05 20:53:08 crc kubenswrapper[4754]: I0105 20:53:08.391651 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d68tn"] Jan 05 20:53:08 crc kubenswrapper[4754]: I0105 20:53:08.472546 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhh59\" (UniqueName: \"kubernetes.io/projected/cd386f91-6c6f-44cf-b1b2-fbf33d1c0928-kube-api-access-qhh59\") pod \"redhat-marketplace-d68tn\" (UID: \"cd386f91-6c6f-44cf-b1b2-fbf33d1c0928\") " pod="openshift-marketplace/redhat-marketplace-d68tn" Jan 05 20:53:08 crc kubenswrapper[4754]: I0105 20:53:08.473245 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd386f91-6c6f-44cf-b1b2-fbf33d1c0928-utilities\") pod \"redhat-marketplace-d68tn\" (UID: \"cd386f91-6c6f-44cf-b1b2-fbf33d1c0928\") " pod="openshift-marketplace/redhat-marketplace-d68tn" Jan 05 20:53:08 crc kubenswrapper[4754]: I0105 20:53:08.473531 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd386f91-6c6f-44cf-b1b2-fbf33d1c0928-catalog-content\") pod \"redhat-marketplace-d68tn\" (UID: \"cd386f91-6c6f-44cf-b1b2-fbf33d1c0928\") " pod="openshift-marketplace/redhat-marketplace-d68tn" Jan 05 20:53:08 crc kubenswrapper[4754]: I0105 20:53:08.576393 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd386f91-6c6f-44cf-b1b2-fbf33d1c0928-catalog-content\") pod \"redhat-marketplace-d68tn\" (UID: \"cd386f91-6c6f-44cf-b1b2-fbf33d1c0928\") " pod="openshift-marketplace/redhat-marketplace-d68tn" Jan 05 20:53:08 crc kubenswrapper[4754]: I0105 20:53:08.576491 4754 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhh59\" (UniqueName: \"kubernetes.io/projected/cd386f91-6c6f-44cf-b1b2-fbf33d1c0928-kube-api-access-qhh59\") pod \"redhat-marketplace-d68tn\" (UID: \"cd386f91-6c6f-44cf-b1b2-fbf33d1c0928\") " pod="openshift-marketplace/redhat-marketplace-d68tn" Jan 05 20:53:08 crc kubenswrapper[4754]: I0105 20:53:08.576685 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd386f91-6c6f-44cf-b1b2-fbf33d1c0928-utilities\") pod \"redhat-marketplace-d68tn\" (UID: \"cd386f91-6c6f-44cf-b1b2-fbf33d1c0928\") " pod="openshift-marketplace/redhat-marketplace-d68tn" Jan 05 20:53:08 crc kubenswrapper[4754]: I0105 20:53:08.577003 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd386f91-6c6f-44cf-b1b2-fbf33d1c0928-catalog-content\") pod \"redhat-marketplace-d68tn\" (UID: \"cd386f91-6c6f-44cf-b1b2-fbf33d1c0928\") " pod="openshift-marketplace/redhat-marketplace-d68tn" Jan 05 20:53:08 crc kubenswrapper[4754]: I0105 20:53:08.577279 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd386f91-6c6f-44cf-b1b2-fbf33d1c0928-utilities\") pod \"redhat-marketplace-d68tn\" (UID: \"cd386f91-6c6f-44cf-b1b2-fbf33d1c0928\") " pod="openshift-marketplace/redhat-marketplace-d68tn" Jan 05 20:53:08 crc kubenswrapper[4754]: I0105 20:53:08.606803 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhh59\" (UniqueName: \"kubernetes.io/projected/cd386f91-6c6f-44cf-b1b2-fbf33d1c0928-kube-api-access-qhh59\") pod \"redhat-marketplace-d68tn\" (UID: \"cd386f91-6c6f-44cf-b1b2-fbf33d1c0928\") " pod="openshift-marketplace/redhat-marketplace-d68tn" Jan 05 20:53:08 crc kubenswrapper[4754]: I0105 20:53:08.726189 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d68tn" Jan 05 20:53:09 crc kubenswrapper[4754]: I0105 20:53:09.312329 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d68tn"] Jan 05 20:53:09 crc kubenswrapper[4754]: I0105 20:53:09.486862 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d68tn" event={"ID":"cd386f91-6c6f-44cf-b1b2-fbf33d1c0928","Type":"ContainerStarted","Data":"a15ce167a83217e0ed8cab5aefb776277497fe35c168c4de669b49837a24f91d"} Jan 05 20:53:10 crc kubenswrapper[4754]: I0105 20:53:10.499602 4754 generic.go:334] "Generic (PLEG): container finished" podID="cd386f91-6c6f-44cf-b1b2-fbf33d1c0928" containerID="31020518965edaabcbd017f83d048da4028e64b7df2fca40502beefb8df7a185" exitCode=0 Jan 05 20:53:10 crc kubenswrapper[4754]: I0105 20:53:10.499699 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d68tn" event={"ID":"cd386f91-6c6f-44cf-b1b2-fbf33d1c0928","Type":"ContainerDied","Data":"31020518965edaabcbd017f83d048da4028e64b7df2fca40502beefb8df7a185"} Jan 05 20:53:12 crc kubenswrapper[4754]: I0105 20:53:12.536379 4754 generic.go:334] "Generic (PLEG): container finished" podID="cd386f91-6c6f-44cf-b1b2-fbf33d1c0928" containerID="bd3065d09cf2e9850551e4c66f99c41f12e7f13bc33f03f2acbbbe520d55d53b" exitCode=0 Jan 05 20:53:12 crc kubenswrapper[4754]: I0105 20:53:12.536476 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d68tn" event={"ID":"cd386f91-6c6f-44cf-b1b2-fbf33d1c0928","Type":"ContainerDied","Data":"bd3065d09cf2e9850551e4c66f99c41f12e7f13bc33f03f2acbbbe520d55d53b"} Jan 05 20:53:14 crc kubenswrapper[4754]: I0105 20:53:14.591645 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d68tn" 
event={"ID":"cd386f91-6c6f-44cf-b1b2-fbf33d1c0928","Type":"ContainerStarted","Data":"c1cc0e9d47970ddf20d038e4d524f5e1174abe98963efd0af73ed8b5a1919181"} Jan 05 20:53:14 crc kubenswrapper[4754]: I0105 20:53:14.610144 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-d68tn" podStartSLOduration=3.7664090740000002 podStartE2EDuration="6.610100453s" podCreationTimestamp="2026-01-05 20:53:08 +0000 UTC" firstStartedPulling="2026-01-05 20:53:10.502199295 +0000 UTC m=+2877.211383169" lastFinishedPulling="2026-01-05 20:53:13.345890644 +0000 UTC m=+2880.055074548" observedRunningTime="2026-01-05 20:53:14.607119024 +0000 UTC m=+2881.316302948" watchObservedRunningTime="2026-01-05 20:53:14.610100453 +0000 UTC m=+2881.319284337" Jan 05 20:53:18 crc kubenswrapper[4754]: I0105 20:53:18.109411 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 20:53:18 crc kubenswrapper[4754]: I0105 20:53:18.110135 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 20:53:18 crc kubenswrapper[4754]: I0105 20:53:18.728446 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-d68tn" Jan 05 20:53:18 crc kubenswrapper[4754]: I0105 20:53:18.728500 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-d68tn" Jan 05 20:53:18 crc kubenswrapper[4754]: I0105 20:53:18.787486 4754 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-d68tn" Jan 05 20:53:19 crc kubenswrapper[4754]: I0105 20:53:19.824022 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-d68tn" Jan 05 20:53:19 crc kubenswrapper[4754]: I0105 20:53:19.896608 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d68tn"] Jan 05 20:53:21 crc kubenswrapper[4754]: I0105 20:53:21.753542 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-d68tn" podUID="cd386f91-6c6f-44cf-b1b2-fbf33d1c0928" containerName="registry-server" containerID="cri-o://c1cc0e9d47970ddf20d038e4d524f5e1174abe98963efd0af73ed8b5a1919181" gracePeriod=2 Jan 05 20:53:22 crc kubenswrapper[4754]: I0105 20:53:22.371028 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d68tn" Jan 05 20:53:22 crc kubenswrapper[4754]: I0105 20:53:22.486927 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd386f91-6c6f-44cf-b1b2-fbf33d1c0928-catalog-content\") pod \"cd386f91-6c6f-44cf-b1b2-fbf33d1c0928\" (UID: \"cd386f91-6c6f-44cf-b1b2-fbf33d1c0928\") " Jan 05 20:53:22 crc kubenswrapper[4754]: I0105 20:53:22.492142 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd386f91-6c6f-44cf-b1b2-fbf33d1c0928-utilities\") pod \"cd386f91-6c6f-44cf-b1b2-fbf33d1c0928\" (UID: \"cd386f91-6c6f-44cf-b1b2-fbf33d1c0928\") " Jan 05 20:53:22 crc kubenswrapper[4754]: I0105 20:53:22.492202 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhh59\" (UniqueName: 
\"kubernetes.io/projected/cd386f91-6c6f-44cf-b1b2-fbf33d1c0928-kube-api-access-qhh59\") pod \"cd386f91-6c6f-44cf-b1b2-fbf33d1c0928\" (UID: \"cd386f91-6c6f-44cf-b1b2-fbf33d1c0928\") " Jan 05 20:53:22 crc kubenswrapper[4754]: I0105 20:53:22.493063 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd386f91-6c6f-44cf-b1b2-fbf33d1c0928-utilities" (OuterVolumeSpecName: "utilities") pod "cd386f91-6c6f-44cf-b1b2-fbf33d1c0928" (UID: "cd386f91-6c6f-44cf-b1b2-fbf33d1c0928"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:53:22 crc kubenswrapper[4754]: I0105 20:53:22.493572 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd386f91-6c6f-44cf-b1b2-fbf33d1c0928-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 20:53:22 crc kubenswrapper[4754]: I0105 20:53:22.501939 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd386f91-6c6f-44cf-b1b2-fbf33d1c0928-kube-api-access-qhh59" (OuterVolumeSpecName: "kube-api-access-qhh59") pod "cd386f91-6c6f-44cf-b1b2-fbf33d1c0928" (UID: "cd386f91-6c6f-44cf-b1b2-fbf33d1c0928"). InnerVolumeSpecName "kube-api-access-qhh59". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:53:22 crc kubenswrapper[4754]: I0105 20:53:22.535719 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd386f91-6c6f-44cf-b1b2-fbf33d1c0928-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cd386f91-6c6f-44cf-b1b2-fbf33d1c0928" (UID: "cd386f91-6c6f-44cf-b1b2-fbf33d1c0928"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:53:22 crc kubenswrapper[4754]: I0105 20:53:22.596030 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhh59\" (UniqueName: \"kubernetes.io/projected/cd386f91-6c6f-44cf-b1b2-fbf33d1c0928-kube-api-access-qhh59\") on node \"crc\" DevicePath \"\"" Jan 05 20:53:22 crc kubenswrapper[4754]: I0105 20:53:22.596075 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd386f91-6c6f-44cf-b1b2-fbf33d1c0928-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 20:53:22 crc kubenswrapper[4754]: I0105 20:53:22.774459 4754 generic.go:334] "Generic (PLEG): container finished" podID="cd386f91-6c6f-44cf-b1b2-fbf33d1c0928" containerID="c1cc0e9d47970ddf20d038e4d524f5e1174abe98963efd0af73ed8b5a1919181" exitCode=0 Jan 05 20:53:22 crc kubenswrapper[4754]: I0105 20:53:22.774532 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d68tn" event={"ID":"cd386f91-6c6f-44cf-b1b2-fbf33d1c0928","Type":"ContainerDied","Data":"c1cc0e9d47970ddf20d038e4d524f5e1174abe98963efd0af73ed8b5a1919181"} Jan 05 20:53:22 crc kubenswrapper[4754]: I0105 20:53:22.774580 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d68tn" event={"ID":"cd386f91-6c6f-44cf-b1b2-fbf33d1c0928","Type":"ContainerDied","Data":"a15ce167a83217e0ed8cab5aefb776277497fe35c168c4de669b49837a24f91d"} Jan 05 20:53:22 crc kubenswrapper[4754]: I0105 20:53:22.774638 4754 scope.go:117] "RemoveContainer" containerID="c1cc0e9d47970ddf20d038e4d524f5e1174abe98963efd0af73ed8b5a1919181" Jan 05 20:53:22 crc kubenswrapper[4754]: I0105 20:53:22.774703 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d68tn" Jan 05 20:53:22 crc kubenswrapper[4754]: I0105 20:53:22.810800 4754 scope.go:117] "RemoveContainer" containerID="bd3065d09cf2e9850551e4c66f99c41f12e7f13bc33f03f2acbbbe520d55d53b" Jan 05 20:53:22 crc kubenswrapper[4754]: I0105 20:53:22.841410 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d68tn"] Jan 05 20:53:22 crc kubenswrapper[4754]: I0105 20:53:22.855725 4754 scope.go:117] "RemoveContainer" containerID="31020518965edaabcbd017f83d048da4028e64b7df2fca40502beefb8df7a185" Jan 05 20:53:22 crc kubenswrapper[4754]: I0105 20:53:22.863668 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-d68tn"] Jan 05 20:53:22 crc kubenswrapper[4754]: I0105 20:53:22.933582 4754 scope.go:117] "RemoveContainer" containerID="c1cc0e9d47970ddf20d038e4d524f5e1174abe98963efd0af73ed8b5a1919181" Jan 05 20:53:22 crc kubenswrapper[4754]: E0105 20:53:22.937666 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1cc0e9d47970ddf20d038e4d524f5e1174abe98963efd0af73ed8b5a1919181\": container with ID starting with c1cc0e9d47970ddf20d038e4d524f5e1174abe98963efd0af73ed8b5a1919181 not found: ID does not exist" containerID="c1cc0e9d47970ddf20d038e4d524f5e1174abe98963efd0af73ed8b5a1919181" Jan 05 20:53:22 crc kubenswrapper[4754]: I0105 20:53:22.937726 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1cc0e9d47970ddf20d038e4d524f5e1174abe98963efd0af73ed8b5a1919181"} err="failed to get container status \"c1cc0e9d47970ddf20d038e4d524f5e1174abe98963efd0af73ed8b5a1919181\": rpc error: code = NotFound desc = could not find container \"c1cc0e9d47970ddf20d038e4d524f5e1174abe98963efd0af73ed8b5a1919181\": container with ID starting with c1cc0e9d47970ddf20d038e4d524f5e1174abe98963efd0af73ed8b5a1919181 not found: 
ID does not exist" Jan 05 20:53:22 crc kubenswrapper[4754]: I0105 20:53:22.937761 4754 scope.go:117] "RemoveContainer" containerID="bd3065d09cf2e9850551e4c66f99c41f12e7f13bc33f03f2acbbbe520d55d53b" Jan 05 20:53:22 crc kubenswrapper[4754]: E0105 20:53:22.938326 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd3065d09cf2e9850551e4c66f99c41f12e7f13bc33f03f2acbbbe520d55d53b\": container with ID starting with bd3065d09cf2e9850551e4c66f99c41f12e7f13bc33f03f2acbbbe520d55d53b not found: ID does not exist" containerID="bd3065d09cf2e9850551e4c66f99c41f12e7f13bc33f03f2acbbbe520d55d53b" Jan 05 20:53:22 crc kubenswrapper[4754]: I0105 20:53:22.938381 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd3065d09cf2e9850551e4c66f99c41f12e7f13bc33f03f2acbbbe520d55d53b"} err="failed to get container status \"bd3065d09cf2e9850551e4c66f99c41f12e7f13bc33f03f2acbbbe520d55d53b\": rpc error: code = NotFound desc = could not find container \"bd3065d09cf2e9850551e4c66f99c41f12e7f13bc33f03f2acbbbe520d55d53b\": container with ID starting with bd3065d09cf2e9850551e4c66f99c41f12e7f13bc33f03f2acbbbe520d55d53b not found: ID does not exist" Jan 05 20:53:22 crc kubenswrapper[4754]: I0105 20:53:22.938415 4754 scope.go:117] "RemoveContainer" containerID="31020518965edaabcbd017f83d048da4028e64b7df2fca40502beefb8df7a185" Jan 05 20:53:22 crc kubenswrapper[4754]: E0105 20:53:22.942251 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31020518965edaabcbd017f83d048da4028e64b7df2fca40502beefb8df7a185\": container with ID starting with 31020518965edaabcbd017f83d048da4028e64b7df2fca40502beefb8df7a185 not found: ID does not exist" containerID="31020518965edaabcbd017f83d048da4028e64b7df2fca40502beefb8df7a185" Jan 05 20:53:22 crc kubenswrapper[4754]: I0105 20:53:22.942307 4754 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31020518965edaabcbd017f83d048da4028e64b7df2fca40502beefb8df7a185"} err="failed to get container status \"31020518965edaabcbd017f83d048da4028e64b7df2fca40502beefb8df7a185\": rpc error: code = NotFound desc = could not find container \"31020518965edaabcbd017f83d048da4028e64b7df2fca40502beefb8df7a185\": container with ID starting with 31020518965edaabcbd017f83d048da4028e64b7df2fca40502beefb8df7a185 not found: ID does not exist" Jan 05 20:53:23 crc kubenswrapper[4754]: I0105 20:53:23.616974 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd386f91-6c6f-44cf-b1b2-fbf33d1c0928" path="/var/lib/kubelet/pods/cd386f91-6c6f-44cf-b1b2-fbf33d1c0928/volumes" Jan 05 20:53:48 crc kubenswrapper[4754]: I0105 20:53:48.109024 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 20:53:48 crc kubenswrapper[4754]: I0105 20:53:48.110041 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 20:53:48 crc kubenswrapper[4754]: I0105 20:53:48.110103 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" Jan 05 20:53:48 crc kubenswrapper[4754]: I0105 20:53:48.111164 4754 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"526b0024b17a667d6699c7772274dcfecf339561520552eaa447d0a886490a5d"} 
pod="openshift-machine-config-operator/machine-config-daemon-pkzls" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 05 20:53:48 crc kubenswrapper[4754]: I0105 20:53:48.111256 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" containerID="cri-o://526b0024b17a667d6699c7772274dcfecf339561520552eaa447d0a886490a5d" gracePeriod=600 Jan 05 20:53:48 crc kubenswrapper[4754]: E0105 20:53:48.252477 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:53:48 crc kubenswrapper[4754]: E0105 20:53:48.308083 4754 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ce145f2_f010_4086_963c_23e68ff9e280.slice/crio-526b0024b17a667d6699c7772274dcfecf339561520552eaa447d0a886490a5d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ce145f2_f010_4086_963c_23e68ff9e280.slice/crio-conmon-526b0024b17a667d6699c7772274dcfecf339561520552eaa447d0a886490a5d.scope\": RecentStats: unable to find data in memory cache]" Jan 05 20:53:48 crc kubenswrapper[4754]: E0105 20:53:48.308082 4754 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ce145f2_f010_4086_963c_23e68ff9e280.slice/crio-526b0024b17a667d6699c7772274dcfecf339561520552eaa447d0a886490a5d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ce145f2_f010_4086_963c_23e68ff9e280.slice/crio-conmon-526b0024b17a667d6699c7772274dcfecf339561520552eaa447d0a886490a5d.scope\": RecentStats: unable to find data in memory cache]" Jan 05 20:53:49 crc kubenswrapper[4754]: I0105 20:53:49.155600 4754 generic.go:334] "Generic (PLEG): container finished" podID="1ce145f2-f010-4086-963c-23e68ff9e280" containerID="526b0024b17a667d6699c7772274dcfecf339561520552eaa447d0a886490a5d" exitCode=0 Jan 05 20:53:49 crc kubenswrapper[4754]: I0105 20:53:49.155717 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" event={"ID":"1ce145f2-f010-4086-963c-23e68ff9e280","Type":"ContainerDied","Data":"526b0024b17a667d6699c7772274dcfecf339561520552eaa447d0a886490a5d"} Jan 05 20:53:49 crc kubenswrapper[4754]: I0105 20:53:49.156129 4754 scope.go:117] "RemoveContainer" containerID="379e38a6e538bdb337f18c8539963e93491cdbc565bbafbe7ab4a0cef313f65d" Jan 05 20:53:49 crc kubenswrapper[4754]: I0105 20:53:49.157400 4754 scope.go:117] "RemoveContainer" containerID="526b0024b17a667d6699c7772274dcfecf339561520552eaa447d0a886490a5d" Jan 05 20:53:49 crc kubenswrapper[4754]: E0105 20:53:49.157989 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:54:01 crc kubenswrapper[4754]: I0105 20:54:01.589415 4754 
scope.go:117] "RemoveContainer" containerID="526b0024b17a667d6699c7772274dcfecf339561520552eaa447d0a886490a5d" Jan 05 20:54:01 crc kubenswrapper[4754]: E0105 20:54:01.590875 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:54:12 crc kubenswrapper[4754]: I0105 20:54:12.589580 4754 scope.go:117] "RemoveContainer" containerID="526b0024b17a667d6699c7772274dcfecf339561520552eaa447d0a886490a5d" Jan 05 20:54:12 crc kubenswrapper[4754]: E0105 20:54:12.590946 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:54:27 crc kubenswrapper[4754]: I0105 20:54:27.589055 4754 scope.go:117] "RemoveContainer" containerID="526b0024b17a667d6699c7772274dcfecf339561520552eaa447d0a886490a5d" Jan 05 20:54:27 crc kubenswrapper[4754]: E0105 20:54:27.589913 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:54:38 crc kubenswrapper[4754]: I0105 
20:54:38.589707 4754 scope.go:117] "RemoveContainer" containerID="526b0024b17a667d6699c7772274dcfecf339561520552eaa447d0a886490a5d" Jan 05 20:54:38 crc kubenswrapper[4754]: E0105 20:54:38.590858 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:54:51 crc kubenswrapper[4754]: I0105 20:54:51.589481 4754 scope.go:117] "RemoveContainer" containerID="526b0024b17a667d6699c7772274dcfecf339561520552eaa447d0a886490a5d" Jan 05 20:54:51 crc kubenswrapper[4754]: E0105 20:54:51.591697 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:55:05 crc kubenswrapper[4754]: I0105 20:55:05.589477 4754 scope.go:117] "RemoveContainer" containerID="526b0024b17a667d6699c7772274dcfecf339561520552eaa447d0a886490a5d" Jan 05 20:55:05 crc kubenswrapper[4754]: E0105 20:55:05.591169 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:55:16 crc 
kubenswrapper[4754]: I0105 20:55:16.588462 4754 scope.go:117] "RemoveContainer" containerID="526b0024b17a667d6699c7772274dcfecf339561520552eaa447d0a886490a5d" Jan 05 20:55:16 crc kubenswrapper[4754]: E0105 20:55:16.589408 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:55:31 crc kubenswrapper[4754]: I0105 20:55:31.600342 4754 scope.go:117] "RemoveContainer" containerID="526b0024b17a667d6699c7772274dcfecf339561520552eaa447d0a886490a5d" Jan 05 20:55:31 crc kubenswrapper[4754]: E0105 20:55:31.626649 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:55:33 crc kubenswrapper[4754]: I0105 20:55:33.708449 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wl2kv"] Jan 05 20:55:33 crc kubenswrapper[4754]: E0105 20:55:33.712145 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd386f91-6c6f-44cf-b1b2-fbf33d1c0928" containerName="registry-server" Jan 05 20:55:33 crc kubenswrapper[4754]: I0105 20:55:33.712277 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd386f91-6c6f-44cf-b1b2-fbf33d1c0928" containerName="registry-server" Jan 05 20:55:33 crc kubenswrapper[4754]: E0105 20:55:33.712502 4754 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="cd386f91-6c6f-44cf-b1b2-fbf33d1c0928" containerName="extract-utilities" Jan 05 20:55:33 crc kubenswrapper[4754]: I0105 20:55:33.712608 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd386f91-6c6f-44cf-b1b2-fbf33d1c0928" containerName="extract-utilities" Jan 05 20:55:33 crc kubenswrapper[4754]: E0105 20:55:33.712688 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd386f91-6c6f-44cf-b1b2-fbf33d1c0928" containerName="extract-content" Jan 05 20:55:33 crc kubenswrapper[4754]: I0105 20:55:33.712758 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd386f91-6c6f-44cf-b1b2-fbf33d1c0928" containerName="extract-content" Jan 05 20:55:33 crc kubenswrapper[4754]: I0105 20:55:33.723018 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd386f91-6c6f-44cf-b1b2-fbf33d1c0928" containerName="registry-server" Jan 05 20:55:33 crc kubenswrapper[4754]: I0105 20:55:33.725849 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wl2kv" Jan 05 20:55:33 crc kubenswrapper[4754]: I0105 20:55:33.741752 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wl2kv"] Jan 05 20:55:33 crc kubenswrapper[4754]: I0105 20:55:33.876798 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnt2m\" (UniqueName: \"kubernetes.io/projected/9e487197-b547-4a89-be37-2401a48bf932-kube-api-access-cnt2m\") pod \"certified-operators-wl2kv\" (UID: \"9e487197-b547-4a89-be37-2401a48bf932\") " pod="openshift-marketplace/certified-operators-wl2kv" Jan 05 20:55:33 crc kubenswrapper[4754]: I0105 20:55:33.877221 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e487197-b547-4a89-be37-2401a48bf932-catalog-content\") pod \"certified-operators-wl2kv\" (UID: \"9e487197-b547-4a89-be37-2401a48bf932\") " pod="openshift-marketplace/certified-operators-wl2kv" Jan 05 20:55:33 crc kubenswrapper[4754]: I0105 20:55:33.878255 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e487197-b547-4a89-be37-2401a48bf932-utilities\") pod \"certified-operators-wl2kv\" (UID: \"9e487197-b547-4a89-be37-2401a48bf932\") " pod="openshift-marketplace/certified-operators-wl2kv" Jan 05 20:55:33 crc kubenswrapper[4754]: I0105 20:55:33.988785 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e487197-b547-4a89-be37-2401a48bf932-utilities\") pod \"certified-operators-wl2kv\" (UID: \"9e487197-b547-4a89-be37-2401a48bf932\") " pod="openshift-marketplace/certified-operators-wl2kv" Jan 05 20:55:33 crc kubenswrapper[4754]: I0105 20:55:33.989275 4754 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e487197-b547-4a89-be37-2401a48bf932-utilities\") pod \"certified-operators-wl2kv\" (UID: \"9e487197-b547-4a89-be37-2401a48bf932\") " pod="openshift-marketplace/certified-operators-wl2kv" Jan 05 20:55:33 crc kubenswrapper[4754]: I0105 20:55:33.989554 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnt2m\" (UniqueName: \"kubernetes.io/projected/9e487197-b547-4a89-be37-2401a48bf932-kube-api-access-cnt2m\") pod \"certified-operators-wl2kv\" (UID: \"9e487197-b547-4a89-be37-2401a48bf932\") " pod="openshift-marketplace/certified-operators-wl2kv" Jan 05 20:55:33 crc kubenswrapper[4754]: I0105 20:55:33.989580 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e487197-b547-4a89-be37-2401a48bf932-catalog-content\") pod \"certified-operators-wl2kv\" (UID: \"9e487197-b547-4a89-be37-2401a48bf932\") " pod="openshift-marketplace/certified-operators-wl2kv" Jan 05 20:55:33 crc kubenswrapper[4754]: I0105 20:55:33.990170 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e487197-b547-4a89-be37-2401a48bf932-catalog-content\") pod \"certified-operators-wl2kv\" (UID: \"9e487197-b547-4a89-be37-2401a48bf932\") " pod="openshift-marketplace/certified-operators-wl2kv" Jan 05 20:55:34 crc kubenswrapper[4754]: I0105 20:55:34.016731 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnt2m\" (UniqueName: \"kubernetes.io/projected/9e487197-b547-4a89-be37-2401a48bf932-kube-api-access-cnt2m\") pod \"certified-operators-wl2kv\" (UID: \"9e487197-b547-4a89-be37-2401a48bf932\") " pod="openshift-marketplace/certified-operators-wl2kv" Jan 05 20:55:34 crc kubenswrapper[4754]: I0105 20:55:34.082653 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wl2kv" Jan 05 20:55:34 crc kubenswrapper[4754]: I0105 20:55:34.727446 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wl2kv"] Jan 05 20:55:35 crc kubenswrapper[4754]: I0105 20:55:35.699920 4754 generic.go:334] "Generic (PLEG): container finished" podID="9e487197-b547-4a89-be37-2401a48bf932" containerID="2936ed40b95d9b7a42d9714a508c73987237506f809f1117a95da7b4d4de70f0" exitCode=0 Jan 05 20:55:35 crc kubenswrapper[4754]: I0105 20:55:35.700594 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wl2kv" event={"ID":"9e487197-b547-4a89-be37-2401a48bf932","Type":"ContainerDied","Data":"2936ed40b95d9b7a42d9714a508c73987237506f809f1117a95da7b4d4de70f0"} Jan 05 20:55:35 crc kubenswrapper[4754]: I0105 20:55:35.700627 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wl2kv" event={"ID":"9e487197-b547-4a89-be37-2401a48bf932","Type":"ContainerStarted","Data":"f6568249130e8c039b667271f3ba54f99f33a7c51f125bda634b4ebeb7126141"} Jan 05 20:55:36 crc kubenswrapper[4754]: I0105 20:55:36.712647 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wl2kv" event={"ID":"9e487197-b547-4a89-be37-2401a48bf932","Type":"ContainerStarted","Data":"b40cd311219a7f8bf4566565a28d000cec35ef89170c6bad7181b2b0a2d6b24f"} Jan 05 20:55:38 crc kubenswrapper[4754]: I0105 20:55:38.745543 4754 generic.go:334] "Generic (PLEG): container finished" podID="9e487197-b547-4a89-be37-2401a48bf932" containerID="b40cd311219a7f8bf4566565a28d000cec35ef89170c6bad7181b2b0a2d6b24f" exitCode=0 Jan 05 20:55:38 crc kubenswrapper[4754]: I0105 20:55:38.745667 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wl2kv" 
event={"ID":"9e487197-b547-4a89-be37-2401a48bf932","Type":"ContainerDied","Data":"b40cd311219a7f8bf4566565a28d000cec35ef89170c6bad7181b2b0a2d6b24f"} Jan 05 20:55:39 crc kubenswrapper[4754]: I0105 20:55:39.760912 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wl2kv" event={"ID":"9e487197-b547-4a89-be37-2401a48bf932","Type":"ContainerStarted","Data":"24d08f2391ed98957c6d3adc461be9830d35cc1dcd2d035c2e97c754d3c2aa1e"} Jan 05 20:55:39 crc kubenswrapper[4754]: I0105 20:55:39.790555 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wl2kv" podStartSLOduration=3.332678603 podStartE2EDuration="6.79053772s" podCreationTimestamp="2026-01-05 20:55:33 +0000 UTC" firstStartedPulling="2026-01-05 20:55:35.704004214 +0000 UTC m=+3022.413188098" lastFinishedPulling="2026-01-05 20:55:39.161863341 +0000 UTC m=+3025.871047215" observedRunningTime="2026-01-05 20:55:39.787681925 +0000 UTC m=+3026.496865799" watchObservedRunningTime="2026-01-05 20:55:39.79053772 +0000 UTC m=+3026.499721594" Jan 05 20:55:42 crc kubenswrapper[4754]: I0105 20:55:42.589681 4754 scope.go:117] "RemoveContainer" containerID="526b0024b17a667d6699c7772274dcfecf339561520552eaa447d0a886490a5d" Jan 05 20:55:42 crc kubenswrapper[4754]: E0105 20:55:42.590650 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:55:44 crc kubenswrapper[4754]: I0105 20:55:44.082886 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wl2kv" Jan 05 20:55:44 crc 
kubenswrapper[4754]: I0105 20:55:44.082957 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wl2kv" Jan 05 20:55:44 crc kubenswrapper[4754]: I0105 20:55:44.139612 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wl2kv" Jan 05 20:55:44 crc kubenswrapper[4754]: I0105 20:55:44.873994 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wl2kv" Jan 05 20:55:44 crc kubenswrapper[4754]: I0105 20:55:44.929520 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wl2kv"] Jan 05 20:55:46 crc kubenswrapper[4754]: I0105 20:55:46.830257 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wl2kv" podUID="9e487197-b547-4a89-be37-2401a48bf932" containerName="registry-server" containerID="cri-o://24d08f2391ed98957c6d3adc461be9830d35cc1dcd2d035c2e97c754d3c2aa1e" gracePeriod=2 Jan 05 20:55:47 crc kubenswrapper[4754]: I0105 20:55:47.472666 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wl2kv" Jan 05 20:55:47 crc kubenswrapper[4754]: I0105 20:55:47.563790 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e487197-b547-4a89-be37-2401a48bf932-utilities\") pod \"9e487197-b547-4a89-be37-2401a48bf932\" (UID: \"9e487197-b547-4a89-be37-2401a48bf932\") " Jan 05 20:55:47 crc kubenswrapper[4754]: I0105 20:55:47.564067 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e487197-b547-4a89-be37-2401a48bf932-catalog-content\") pod \"9e487197-b547-4a89-be37-2401a48bf932\" (UID: \"9e487197-b547-4a89-be37-2401a48bf932\") " Jan 05 20:55:47 crc kubenswrapper[4754]: I0105 20:55:47.564271 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnt2m\" (UniqueName: \"kubernetes.io/projected/9e487197-b547-4a89-be37-2401a48bf932-kube-api-access-cnt2m\") pod \"9e487197-b547-4a89-be37-2401a48bf932\" (UID: \"9e487197-b547-4a89-be37-2401a48bf932\") " Jan 05 20:55:47 crc kubenswrapper[4754]: I0105 20:55:47.565709 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e487197-b547-4a89-be37-2401a48bf932-utilities" (OuterVolumeSpecName: "utilities") pod "9e487197-b547-4a89-be37-2401a48bf932" (UID: "9e487197-b547-4a89-be37-2401a48bf932"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:55:47 crc kubenswrapper[4754]: I0105 20:55:47.574267 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e487197-b547-4a89-be37-2401a48bf932-kube-api-access-cnt2m" (OuterVolumeSpecName: "kube-api-access-cnt2m") pod "9e487197-b547-4a89-be37-2401a48bf932" (UID: "9e487197-b547-4a89-be37-2401a48bf932"). InnerVolumeSpecName "kube-api-access-cnt2m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:55:47 crc kubenswrapper[4754]: I0105 20:55:47.631842 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e487197-b547-4a89-be37-2401a48bf932-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9e487197-b547-4a89-be37-2401a48bf932" (UID: "9e487197-b547-4a89-be37-2401a48bf932"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:55:47 crc kubenswrapper[4754]: I0105 20:55:47.668498 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e487197-b547-4a89-be37-2401a48bf932-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 20:55:47 crc kubenswrapper[4754]: I0105 20:55:47.668551 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e487197-b547-4a89-be37-2401a48bf932-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 20:55:47 crc kubenswrapper[4754]: I0105 20:55:47.668575 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnt2m\" (UniqueName: \"kubernetes.io/projected/9e487197-b547-4a89-be37-2401a48bf932-kube-api-access-cnt2m\") on node \"crc\" DevicePath \"\"" Jan 05 20:55:47 crc kubenswrapper[4754]: I0105 20:55:47.843490 4754 generic.go:334] "Generic (PLEG): container finished" podID="9e487197-b547-4a89-be37-2401a48bf932" containerID="24d08f2391ed98957c6d3adc461be9830d35cc1dcd2d035c2e97c754d3c2aa1e" exitCode=0 Jan 05 20:55:47 crc kubenswrapper[4754]: I0105 20:55:47.843560 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wl2kv" event={"ID":"9e487197-b547-4a89-be37-2401a48bf932","Type":"ContainerDied","Data":"24d08f2391ed98957c6d3adc461be9830d35cc1dcd2d035c2e97c754d3c2aa1e"} Jan 05 20:55:47 crc kubenswrapper[4754]: I0105 20:55:47.843586 4754 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wl2kv" Jan 05 20:55:47 crc kubenswrapper[4754]: I0105 20:55:47.843614 4754 scope.go:117] "RemoveContainer" containerID="24d08f2391ed98957c6d3adc461be9830d35cc1dcd2d035c2e97c754d3c2aa1e" Jan 05 20:55:47 crc kubenswrapper[4754]: I0105 20:55:47.843601 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wl2kv" event={"ID":"9e487197-b547-4a89-be37-2401a48bf932","Type":"ContainerDied","Data":"f6568249130e8c039b667271f3ba54f99f33a7c51f125bda634b4ebeb7126141"} Jan 05 20:55:47 crc kubenswrapper[4754]: I0105 20:55:47.868706 4754 scope.go:117] "RemoveContainer" containerID="b40cd311219a7f8bf4566565a28d000cec35ef89170c6bad7181b2b0a2d6b24f" Jan 05 20:55:47 crc kubenswrapper[4754]: I0105 20:55:47.885108 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wl2kv"] Jan 05 20:55:47 crc kubenswrapper[4754]: I0105 20:55:47.903936 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wl2kv"] Jan 05 20:55:47 crc kubenswrapper[4754]: I0105 20:55:47.909883 4754 scope.go:117] "RemoveContainer" containerID="2936ed40b95d9b7a42d9714a508c73987237506f809f1117a95da7b4d4de70f0" Jan 05 20:55:47 crc kubenswrapper[4754]: I0105 20:55:47.955485 4754 scope.go:117] "RemoveContainer" containerID="24d08f2391ed98957c6d3adc461be9830d35cc1dcd2d035c2e97c754d3c2aa1e" Jan 05 20:55:47 crc kubenswrapper[4754]: E0105 20:55:47.955994 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24d08f2391ed98957c6d3adc461be9830d35cc1dcd2d035c2e97c754d3c2aa1e\": container with ID starting with 24d08f2391ed98957c6d3adc461be9830d35cc1dcd2d035c2e97c754d3c2aa1e not found: ID does not exist" containerID="24d08f2391ed98957c6d3adc461be9830d35cc1dcd2d035c2e97c754d3c2aa1e" Jan 05 20:55:47 crc kubenswrapper[4754]: I0105 20:55:47.956055 
4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24d08f2391ed98957c6d3adc461be9830d35cc1dcd2d035c2e97c754d3c2aa1e"} err="failed to get container status \"24d08f2391ed98957c6d3adc461be9830d35cc1dcd2d035c2e97c754d3c2aa1e\": rpc error: code = NotFound desc = could not find container \"24d08f2391ed98957c6d3adc461be9830d35cc1dcd2d035c2e97c754d3c2aa1e\": container with ID starting with 24d08f2391ed98957c6d3adc461be9830d35cc1dcd2d035c2e97c754d3c2aa1e not found: ID does not exist" Jan 05 20:55:47 crc kubenswrapper[4754]: I0105 20:55:47.956086 4754 scope.go:117] "RemoveContainer" containerID="b40cd311219a7f8bf4566565a28d000cec35ef89170c6bad7181b2b0a2d6b24f" Jan 05 20:55:47 crc kubenswrapper[4754]: E0105 20:55:47.956421 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b40cd311219a7f8bf4566565a28d000cec35ef89170c6bad7181b2b0a2d6b24f\": container with ID starting with b40cd311219a7f8bf4566565a28d000cec35ef89170c6bad7181b2b0a2d6b24f not found: ID does not exist" containerID="b40cd311219a7f8bf4566565a28d000cec35ef89170c6bad7181b2b0a2d6b24f" Jan 05 20:55:47 crc kubenswrapper[4754]: I0105 20:55:47.956442 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b40cd311219a7f8bf4566565a28d000cec35ef89170c6bad7181b2b0a2d6b24f"} err="failed to get container status \"b40cd311219a7f8bf4566565a28d000cec35ef89170c6bad7181b2b0a2d6b24f\": rpc error: code = NotFound desc = could not find container \"b40cd311219a7f8bf4566565a28d000cec35ef89170c6bad7181b2b0a2d6b24f\": container with ID starting with b40cd311219a7f8bf4566565a28d000cec35ef89170c6bad7181b2b0a2d6b24f not found: ID does not exist" Jan 05 20:55:47 crc kubenswrapper[4754]: I0105 20:55:47.956472 4754 scope.go:117] "RemoveContainer" containerID="2936ed40b95d9b7a42d9714a508c73987237506f809f1117a95da7b4d4de70f0" Jan 05 20:55:47 crc kubenswrapper[4754]: E0105 
20:55:47.956718 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2936ed40b95d9b7a42d9714a508c73987237506f809f1117a95da7b4d4de70f0\": container with ID starting with 2936ed40b95d9b7a42d9714a508c73987237506f809f1117a95da7b4d4de70f0 not found: ID does not exist" containerID="2936ed40b95d9b7a42d9714a508c73987237506f809f1117a95da7b4d4de70f0" Jan 05 20:55:47 crc kubenswrapper[4754]: I0105 20:55:47.956735 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2936ed40b95d9b7a42d9714a508c73987237506f809f1117a95da7b4d4de70f0"} err="failed to get container status \"2936ed40b95d9b7a42d9714a508c73987237506f809f1117a95da7b4d4de70f0\": rpc error: code = NotFound desc = could not find container \"2936ed40b95d9b7a42d9714a508c73987237506f809f1117a95da7b4d4de70f0\": container with ID starting with 2936ed40b95d9b7a42d9714a508c73987237506f809f1117a95da7b4d4de70f0 not found: ID does not exist" Jan 05 20:55:49 crc kubenswrapper[4754]: I0105 20:55:49.604475 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e487197-b547-4a89-be37-2401a48bf932" path="/var/lib/kubelet/pods/9e487197-b547-4a89-be37-2401a48bf932/volumes" Jan 05 20:55:52 crc kubenswrapper[4754]: I0105 20:55:52.917201 4754 generic.go:334] "Generic (PLEG): container finished" podID="ed63fc99-0c8f-47ce-8d5d-71b98edf3a25" containerID="e7993d2ea8dca3fa32be9ee44fc32046187638a1c10c18ca6bb60fd32efdeb38" exitCode=0 Jan 05 20:55:52 crc kubenswrapper[4754]: I0105 20:55:52.918063 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cbc6l" event={"ID":"ed63fc99-0c8f-47ce-8d5d-71b98edf3a25","Type":"ContainerDied","Data":"e7993d2ea8dca3fa32be9ee44fc32046187638a1c10c18ca6bb60fd32efdeb38"} Jan 05 20:55:54 crc kubenswrapper[4754]: I0105 20:55:54.473239 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cbc6l" Jan 05 20:55:54 crc kubenswrapper[4754]: I0105 20:55:54.551241 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ed63fc99-0c8f-47ce-8d5d-71b98edf3a25-nova-cell1-compute-config-1\") pod \"ed63fc99-0c8f-47ce-8d5d-71b98edf3a25\" (UID: \"ed63fc99-0c8f-47ce-8d5d-71b98edf3a25\") " Jan 05 20:55:54 crc kubenswrapper[4754]: I0105 20:55:54.551401 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed63fc99-0c8f-47ce-8d5d-71b98edf3a25-inventory\") pod \"ed63fc99-0c8f-47ce-8d5d-71b98edf3a25\" (UID: \"ed63fc99-0c8f-47ce-8d5d-71b98edf3a25\") " Jan 05 20:55:54 crc kubenswrapper[4754]: I0105 20:55:54.551452 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9sn4l\" (UniqueName: \"kubernetes.io/projected/ed63fc99-0c8f-47ce-8d5d-71b98edf3a25-kube-api-access-9sn4l\") pod \"ed63fc99-0c8f-47ce-8d5d-71b98edf3a25\" (UID: \"ed63fc99-0c8f-47ce-8d5d-71b98edf3a25\") " Jan 05 20:55:54 crc kubenswrapper[4754]: I0105 20:55:54.551518 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/ed63fc99-0c8f-47ce-8d5d-71b98edf3a25-nova-extra-config-0\") pod \"ed63fc99-0c8f-47ce-8d5d-71b98edf3a25\" (UID: \"ed63fc99-0c8f-47ce-8d5d-71b98edf3a25\") " Jan 05 20:55:54 crc kubenswrapper[4754]: I0105 20:55:54.551568 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ed63fc99-0c8f-47ce-8d5d-71b98edf3a25-nova-migration-ssh-key-1\") pod \"ed63fc99-0c8f-47ce-8d5d-71b98edf3a25\" (UID: \"ed63fc99-0c8f-47ce-8d5d-71b98edf3a25\") " Jan 05 20:55:54 crc kubenswrapper[4754]: I0105 20:55:54.551585 4754 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ed63fc99-0c8f-47ce-8d5d-71b98edf3a25-nova-migration-ssh-key-0\") pod \"ed63fc99-0c8f-47ce-8d5d-71b98edf3a25\" (UID: \"ed63fc99-0c8f-47ce-8d5d-71b98edf3a25\") " Jan 05 20:55:54 crc kubenswrapper[4754]: I0105 20:55:54.551654 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed63fc99-0c8f-47ce-8d5d-71b98edf3a25-nova-combined-ca-bundle\") pod \"ed63fc99-0c8f-47ce-8d5d-71b98edf3a25\" (UID: \"ed63fc99-0c8f-47ce-8d5d-71b98edf3a25\") " Jan 05 20:55:54 crc kubenswrapper[4754]: I0105 20:55:54.551732 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ed63fc99-0c8f-47ce-8d5d-71b98edf3a25-ssh-key\") pod \"ed63fc99-0c8f-47ce-8d5d-71b98edf3a25\" (UID: \"ed63fc99-0c8f-47ce-8d5d-71b98edf3a25\") " Jan 05 20:55:54 crc kubenswrapper[4754]: I0105 20:55:54.551799 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ed63fc99-0c8f-47ce-8d5d-71b98edf3a25-nova-cell1-compute-config-0\") pod \"ed63fc99-0c8f-47ce-8d5d-71b98edf3a25\" (UID: \"ed63fc99-0c8f-47ce-8d5d-71b98edf3a25\") " Jan 05 20:55:54 crc kubenswrapper[4754]: I0105 20:55:54.599427 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed63fc99-0c8f-47ce-8d5d-71b98edf3a25-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "ed63fc99-0c8f-47ce-8d5d-71b98edf3a25" (UID: "ed63fc99-0c8f-47ce-8d5d-71b98edf3a25"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:55:54 crc kubenswrapper[4754]: I0105 20:55:54.599438 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed63fc99-0c8f-47ce-8d5d-71b98edf3a25-kube-api-access-9sn4l" (OuterVolumeSpecName: "kube-api-access-9sn4l") pod "ed63fc99-0c8f-47ce-8d5d-71b98edf3a25" (UID: "ed63fc99-0c8f-47ce-8d5d-71b98edf3a25"). InnerVolumeSpecName "kube-api-access-9sn4l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:55:54 crc kubenswrapper[4754]: I0105 20:55:54.603396 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed63fc99-0c8f-47ce-8d5d-71b98edf3a25-inventory" (OuterVolumeSpecName: "inventory") pod "ed63fc99-0c8f-47ce-8d5d-71b98edf3a25" (UID: "ed63fc99-0c8f-47ce-8d5d-71b98edf3a25"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:55:54 crc kubenswrapper[4754]: I0105 20:55:54.603706 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed63fc99-0c8f-47ce-8d5d-71b98edf3a25-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "ed63fc99-0c8f-47ce-8d5d-71b98edf3a25" (UID: "ed63fc99-0c8f-47ce-8d5d-71b98edf3a25"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:55:54 crc kubenswrapper[4754]: I0105 20:55:54.605986 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed63fc99-0c8f-47ce-8d5d-71b98edf3a25-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "ed63fc99-0c8f-47ce-8d5d-71b98edf3a25" (UID: "ed63fc99-0c8f-47ce-8d5d-71b98edf3a25"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:55:54 crc kubenswrapper[4754]: I0105 20:55:54.611083 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed63fc99-0c8f-47ce-8d5d-71b98edf3a25-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "ed63fc99-0c8f-47ce-8d5d-71b98edf3a25" (UID: "ed63fc99-0c8f-47ce-8d5d-71b98edf3a25"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 20:55:54 crc kubenswrapper[4754]: I0105 20:55:54.618662 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed63fc99-0c8f-47ce-8d5d-71b98edf3a25-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "ed63fc99-0c8f-47ce-8d5d-71b98edf3a25" (UID: "ed63fc99-0c8f-47ce-8d5d-71b98edf3a25"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:55:54 crc kubenswrapper[4754]: I0105 20:55:54.620591 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed63fc99-0c8f-47ce-8d5d-71b98edf3a25-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "ed63fc99-0c8f-47ce-8d5d-71b98edf3a25" (UID: "ed63fc99-0c8f-47ce-8d5d-71b98edf3a25"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:55:54 crc kubenswrapper[4754]: I0105 20:55:54.627116 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed63fc99-0c8f-47ce-8d5d-71b98edf3a25-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ed63fc99-0c8f-47ce-8d5d-71b98edf3a25" (UID: "ed63fc99-0c8f-47ce-8d5d-71b98edf3a25"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:55:54 crc kubenswrapper[4754]: I0105 20:55:54.654822 4754 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed63fc99-0c8f-47ce-8d5d-71b98edf3a25-inventory\") on node \"crc\" DevicePath \"\"" Jan 05 20:55:54 crc kubenswrapper[4754]: I0105 20:55:54.655198 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9sn4l\" (UniqueName: \"kubernetes.io/projected/ed63fc99-0c8f-47ce-8d5d-71b98edf3a25-kube-api-access-9sn4l\") on node \"crc\" DevicePath \"\"" Jan 05 20:55:54 crc kubenswrapper[4754]: I0105 20:55:54.655217 4754 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/ed63fc99-0c8f-47ce-8d5d-71b98edf3a25-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 05 20:55:54 crc kubenswrapper[4754]: I0105 20:55:54.655229 4754 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ed63fc99-0c8f-47ce-8d5d-71b98edf3a25-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 05 20:55:54 crc kubenswrapper[4754]: I0105 20:55:54.655241 4754 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ed63fc99-0c8f-47ce-8d5d-71b98edf3a25-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 05 20:55:54 crc kubenswrapper[4754]: I0105 20:55:54.655253 4754 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed63fc99-0c8f-47ce-8d5d-71b98edf3a25-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:55:54 crc kubenswrapper[4754]: I0105 20:55:54.655264 4754 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ed63fc99-0c8f-47ce-8d5d-71b98edf3a25-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 05 20:55:54 crc 
kubenswrapper[4754]: I0105 20:55:54.655277 4754 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ed63fc99-0c8f-47ce-8d5d-71b98edf3a25-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 05 20:55:54 crc kubenswrapper[4754]: I0105 20:55:54.655348 4754 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ed63fc99-0c8f-47ce-8d5d-71b98edf3a25-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 05 20:55:54 crc kubenswrapper[4754]: I0105 20:55:54.937263 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cbc6l" event={"ID":"ed63fc99-0c8f-47ce-8d5d-71b98edf3a25","Type":"ContainerDied","Data":"e6848f705077fbde19836a02138d14420674d6261f7239dcd65b68e8e680cd2c"} Jan 05 20:55:54 crc kubenswrapper[4754]: I0105 20:55:54.937317 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6848f705077fbde19836a02138d14420674d6261f7239dcd65b68e8e680cd2c" Jan 05 20:55:54 crc kubenswrapper[4754]: I0105 20:55:54.937367 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cbc6l" Jan 05 20:55:55 crc kubenswrapper[4754]: I0105 20:55:55.060903 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qtv47"] Jan 05 20:55:55 crc kubenswrapper[4754]: E0105 20:55:55.061477 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e487197-b547-4a89-be37-2401a48bf932" containerName="extract-content" Jan 05 20:55:55 crc kubenswrapper[4754]: I0105 20:55:55.061499 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e487197-b547-4a89-be37-2401a48bf932" containerName="extract-content" Jan 05 20:55:55 crc kubenswrapper[4754]: E0105 20:55:55.061526 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e487197-b547-4a89-be37-2401a48bf932" containerName="registry-server" Jan 05 20:55:55 crc kubenswrapper[4754]: I0105 20:55:55.061533 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e487197-b547-4a89-be37-2401a48bf932" containerName="registry-server" Jan 05 20:55:55 crc kubenswrapper[4754]: E0105 20:55:55.061547 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e487197-b547-4a89-be37-2401a48bf932" containerName="extract-utilities" Jan 05 20:55:55 crc kubenswrapper[4754]: I0105 20:55:55.061554 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e487197-b547-4a89-be37-2401a48bf932" containerName="extract-utilities" Jan 05 20:55:55 crc kubenswrapper[4754]: E0105 20:55:55.061569 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed63fc99-0c8f-47ce-8d5d-71b98edf3a25" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 05 20:55:55 crc kubenswrapper[4754]: I0105 20:55:55.061575 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed63fc99-0c8f-47ce-8d5d-71b98edf3a25" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 05 20:55:55 crc kubenswrapper[4754]: I0105 20:55:55.061816 4754 
memory_manager.go:354] "RemoveStaleState removing state" podUID="9e487197-b547-4a89-be37-2401a48bf932" containerName="registry-server" Jan 05 20:55:55 crc kubenswrapper[4754]: I0105 20:55:55.061836 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed63fc99-0c8f-47ce-8d5d-71b98edf3a25" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 05 20:55:55 crc kubenswrapper[4754]: I0105 20:55:55.062709 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qtv47" Jan 05 20:55:55 crc kubenswrapper[4754]: I0105 20:55:55.065176 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 05 20:55:55 crc kubenswrapper[4754]: I0105 20:55:55.065396 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xdt6h" Jan 05 20:55:55 crc kubenswrapper[4754]: I0105 20:55:55.065598 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 05 20:55:55 crc kubenswrapper[4754]: I0105 20:55:55.065783 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 05 20:55:55 crc kubenswrapper[4754]: I0105 20:55:55.066073 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 05 20:55:55 crc kubenswrapper[4754]: I0105 20:55:55.078241 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qtv47"] Jan 05 20:55:55 crc kubenswrapper[4754]: E0105 20:55:55.173612 4754 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded63fc99_0c8f_47ce_8d5d_71b98edf3a25.slice/crio-e6848f705077fbde19836a02138d14420674d6261f7239dcd65b68e8e680cd2c\": RecentStats: unable to find data in memory cache]" Jan 05 20:55:55 crc kubenswrapper[4754]: I0105 20:55:55.181968 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a91d7133-4aa5-4931-9311-d3d7ccdd3322-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qtv47\" (UID: \"a91d7133-4aa5-4931-9311-d3d7ccdd3322\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qtv47" Jan 05 20:55:55 crc kubenswrapper[4754]: I0105 20:55:55.182031 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a91d7133-4aa5-4931-9311-d3d7ccdd3322-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qtv47\" (UID: \"a91d7133-4aa5-4931-9311-d3d7ccdd3322\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qtv47" Jan 05 20:55:55 crc kubenswrapper[4754]: I0105 20:55:55.182092 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/a91d7133-4aa5-4931-9311-d3d7ccdd3322-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qtv47\" (UID: \"a91d7133-4aa5-4931-9311-d3d7ccdd3322\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qtv47" Jan 05 20:55:55 crc kubenswrapper[4754]: I0105 20:55:55.182242 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a91d7133-4aa5-4931-9311-d3d7ccdd3322-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qtv47\" (UID: 
\"a91d7133-4aa5-4931-9311-d3d7ccdd3322\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qtv47" Jan 05 20:55:55 crc kubenswrapper[4754]: I0105 20:55:55.182600 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/a91d7133-4aa5-4931-9311-d3d7ccdd3322-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qtv47\" (UID: \"a91d7133-4aa5-4931-9311-d3d7ccdd3322\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qtv47" Jan 05 20:55:55 crc kubenswrapper[4754]: I0105 20:55:55.183014 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfm44\" (UniqueName: \"kubernetes.io/projected/a91d7133-4aa5-4931-9311-d3d7ccdd3322-kube-api-access-kfm44\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qtv47\" (UID: \"a91d7133-4aa5-4931-9311-d3d7ccdd3322\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qtv47" Jan 05 20:55:55 crc kubenswrapper[4754]: I0105 20:55:55.183146 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/a91d7133-4aa5-4931-9311-d3d7ccdd3322-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qtv47\" (UID: \"a91d7133-4aa5-4931-9311-d3d7ccdd3322\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qtv47" Jan 05 20:55:55 crc kubenswrapper[4754]: I0105 20:55:55.285861 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfm44\" (UniqueName: \"kubernetes.io/projected/a91d7133-4aa5-4931-9311-d3d7ccdd3322-kube-api-access-kfm44\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qtv47\" (UID: \"a91d7133-4aa5-4931-9311-d3d7ccdd3322\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qtv47" Jan 05 20:55:55 crc kubenswrapper[4754]: I0105 20:55:55.285915 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/a91d7133-4aa5-4931-9311-d3d7ccdd3322-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qtv47\" (UID: \"a91d7133-4aa5-4931-9311-d3d7ccdd3322\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qtv47" Jan 05 20:55:55 crc kubenswrapper[4754]: I0105 20:55:55.285978 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a91d7133-4aa5-4931-9311-d3d7ccdd3322-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qtv47\" (UID: \"a91d7133-4aa5-4931-9311-d3d7ccdd3322\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qtv47" Jan 05 20:55:55 crc kubenswrapper[4754]: I0105 20:55:55.286003 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a91d7133-4aa5-4931-9311-d3d7ccdd3322-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qtv47\" (UID: \"a91d7133-4aa5-4931-9311-d3d7ccdd3322\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qtv47" Jan 05 20:55:55 crc kubenswrapper[4754]: I0105 20:55:55.286043 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/a91d7133-4aa5-4931-9311-d3d7ccdd3322-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qtv47\" (UID: \"a91d7133-4aa5-4931-9311-d3d7ccdd3322\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qtv47" Jan 05 20:55:55 crc kubenswrapper[4754]: I0105 20:55:55.286060 4754 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a91d7133-4aa5-4931-9311-d3d7ccdd3322-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qtv47\" (UID: \"a91d7133-4aa5-4931-9311-d3d7ccdd3322\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qtv47" Jan 05 20:55:55 crc kubenswrapper[4754]: I0105 20:55:55.286122 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/a91d7133-4aa5-4931-9311-d3d7ccdd3322-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qtv47\" (UID: \"a91d7133-4aa5-4931-9311-d3d7ccdd3322\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qtv47" Jan 05 20:55:55 crc kubenswrapper[4754]: I0105 20:55:55.292441 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a91d7133-4aa5-4931-9311-d3d7ccdd3322-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qtv47\" (UID: \"a91d7133-4aa5-4931-9311-d3d7ccdd3322\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qtv47" Jan 05 20:55:55 crc kubenswrapper[4754]: I0105 20:55:55.292772 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a91d7133-4aa5-4931-9311-d3d7ccdd3322-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qtv47\" (UID: \"a91d7133-4aa5-4931-9311-d3d7ccdd3322\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qtv47" Jan 05 20:55:55 crc kubenswrapper[4754]: I0105 20:55:55.292804 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/a91d7133-4aa5-4931-9311-d3d7ccdd3322-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qtv47\" 
(UID: \"a91d7133-4aa5-4931-9311-d3d7ccdd3322\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qtv47" Jan 05 20:55:55 crc kubenswrapper[4754]: I0105 20:55:55.292852 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/a91d7133-4aa5-4931-9311-d3d7ccdd3322-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qtv47\" (UID: \"a91d7133-4aa5-4931-9311-d3d7ccdd3322\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qtv47" Jan 05 20:55:55 crc kubenswrapper[4754]: I0105 20:55:55.292855 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a91d7133-4aa5-4931-9311-d3d7ccdd3322-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qtv47\" (UID: \"a91d7133-4aa5-4931-9311-d3d7ccdd3322\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qtv47" Jan 05 20:55:55 crc kubenswrapper[4754]: I0105 20:55:55.294828 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/a91d7133-4aa5-4931-9311-d3d7ccdd3322-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qtv47\" (UID: \"a91d7133-4aa5-4931-9311-d3d7ccdd3322\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qtv47" Jan 05 20:55:55 crc kubenswrapper[4754]: I0105 20:55:55.303842 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfm44\" (UniqueName: \"kubernetes.io/projected/a91d7133-4aa5-4931-9311-d3d7ccdd3322-kube-api-access-kfm44\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qtv47\" (UID: \"a91d7133-4aa5-4931-9311-d3d7ccdd3322\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qtv47" Jan 05 20:55:55 crc kubenswrapper[4754]: I0105 20:55:55.392120 4754 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qtv47" Jan 05 20:55:56 crc kubenswrapper[4754]: I0105 20:55:56.169799 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qtv47"] Jan 05 20:55:56 crc kubenswrapper[4754]: I0105 20:55:56.589130 4754 scope.go:117] "RemoveContainer" containerID="526b0024b17a667d6699c7772274dcfecf339561520552eaa447d0a886490a5d" Jan 05 20:55:56 crc kubenswrapper[4754]: E0105 20:55:56.590026 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:55:56 crc kubenswrapper[4754]: I0105 20:55:56.960875 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qtv47" event={"ID":"a91d7133-4aa5-4931-9311-d3d7ccdd3322","Type":"ContainerStarted","Data":"c9b759f04ef26ca7babae563bc32c2df6dfc0278832c619f2edc398b5462a9b5"} Jan 05 20:55:56 crc kubenswrapper[4754]: I0105 20:55:56.961229 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qtv47" event={"ID":"a91d7133-4aa5-4931-9311-d3d7ccdd3322","Type":"ContainerStarted","Data":"32f69021e93568a9b42cbdae9fd418d03648765b63176888555e843461b49095"} Jan 05 20:55:56 crc kubenswrapper[4754]: I0105 20:55:56.991760 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qtv47" podStartSLOduration=1.481379409 podStartE2EDuration="1.991735253s" podCreationTimestamp="2026-01-05 20:55:55 +0000 UTC" firstStartedPulling="2026-01-05 
20:55:56.172972849 +0000 UTC m=+3042.882156743" lastFinishedPulling="2026-01-05 20:55:56.683328713 +0000 UTC m=+3043.392512587" observedRunningTime="2026-01-05 20:55:56.981944724 +0000 UTC m=+3043.691128598" watchObservedRunningTime="2026-01-05 20:55:56.991735253 +0000 UTC m=+3043.700919137" Jan 05 20:56:09 crc kubenswrapper[4754]: I0105 20:56:09.588846 4754 scope.go:117] "RemoveContainer" containerID="526b0024b17a667d6699c7772274dcfecf339561520552eaa447d0a886490a5d" Jan 05 20:56:09 crc kubenswrapper[4754]: E0105 20:56:09.589986 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:56:23 crc kubenswrapper[4754]: I0105 20:56:23.590606 4754 scope.go:117] "RemoveContainer" containerID="526b0024b17a667d6699c7772274dcfecf339561520552eaa447d0a886490a5d" Jan 05 20:56:23 crc kubenswrapper[4754]: E0105 20:56:23.592358 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:56:37 crc kubenswrapper[4754]: I0105 20:56:37.591427 4754 scope.go:117] "RemoveContainer" containerID="526b0024b17a667d6699c7772274dcfecf339561520552eaa447d0a886490a5d" Jan 05 20:56:37 crc kubenswrapper[4754]: E0105 20:56:37.592741 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:56:48 crc kubenswrapper[4754]: I0105 20:56:48.590008 4754 scope.go:117] "RemoveContainer" containerID="526b0024b17a667d6699c7772274dcfecf339561520552eaa447d0a886490a5d" Jan 05 20:56:48 crc kubenswrapper[4754]: E0105 20:56:48.591042 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:57:00 crc kubenswrapper[4754]: I0105 20:57:00.589651 4754 scope.go:117] "RemoveContainer" containerID="526b0024b17a667d6699c7772274dcfecf339561520552eaa447d0a886490a5d" Jan 05 20:57:00 crc kubenswrapper[4754]: E0105 20:57:00.591007 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:57:13 crc kubenswrapper[4754]: I0105 20:57:13.608641 4754 scope.go:117] "RemoveContainer" containerID="526b0024b17a667d6699c7772274dcfecf339561520552eaa447d0a886490a5d" Jan 05 20:57:13 crc kubenswrapper[4754]: E0105 20:57:13.609841 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:57:28 crc kubenswrapper[4754]: I0105 20:57:28.589080 4754 scope.go:117] "RemoveContainer" containerID="526b0024b17a667d6699c7772274dcfecf339561520552eaa447d0a886490a5d" Jan 05 20:57:28 crc kubenswrapper[4754]: E0105 20:57:28.590154 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:57:43 crc kubenswrapper[4754]: I0105 20:57:43.608543 4754 scope.go:117] "RemoveContainer" containerID="526b0024b17a667d6699c7772274dcfecf339561520552eaa447d0a886490a5d" Jan 05 20:57:43 crc kubenswrapper[4754]: E0105 20:57:43.611261 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:57:57 crc kubenswrapper[4754]: I0105 20:57:57.588960 4754 scope.go:117] "RemoveContainer" containerID="526b0024b17a667d6699c7772274dcfecf339561520552eaa447d0a886490a5d" Jan 05 20:57:57 crc kubenswrapper[4754]: E0105 20:57:57.590050 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:58:12 crc kubenswrapper[4754]: I0105 20:58:12.589254 4754 scope.go:117] "RemoveContainer" containerID="526b0024b17a667d6699c7772274dcfecf339561520552eaa447d0a886490a5d" Jan 05 20:58:12 crc kubenswrapper[4754]: E0105 20:58:12.590847 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:58:26 crc kubenswrapper[4754]: I0105 20:58:26.588663 4754 scope.go:117] "RemoveContainer" containerID="526b0024b17a667d6699c7772274dcfecf339561520552eaa447d0a886490a5d" Jan 05 20:58:26 crc kubenswrapper[4754]: E0105 20:58:26.589735 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:58:39 crc kubenswrapper[4754]: I0105 20:58:39.595686 4754 scope.go:117] "RemoveContainer" containerID="526b0024b17a667d6699c7772274dcfecf339561520552eaa447d0a886490a5d" Jan 05 20:58:39 crc kubenswrapper[4754]: E0105 20:58:39.596443 4754 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 20:58:42 crc kubenswrapper[4754]: I0105 20:58:42.968836 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8nmlb"] Jan 05 20:58:42 crc kubenswrapper[4754]: I0105 20:58:42.974126 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8nmlb" Jan 05 20:58:43 crc kubenswrapper[4754]: I0105 20:58:43.000398 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8nmlb"] Jan 05 20:58:43 crc kubenswrapper[4754]: I0105 20:58:43.136040 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/968cd7fb-5990-4783-a52c-985023f50e56-catalog-content\") pod \"redhat-operators-8nmlb\" (UID: \"968cd7fb-5990-4783-a52c-985023f50e56\") " pod="openshift-marketplace/redhat-operators-8nmlb" Jan 05 20:58:43 crc kubenswrapper[4754]: I0105 20:58:43.136129 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/968cd7fb-5990-4783-a52c-985023f50e56-utilities\") pod \"redhat-operators-8nmlb\" (UID: \"968cd7fb-5990-4783-a52c-985023f50e56\") " pod="openshift-marketplace/redhat-operators-8nmlb" Jan 05 20:58:43 crc kubenswrapper[4754]: I0105 20:58:43.136157 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2knwd\" (UniqueName: 
\"kubernetes.io/projected/968cd7fb-5990-4783-a52c-985023f50e56-kube-api-access-2knwd\") pod \"redhat-operators-8nmlb\" (UID: \"968cd7fb-5990-4783-a52c-985023f50e56\") " pod="openshift-marketplace/redhat-operators-8nmlb" Jan 05 20:58:43 crc kubenswrapper[4754]: I0105 20:58:43.239454 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/968cd7fb-5990-4783-a52c-985023f50e56-catalog-content\") pod \"redhat-operators-8nmlb\" (UID: \"968cd7fb-5990-4783-a52c-985023f50e56\") " pod="openshift-marketplace/redhat-operators-8nmlb" Jan 05 20:58:43 crc kubenswrapper[4754]: I0105 20:58:43.239546 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/968cd7fb-5990-4783-a52c-985023f50e56-utilities\") pod \"redhat-operators-8nmlb\" (UID: \"968cd7fb-5990-4783-a52c-985023f50e56\") " pod="openshift-marketplace/redhat-operators-8nmlb" Jan 05 20:58:43 crc kubenswrapper[4754]: I0105 20:58:43.239582 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2knwd\" (UniqueName: \"kubernetes.io/projected/968cd7fb-5990-4783-a52c-985023f50e56-kube-api-access-2knwd\") pod \"redhat-operators-8nmlb\" (UID: \"968cd7fb-5990-4783-a52c-985023f50e56\") " pod="openshift-marketplace/redhat-operators-8nmlb" Jan 05 20:58:43 crc kubenswrapper[4754]: I0105 20:58:43.239990 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/968cd7fb-5990-4783-a52c-985023f50e56-catalog-content\") pod \"redhat-operators-8nmlb\" (UID: \"968cd7fb-5990-4783-a52c-985023f50e56\") " pod="openshift-marketplace/redhat-operators-8nmlb" Jan 05 20:58:43 crc kubenswrapper[4754]: I0105 20:58:43.240200 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/968cd7fb-5990-4783-a52c-985023f50e56-utilities\") pod \"redhat-operators-8nmlb\" (UID: \"968cd7fb-5990-4783-a52c-985023f50e56\") " pod="openshift-marketplace/redhat-operators-8nmlb" Jan 05 20:58:43 crc kubenswrapper[4754]: I0105 20:58:43.276864 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2knwd\" (UniqueName: \"kubernetes.io/projected/968cd7fb-5990-4783-a52c-985023f50e56-kube-api-access-2knwd\") pod \"redhat-operators-8nmlb\" (UID: \"968cd7fb-5990-4783-a52c-985023f50e56\") " pod="openshift-marketplace/redhat-operators-8nmlb" Jan 05 20:58:43 crc kubenswrapper[4754]: I0105 20:58:43.303680 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8nmlb" Jan 05 20:58:43 crc kubenswrapper[4754]: I0105 20:58:43.797622 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8nmlb"] Jan 05 20:58:44 crc kubenswrapper[4754]: I0105 20:58:44.353511 4754 generic.go:334] "Generic (PLEG): container finished" podID="968cd7fb-5990-4783-a52c-985023f50e56" containerID="5c5c46421a0ac905114017da4990622b4d24ef824e39ffd2e8bbd051b5fea7b0" exitCode=0 Jan 05 20:58:44 crc kubenswrapper[4754]: I0105 20:58:44.353552 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8nmlb" event={"ID":"968cd7fb-5990-4783-a52c-985023f50e56","Type":"ContainerDied","Data":"5c5c46421a0ac905114017da4990622b4d24ef824e39ffd2e8bbd051b5fea7b0"} Jan 05 20:58:44 crc kubenswrapper[4754]: I0105 20:58:44.353838 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8nmlb" event={"ID":"968cd7fb-5990-4783-a52c-985023f50e56","Type":"ContainerStarted","Data":"24d97d4b03fafaa19211cb8dd188b9f157ee853c18ba83e90c39869f7c33da53"} Jan 05 20:58:44 crc kubenswrapper[4754]: I0105 20:58:44.355423 4754 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Jan 05 20:58:46 crc kubenswrapper[4754]: I0105 20:58:46.379459 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8nmlb" event={"ID":"968cd7fb-5990-4783-a52c-985023f50e56","Type":"ContainerStarted","Data":"bfc2158acd261c07c48cd532994c6a84b02528cebcc83629e400ae2aa2543119"} Jan 05 20:58:49 crc kubenswrapper[4754]: I0105 20:58:49.419481 4754 generic.go:334] "Generic (PLEG): container finished" podID="968cd7fb-5990-4783-a52c-985023f50e56" containerID="bfc2158acd261c07c48cd532994c6a84b02528cebcc83629e400ae2aa2543119" exitCode=0 Jan 05 20:58:49 crc kubenswrapper[4754]: I0105 20:58:49.419590 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8nmlb" event={"ID":"968cd7fb-5990-4783-a52c-985023f50e56","Type":"ContainerDied","Data":"bfc2158acd261c07c48cd532994c6a84b02528cebcc83629e400ae2aa2543119"} Jan 05 20:58:50 crc kubenswrapper[4754]: I0105 20:58:50.673629 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8nmlb" event={"ID":"968cd7fb-5990-4783-a52c-985023f50e56","Type":"ContainerStarted","Data":"1a09ddb3387dd8b2eb6713389aae725d5ebf1cd427ca90c00155ce1627fdf5f5"} Jan 05 20:58:50 crc kubenswrapper[4754]: I0105 20:58:50.719003 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8nmlb" podStartSLOduration=3.204493817 podStartE2EDuration="8.71898058s" podCreationTimestamp="2026-01-05 20:58:42 +0000 UTC" firstStartedPulling="2026-01-05 20:58:44.355116996 +0000 UTC m=+3211.064300870" lastFinishedPulling="2026-01-05 20:58:49.869603759 +0000 UTC m=+3216.578787633" observedRunningTime="2026-01-05 20:58:50.713609568 +0000 UTC m=+3217.422793462" watchObservedRunningTime="2026-01-05 20:58:50.71898058 +0000 UTC m=+3217.428164454" Jan 05 20:58:53 crc kubenswrapper[4754]: I0105 20:58:53.304387 4754 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8nmlb" Jan 05 20:58:53 crc kubenswrapper[4754]: I0105 20:58:53.304920 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8nmlb" Jan 05 20:58:53 crc kubenswrapper[4754]: I0105 20:58:53.599638 4754 scope.go:117] "RemoveContainer" containerID="526b0024b17a667d6699c7772274dcfecf339561520552eaa447d0a886490a5d" Jan 05 20:58:54 crc kubenswrapper[4754]: I0105 20:58:54.416314 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8nmlb" podUID="968cd7fb-5990-4783-a52c-985023f50e56" containerName="registry-server" probeResult="failure" output=< Jan 05 20:58:54 crc kubenswrapper[4754]: timeout: failed to connect service ":50051" within 1s Jan 05 20:58:54 crc kubenswrapper[4754]: > Jan 05 20:58:54 crc kubenswrapper[4754]: I0105 20:58:54.723051 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" event={"ID":"1ce145f2-f010-4086-963c-23e68ff9e280","Type":"ContainerStarted","Data":"4e81ca8d8deccf633779c053716ddb749d0fb72396723c09a79ff6f4c7a247ef"} Jan 05 20:59:03 crc kubenswrapper[4754]: I0105 20:59:03.359317 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8nmlb" Jan 05 20:59:03 crc kubenswrapper[4754]: I0105 20:59:03.415407 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8nmlb" Jan 05 20:59:03 crc kubenswrapper[4754]: I0105 20:59:03.636714 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8nmlb"] Jan 05 20:59:04 crc kubenswrapper[4754]: I0105 20:59:04.878122 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8nmlb" podUID="968cd7fb-5990-4783-a52c-985023f50e56" 
containerName="registry-server" containerID="cri-o://1a09ddb3387dd8b2eb6713389aae725d5ebf1cd427ca90c00155ce1627fdf5f5" gracePeriod=2 Jan 05 20:59:05 crc kubenswrapper[4754]: I0105 20:59:05.467854 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8nmlb" Jan 05 20:59:05 crc kubenswrapper[4754]: I0105 20:59:05.660155 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2knwd\" (UniqueName: \"kubernetes.io/projected/968cd7fb-5990-4783-a52c-985023f50e56-kube-api-access-2knwd\") pod \"968cd7fb-5990-4783-a52c-985023f50e56\" (UID: \"968cd7fb-5990-4783-a52c-985023f50e56\") " Jan 05 20:59:05 crc kubenswrapper[4754]: I0105 20:59:05.660374 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/968cd7fb-5990-4783-a52c-985023f50e56-catalog-content\") pod \"968cd7fb-5990-4783-a52c-985023f50e56\" (UID: \"968cd7fb-5990-4783-a52c-985023f50e56\") " Jan 05 20:59:05 crc kubenswrapper[4754]: I0105 20:59:05.660467 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/968cd7fb-5990-4783-a52c-985023f50e56-utilities\") pod \"968cd7fb-5990-4783-a52c-985023f50e56\" (UID: \"968cd7fb-5990-4783-a52c-985023f50e56\") " Jan 05 20:59:05 crc kubenswrapper[4754]: I0105 20:59:05.661121 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/968cd7fb-5990-4783-a52c-985023f50e56-utilities" (OuterVolumeSpecName: "utilities") pod "968cd7fb-5990-4783-a52c-985023f50e56" (UID: "968cd7fb-5990-4783-a52c-985023f50e56"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:59:05 crc kubenswrapper[4754]: I0105 20:59:05.661810 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/968cd7fb-5990-4783-a52c-985023f50e56-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 20:59:05 crc kubenswrapper[4754]: I0105 20:59:05.669917 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/968cd7fb-5990-4783-a52c-985023f50e56-kube-api-access-2knwd" (OuterVolumeSpecName: "kube-api-access-2knwd") pod "968cd7fb-5990-4783-a52c-985023f50e56" (UID: "968cd7fb-5990-4783-a52c-985023f50e56"). InnerVolumeSpecName "kube-api-access-2knwd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:59:05 crc kubenswrapper[4754]: I0105 20:59:05.765506 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2knwd\" (UniqueName: \"kubernetes.io/projected/968cd7fb-5990-4783-a52c-985023f50e56-kube-api-access-2knwd\") on node \"crc\" DevicePath \"\"" Jan 05 20:59:05 crc kubenswrapper[4754]: I0105 20:59:05.814011 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/968cd7fb-5990-4783-a52c-985023f50e56-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "968cd7fb-5990-4783-a52c-985023f50e56" (UID: "968cd7fb-5990-4783-a52c-985023f50e56"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 20:59:05 crc kubenswrapper[4754]: I0105 20:59:05.868076 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/968cd7fb-5990-4783-a52c-985023f50e56-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 20:59:05 crc kubenswrapper[4754]: I0105 20:59:05.892503 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8nmlb" Jan 05 20:59:05 crc kubenswrapper[4754]: I0105 20:59:05.892562 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8nmlb" event={"ID":"968cd7fb-5990-4783-a52c-985023f50e56","Type":"ContainerDied","Data":"1a09ddb3387dd8b2eb6713389aae725d5ebf1cd427ca90c00155ce1627fdf5f5"} Jan 05 20:59:05 crc kubenswrapper[4754]: I0105 20:59:05.892442 4754 generic.go:334] "Generic (PLEG): container finished" podID="968cd7fb-5990-4783-a52c-985023f50e56" containerID="1a09ddb3387dd8b2eb6713389aae725d5ebf1cd427ca90c00155ce1627fdf5f5" exitCode=0 Jan 05 20:59:05 crc kubenswrapper[4754]: I0105 20:59:05.892605 4754 scope.go:117] "RemoveContainer" containerID="1a09ddb3387dd8b2eb6713389aae725d5ebf1cd427ca90c00155ce1627fdf5f5" Jan 05 20:59:05 crc kubenswrapper[4754]: I0105 20:59:05.892673 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8nmlb" event={"ID":"968cd7fb-5990-4783-a52c-985023f50e56","Type":"ContainerDied","Data":"24d97d4b03fafaa19211cb8dd188b9f157ee853c18ba83e90c39869f7c33da53"} Jan 05 20:59:05 crc kubenswrapper[4754]: I0105 20:59:05.954785 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8nmlb"] Jan 05 20:59:05 crc kubenswrapper[4754]: I0105 20:59:05.958207 4754 scope.go:117] "RemoveContainer" containerID="bfc2158acd261c07c48cd532994c6a84b02528cebcc83629e400ae2aa2543119" Jan 05 20:59:05 crc kubenswrapper[4754]: I0105 20:59:05.967472 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8nmlb"] Jan 05 20:59:05 crc kubenswrapper[4754]: I0105 20:59:05.995073 4754 scope.go:117] "RemoveContainer" containerID="5c5c46421a0ac905114017da4990622b4d24ef824e39ffd2e8bbd051b5fea7b0" Jan 05 20:59:06 crc kubenswrapper[4754]: I0105 20:59:06.068676 4754 scope.go:117] "RemoveContainer" 
containerID="1a09ddb3387dd8b2eb6713389aae725d5ebf1cd427ca90c00155ce1627fdf5f5" Jan 05 20:59:06 crc kubenswrapper[4754]: E0105 20:59:06.069066 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a09ddb3387dd8b2eb6713389aae725d5ebf1cd427ca90c00155ce1627fdf5f5\": container with ID starting with 1a09ddb3387dd8b2eb6713389aae725d5ebf1cd427ca90c00155ce1627fdf5f5 not found: ID does not exist" containerID="1a09ddb3387dd8b2eb6713389aae725d5ebf1cd427ca90c00155ce1627fdf5f5" Jan 05 20:59:06 crc kubenswrapper[4754]: I0105 20:59:06.069135 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a09ddb3387dd8b2eb6713389aae725d5ebf1cd427ca90c00155ce1627fdf5f5"} err="failed to get container status \"1a09ddb3387dd8b2eb6713389aae725d5ebf1cd427ca90c00155ce1627fdf5f5\": rpc error: code = NotFound desc = could not find container \"1a09ddb3387dd8b2eb6713389aae725d5ebf1cd427ca90c00155ce1627fdf5f5\": container with ID starting with 1a09ddb3387dd8b2eb6713389aae725d5ebf1cd427ca90c00155ce1627fdf5f5 not found: ID does not exist" Jan 05 20:59:06 crc kubenswrapper[4754]: I0105 20:59:06.069164 4754 scope.go:117] "RemoveContainer" containerID="bfc2158acd261c07c48cd532994c6a84b02528cebcc83629e400ae2aa2543119" Jan 05 20:59:06 crc kubenswrapper[4754]: E0105 20:59:06.069530 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfc2158acd261c07c48cd532994c6a84b02528cebcc83629e400ae2aa2543119\": container with ID starting with bfc2158acd261c07c48cd532994c6a84b02528cebcc83629e400ae2aa2543119 not found: ID does not exist" containerID="bfc2158acd261c07c48cd532994c6a84b02528cebcc83629e400ae2aa2543119" Jan 05 20:59:06 crc kubenswrapper[4754]: I0105 20:59:06.069559 4754 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"bfc2158acd261c07c48cd532994c6a84b02528cebcc83629e400ae2aa2543119"} err="failed to get container status \"bfc2158acd261c07c48cd532994c6a84b02528cebcc83629e400ae2aa2543119\": rpc error: code = NotFound desc = could not find container \"bfc2158acd261c07c48cd532994c6a84b02528cebcc83629e400ae2aa2543119\": container with ID starting with bfc2158acd261c07c48cd532994c6a84b02528cebcc83629e400ae2aa2543119 not found: ID does not exist" Jan 05 20:59:06 crc kubenswrapper[4754]: I0105 20:59:06.069574 4754 scope.go:117] "RemoveContainer" containerID="5c5c46421a0ac905114017da4990622b4d24ef824e39ffd2e8bbd051b5fea7b0" Jan 05 20:59:06 crc kubenswrapper[4754]: E0105 20:59:06.069852 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c5c46421a0ac905114017da4990622b4d24ef824e39ffd2e8bbd051b5fea7b0\": container with ID starting with 5c5c46421a0ac905114017da4990622b4d24ef824e39ffd2e8bbd051b5fea7b0 not found: ID does not exist" containerID="5c5c46421a0ac905114017da4990622b4d24ef824e39ffd2e8bbd051b5fea7b0" Jan 05 20:59:06 crc kubenswrapper[4754]: I0105 20:59:06.069903 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c5c46421a0ac905114017da4990622b4d24ef824e39ffd2e8bbd051b5fea7b0"} err="failed to get container status \"5c5c46421a0ac905114017da4990622b4d24ef824e39ffd2e8bbd051b5fea7b0\": rpc error: code = NotFound desc = could not find container \"5c5c46421a0ac905114017da4990622b4d24ef824e39ffd2e8bbd051b5fea7b0\": container with ID starting with 5c5c46421a0ac905114017da4990622b4d24ef824e39ffd2e8bbd051b5fea7b0 not found: ID does not exist" Jan 05 20:59:07 crc kubenswrapper[4754]: I0105 20:59:07.611634 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="968cd7fb-5990-4783-a52c-985023f50e56" path="/var/lib/kubelet/pods/968cd7fb-5990-4783-a52c-985023f50e56/volumes" Jan 05 20:59:09 crc kubenswrapper[4754]: I0105 
20:59:09.950977 4754 generic.go:334] "Generic (PLEG): container finished" podID="a91d7133-4aa5-4931-9311-d3d7ccdd3322" containerID="c9b759f04ef26ca7babae563bc32c2df6dfc0278832c619f2edc398b5462a9b5" exitCode=0 Jan 05 20:59:09 crc kubenswrapper[4754]: I0105 20:59:09.951043 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qtv47" event={"ID":"a91d7133-4aa5-4931-9311-d3d7ccdd3322","Type":"ContainerDied","Data":"c9b759f04ef26ca7babae563bc32c2df6dfc0278832c619f2edc398b5462a9b5"} Jan 05 20:59:11 crc kubenswrapper[4754]: I0105 20:59:11.658514 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qtv47" Jan 05 20:59:11 crc kubenswrapper[4754]: I0105 20:59:11.745522 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/a91d7133-4aa5-4931-9311-d3d7ccdd3322-ceilometer-compute-config-data-2\") pod \"a91d7133-4aa5-4931-9311-d3d7ccdd3322\" (UID: \"a91d7133-4aa5-4931-9311-d3d7ccdd3322\") " Jan 05 20:59:11 crc kubenswrapper[4754]: I0105 20:59:11.746611 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfm44\" (UniqueName: \"kubernetes.io/projected/a91d7133-4aa5-4931-9311-d3d7ccdd3322-kube-api-access-kfm44\") pod \"a91d7133-4aa5-4931-9311-d3d7ccdd3322\" (UID: \"a91d7133-4aa5-4931-9311-d3d7ccdd3322\") " Jan 05 20:59:11 crc kubenswrapper[4754]: I0105 20:59:11.746737 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/a91d7133-4aa5-4931-9311-d3d7ccdd3322-ceilometer-compute-config-data-1\") pod \"a91d7133-4aa5-4931-9311-d3d7ccdd3322\" (UID: \"a91d7133-4aa5-4931-9311-d3d7ccdd3322\") " Jan 05 20:59:11 crc kubenswrapper[4754]: I0105 20:59:11.747170 4754 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/a91d7133-4aa5-4931-9311-d3d7ccdd3322-ceilometer-compute-config-data-0\") pod \"a91d7133-4aa5-4931-9311-d3d7ccdd3322\" (UID: \"a91d7133-4aa5-4931-9311-d3d7ccdd3322\") " Jan 05 20:59:11 crc kubenswrapper[4754]: I0105 20:59:11.747560 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a91d7133-4aa5-4931-9311-d3d7ccdd3322-telemetry-combined-ca-bundle\") pod \"a91d7133-4aa5-4931-9311-d3d7ccdd3322\" (UID: \"a91d7133-4aa5-4931-9311-d3d7ccdd3322\") " Jan 05 20:59:11 crc kubenswrapper[4754]: I0105 20:59:11.747645 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a91d7133-4aa5-4931-9311-d3d7ccdd3322-inventory\") pod \"a91d7133-4aa5-4931-9311-d3d7ccdd3322\" (UID: \"a91d7133-4aa5-4931-9311-d3d7ccdd3322\") " Jan 05 20:59:11 crc kubenswrapper[4754]: I0105 20:59:11.747694 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a91d7133-4aa5-4931-9311-d3d7ccdd3322-ssh-key\") pod \"a91d7133-4aa5-4931-9311-d3d7ccdd3322\" (UID: \"a91d7133-4aa5-4931-9311-d3d7ccdd3322\") " Jan 05 20:59:11 crc kubenswrapper[4754]: I0105 20:59:11.752518 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a91d7133-4aa5-4931-9311-d3d7ccdd3322-kube-api-access-kfm44" (OuterVolumeSpecName: "kube-api-access-kfm44") pod "a91d7133-4aa5-4931-9311-d3d7ccdd3322" (UID: "a91d7133-4aa5-4931-9311-d3d7ccdd3322"). InnerVolumeSpecName "kube-api-access-kfm44". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 20:59:11 crc kubenswrapper[4754]: I0105 20:59:11.769855 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a91d7133-4aa5-4931-9311-d3d7ccdd3322-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "a91d7133-4aa5-4931-9311-d3d7ccdd3322" (UID: "a91d7133-4aa5-4931-9311-d3d7ccdd3322"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:59:11 crc kubenswrapper[4754]: I0105 20:59:11.787308 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a91d7133-4aa5-4931-9311-d3d7ccdd3322-inventory" (OuterVolumeSpecName: "inventory") pod "a91d7133-4aa5-4931-9311-d3d7ccdd3322" (UID: "a91d7133-4aa5-4931-9311-d3d7ccdd3322"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:59:11 crc kubenswrapper[4754]: I0105 20:59:11.788901 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a91d7133-4aa5-4931-9311-d3d7ccdd3322-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "a91d7133-4aa5-4931-9311-d3d7ccdd3322" (UID: "a91d7133-4aa5-4931-9311-d3d7ccdd3322"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:59:11 crc kubenswrapper[4754]: I0105 20:59:11.799632 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a91d7133-4aa5-4931-9311-d3d7ccdd3322-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a91d7133-4aa5-4931-9311-d3d7ccdd3322" (UID: "a91d7133-4aa5-4931-9311-d3d7ccdd3322"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:59:11 crc kubenswrapper[4754]: I0105 20:59:11.804829 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a91d7133-4aa5-4931-9311-d3d7ccdd3322-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "a91d7133-4aa5-4931-9311-d3d7ccdd3322" (UID: "a91d7133-4aa5-4931-9311-d3d7ccdd3322"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:59:11 crc kubenswrapper[4754]: I0105 20:59:11.808465 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a91d7133-4aa5-4931-9311-d3d7ccdd3322-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "a91d7133-4aa5-4931-9311-d3d7ccdd3322" (UID: "a91d7133-4aa5-4931-9311-d3d7ccdd3322"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 20:59:11 crc kubenswrapper[4754]: I0105 20:59:11.851139 4754 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/a91d7133-4aa5-4931-9311-d3d7ccdd3322-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 05 20:59:11 crc kubenswrapper[4754]: I0105 20:59:11.851162 4754 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a91d7133-4aa5-4931-9311-d3d7ccdd3322-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 20:59:11 crc kubenswrapper[4754]: I0105 20:59:11.851173 4754 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a91d7133-4aa5-4931-9311-d3d7ccdd3322-inventory\") on node \"crc\" DevicePath \"\"" Jan 05 20:59:11 crc kubenswrapper[4754]: I0105 20:59:11.851182 4754 reconciler_common.go:293] "Volume detached for 
volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a91d7133-4aa5-4931-9311-d3d7ccdd3322-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 05 20:59:11 crc kubenswrapper[4754]: I0105 20:59:11.851190 4754 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/a91d7133-4aa5-4931-9311-d3d7ccdd3322-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 05 20:59:11 crc kubenswrapper[4754]: I0105 20:59:11.851201 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfm44\" (UniqueName: \"kubernetes.io/projected/a91d7133-4aa5-4931-9311-d3d7ccdd3322-kube-api-access-kfm44\") on node \"crc\" DevicePath \"\"" Jan 05 20:59:11 crc kubenswrapper[4754]: I0105 20:59:11.851209 4754 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/a91d7133-4aa5-4931-9311-d3d7ccdd3322-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 05 20:59:11 crc kubenswrapper[4754]: I0105 20:59:11.996952 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qtv47" event={"ID":"a91d7133-4aa5-4931-9311-d3d7ccdd3322","Type":"ContainerDied","Data":"32f69021e93568a9b42cbdae9fd418d03648765b63176888555e843461b49095"} Jan 05 20:59:11 crc kubenswrapper[4754]: I0105 20:59:11.997005 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32f69021e93568a9b42cbdae9fd418d03648765b63176888555e843461b49095" Jan 05 20:59:11 crc kubenswrapper[4754]: I0105 20:59:11.997096 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qtv47" Jan 05 20:59:12 crc kubenswrapper[4754]: I0105 20:59:12.096212 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-bnq8j"] Jan 05 20:59:12 crc kubenswrapper[4754]: E0105 20:59:12.097001 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="968cd7fb-5990-4783-a52c-985023f50e56" containerName="extract-content" Jan 05 20:59:12 crc kubenswrapper[4754]: I0105 20:59:12.097032 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="968cd7fb-5990-4783-a52c-985023f50e56" containerName="extract-content" Jan 05 20:59:12 crc kubenswrapper[4754]: E0105 20:59:12.097053 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="968cd7fb-5990-4783-a52c-985023f50e56" containerName="registry-server" Jan 05 20:59:12 crc kubenswrapper[4754]: I0105 20:59:12.097067 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="968cd7fb-5990-4783-a52c-985023f50e56" containerName="registry-server" Jan 05 20:59:12 crc kubenswrapper[4754]: E0105 20:59:12.097135 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a91d7133-4aa5-4931-9311-d3d7ccdd3322" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 05 20:59:12 crc kubenswrapper[4754]: I0105 20:59:12.097150 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="a91d7133-4aa5-4931-9311-d3d7ccdd3322" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 05 20:59:12 crc kubenswrapper[4754]: E0105 20:59:12.097200 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="968cd7fb-5990-4783-a52c-985023f50e56" containerName="extract-utilities" Jan 05 20:59:12 crc kubenswrapper[4754]: I0105 20:59:12.097214 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="968cd7fb-5990-4783-a52c-985023f50e56" containerName="extract-utilities" Jan 05 20:59:12 crc kubenswrapper[4754]: I0105 
20:59:12.097620 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="968cd7fb-5990-4783-a52c-985023f50e56" containerName="registry-server" Jan 05 20:59:12 crc kubenswrapper[4754]: I0105 20:59:12.097662 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="a91d7133-4aa5-4931-9311-d3d7ccdd3322" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 05 20:59:12 crc kubenswrapper[4754]: I0105 20:59:12.099267 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-bnq8j" Jan 05 20:59:12 crc kubenswrapper[4754]: I0105 20:59:12.102010 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 05 20:59:12 crc kubenswrapper[4754]: I0105 20:59:12.102475 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 05 20:59:12 crc kubenswrapper[4754]: I0105 20:59:12.102911 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-ipmi-config-data" Jan 05 20:59:12 crc kubenswrapper[4754]: I0105 20:59:12.103175 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 05 20:59:12 crc kubenswrapper[4754]: I0105 20:59:12.103196 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xdt6h" Jan 05 20:59:12 crc kubenswrapper[4754]: I0105 20:59:12.110125 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-bnq8j"] Jan 05 20:59:12 crc kubenswrapper[4754]: I0105 20:59:12.260189 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: 
\"kubernetes.io/secret/e76d36bb-8a03-46b6-82b1-1bcbdcbbda25-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-bnq8j\" (UID: \"e76d36bb-8a03-46b6-82b1-1bcbdcbbda25\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-bnq8j" Jan 05 20:59:12 crc kubenswrapper[4754]: I0105 20:59:12.260693 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e76d36bb-8a03-46b6-82b1-1bcbdcbbda25-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-bnq8j\" (UID: \"e76d36bb-8a03-46b6-82b1-1bcbdcbbda25\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-bnq8j" Jan 05 20:59:12 crc kubenswrapper[4754]: I0105 20:59:12.260876 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g54cc\" (UniqueName: \"kubernetes.io/projected/e76d36bb-8a03-46b6-82b1-1bcbdcbbda25-kube-api-access-g54cc\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-bnq8j\" (UID: \"e76d36bb-8a03-46b6-82b1-1bcbdcbbda25\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-bnq8j" Jan 05 20:59:12 crc kubenswrapper[4754]: I0105 20:59:12.260987 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e76d36bb-8a03-46b6-82b1-1bcbdcbbda25-ssh-key\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-bnq8j\" (UID: \"e76d36bb-8a03-46b6-82b1-1bcbdcbbda25\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-bnq8j" Jan 05 20:59:12 crc kubenswrapper[4754]: I0105 20:59:12.261036 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: 
\"kubernetes.io/secret/e76d36bb-8a03-46b6-82b1-1bcbdcbbda25-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-bnq8j\" (UID: \"e76d36bb-8a03-46b6-82b1-1bcbdcbbda25\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-bnq8j" Jan 05 20:59:12 crc kubenswrapper[4754]: I0105 20:59:12.261108 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e76d36bb-8a03-46b6-82b1-1bcbdcbbda25-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-bnq8j\" (UID: \"e76d36bb-8a03-46b6-82b1-1bcbdcbbda25\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-bnq8j" Jan 05 20:59:12 crc kubenswrapper[4754]: I0105 20:59:12.261318 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/e76d36bb-8a03-46b6-82b1-1bcbdcbbda25-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-bnq8j\" (UID: \"e76d36bb-8a03-46b6-82b1-1bcbdcbbda25\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-bnq8j" Jan 05 20:59:12 crc kubenswrapper[4754]: I0105 20:59:12.365361 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/e76d36bb-8a03-46b6-82b1-1bcbdcbbda25-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-bnq8j\" (UID: \"e76d36bb-8a03-46b6-82b1-1bcbdcbbda25\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-bnq8j" Jan 05 20:59:12 crc kubenswrapper[4754]: I0105 20:59:12.365605 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/e76d36bb-8a03-46b6-82b1-1bcbdcbbda25-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-bnq8j\" (UID: \"e76d36bb-8a03-46b6-82b1-1bcbdcbbda25\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-bnq8j" Jan 05 20:59:12 crc kubenswrapper[4754]: I0105 20:59:12.365748 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e76d36bb-8a03-46b6-82b1-1bcbdcbbda25-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-bnq8j\" (UID: \"e76d36bb-8a03-46b6-82b1-1bcbdcbbda25\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-bnq8j" Jan 05 20:59:12 crc kubenswrapper[4754]: I0105 20:59:12.365937 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g54cc\" (UniqueName: \"kubernetes.io/projected/e76d36bb-8a03-46b6-82b1-1bcbdcbbda25-kube-api-access-g54cc\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-bnq8j\" (UID: \"e76d36bb-8a03-46b6-82b1-1bcbdcbbda25\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-bnq8j" Jan 05 20:59:12 crc kubenswrapper[4754]: I0105 20:59:12.366134 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e76d36bb-8a03-46b6-82b1-1bcbdcbbda25-ssh-key\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-bnq8j\" (UID: \"e76d36bb-8a03-46b6-82b1-1bcbdcbbda25\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-bnq8j" Jan 05 20:59:12 crc kubenswrapper[4754]: I0105 20:59:12.366219 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/e76d36bb-8a03-46b6-82b1-1bcbdcbbda25-ceilometer-ipmi-config-data-0\") pod 
\"telemetry-power-monitoring-edpm-deployment-openstack-edpm-bnq8j\" (UID: \"e76d36bb-8a03-46b6-82b1-1bcbdcbbda25\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-bnq8j" Jan 05 20:59:12 crc kubenswrapper[4754]: I0105 20:59:12.366330 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e76d36bb-8a03-46b6-82b1-1bcbdcbbda25-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-bnq8j\" (UID: \"e76d36bb-8a03-46b6-82b1-1bcbdcbbda25\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-bnq8j" Jan 05 20:59:12 crc kubenswrapper[4754]: I0105 20:59:12.370460 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/e76d36bb-8a03-46b6-82b1-1bcbdcbbda25-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-bnq8j\" (UID: \"e76d36bb-8a03-46b6-82b1-1bcbdcbbda25\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-bnq8j" Jan 05 20:59:12 crc kubenswrapper[4754]: I0105 20:59:12.370519 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/e76d36bb-8a03-46b6-82b1-1bcbdcbbda25-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-bnq8j\" (UID: \"e76d36bb-8a03-46b6-82b1-1bcbdcbbda25\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-bnq8j" Jan 05 20:59:12 crc kubenswrapper[4754]: I0105 20:59:12.371067 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/e76d36bb-8a03-46b6-82b1-1bcbdcbbda25-ceilometer-ipmi-config-data-1\") pod 
\"telemetry-power-monitoring-edpm-deployment-openstack-edpm-bnq8j\" (UID: \"e76d36bb-8a03-46b6-82b1-1bcbdcbbda25\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-bnq8j" Jan 05 20:59:12 crc kubenswrapper[4754]: I0105 20:59:12.371738 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e76d36bb-8a03-46b6-82b1-1bcbdcbbda25-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-bnq8j\" (UID: \"e76d36bb-8a03-46b6-82b1-1bcbdcbbda25\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-bnq8j" Jan 05 20:59:12 crc kubenswrapper[4754]: I0105 20:59:12.372873 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e76d36bb-8a03-46b6-82b1-1bcbdcbbda25-ssh-key\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-bnq8j\" (UID: \"e76d36bb-8a03-46b6-82b1-1bcbdcbbda25\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-bnq8j" Jan 05 20:59:12 crc kubenswrapper[4754]: I0105 20:59:12.374549 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e76d36bb-8a03-46b6-82b1-1bcbdcbbda25-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-bnq8j\" (UID: \"e76d36bb-8a03-46b6-82b1-1bcbdcbbda25\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-bnq8j" Jan 05 20:59:12 crc kubenswrapper[4754]: I0105 20:59:12.394262 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g54cc\" (UniqueName: \"kubernetes.io/projected/e76d36bb-8a03-46b6-82b1-1bcbdcbbda25-kube-api-access-g54cc\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-bnq8j\" (UID: \"e76d36bb-8a03-46b6-82b1-1bcbdcbbda25\") " 
pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-bnq8j" Jan 05 20:59:12 crc kubenswrapper[4754]: I0105 20:59:12.431234 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-bnq8j" Jan 05 20:59:13 crc kubenswrapper[4754]: I0105 20:59:13.080470 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-bnq8j"] Jan 05 20:59:13 crc kubenswrapper[4754]: I0105 20:59:13.597449 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 05 20:59:14 crc kubenswrapper[4754]: I0105 20:59:14.038710 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-bnq8j" event={"ID":"e76d36bb-8a03-46b6-82b1-1bcbdcbbda25","Type":"ContainerStarted","Data":"2d9e1316575d76a9b64ce1a60e897ed66ecfb0119ad9888937a878a9cddde3a5"} Jan 05 20:59:14 crc kubenswrapper[4754]: I0105 20:59:14.038778 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-bnq8j" event={"ID":"e76d36bb-8a03-46b6-82b1-1bcbdcbbda25","Type":"ContainerStarted","Data":"651588dc96f0edaaec80cc519f176ebdc1414c7b8f31e75c979adccf5ad0c33a"} Jan 05 20:59:14 crc kubenswrapper[4754]: I0105 20:59:14.080907 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-bnq8j" podStartSLOduration=1.582067646 podStartE2EDuration="2.080883576s" podCreationTimestamp="2026-01-05 20:59:12 +0000 UTC" firstStartedPulling="2026-01-05 20:59:13.093400605 +0000 UTC m=+3239.802584519" lastFinishedPulling="2026-01-05 20:59:13.592216525 +0000 UTC m=+3240.301400449" observedRunningTime="2026-01-05 20:59:14.065914492 +0000 UTC m=+3240.775098406" watchObservedRunningTime="2026-01-05 
20:59:14.080883576 +0000 UTC m=+3240.790067460" Jan 05 21:00:00 crc kubenswrapper[4754]: I0105 21:00:00.180615 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460780-b68fm"] Jan 05 21:00:00 crc kubenswrapper[4754]: I0105 21:00:00.183270 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460780-b68fm" Jan 05 21:00:00 crc kubenswrapper[4754]: I0105 21:00:00.186887 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 05 21:00:00 crc kubenswrapper[4754]: I0105 21:00:00.188753 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 05 21:00:00 crc kubenswrapper[4754]: I0105 21:00:00.220334 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460780-b68fm"] Jan 05 21:00:00 crc kubenswrapper[4754]: I0105 21:00:00.267020 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce3a9164-7ae2-4f53-82a3-d2792d25b5d0-config-volume\") pod \"collect-profiles-29460780-b68fm\" (UID: \"ce3a9164-7ae2-4f53-82a3-d2792d25b5d0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460780-b68fm" Jan 05 21:00:00 crc kubenswrapper[4754]: I0105 21:00:00.267261 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdxjm\" (UniqueName: \"kubernetes.io/projected/ce3a9164-7ae2-4f53-82a3-d2792d25b5d0-kube-api-access-jdxjm\") pod \"collect-profiles-29460780-b68fm\" (UID: \"ce3a9164-7ae2-4f53-82a3-d2792d25b5d0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460780-b68fm" Jan 05 21:00:00 crc kubenswrapper[4754]: I0105 
21:00:00.267350 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ce3a9164-7ae2-4f53-82a3-d2792d25b5d0-secret-volume\") pod \"collect-profiles-29460780-b68fm\" (UID: \"ce3a9164-7ae2-4f53-82a3-d2792d25b5d0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460780-b68fm" Jan 05 21:00:00 crc kubenswrapper[4754]: I0105 21:00:00.369340 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdxjm\" (UniqueName: \"kubernetes.io/projected/ce3a9164-7ae2-4f53-82a3-d2792d25b5d0-kube-api-access-jdxjm\") pod \"collect-profiles-29460780-b68fm\" (UID: \"ce3a9164-7ae2-4f53-82a3-d2792d25b5d0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460780-b68fm" Jan 05 21:00:00 crc kubenswrapper[4754]: I0105 21:00:00.369427 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ce3a9164-7ae2-4f53-82a3-d2792d25b5d0-secret-volume\") pod \"collect-profiles-29460780-b68fm\" (UID: \"ce3a9164-7ae2-4f53-82a3-d2792d25b5d0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460780-b68fm" Jan 05 21:00:00 crc kubenswrapper[4754]: I0105 21:00:00.369496 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce3a9164-7ae2-4f53-82a3-d2792d25b5d0-config-volume\") pod \"collect-profiles-29460780-b68fm\" (UID: \"ce3a9164-7ae2-4f53-82a3-d2792d25b5d0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460780-b68fm" Jan 05 21:00:00 crc kubenswrapper[4754]: I0105 21:00:00.370419 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce3a9164-7ae2-4f53-82a3-d2792d25b5d0-config-volume\") pod \"collect-profiles-29460780-b68fm\" (UID: \"ce3a9164-7ae2-4f53-82a3-d2792d25b5d0\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29460780-b68fm" Jan 05 21:00:00 crc kubenswrapper[4754]: I0105 21:00:00.377015 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ce3a9164-7ae2-4f53-82a3-d2792d25b5d0-secret-volume\") pod \"collect-profiles-29460780-b68fm\" (UID: \"ce3a9164-7ae2-4f53-82a3-d2792d25b5d0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460780-b68fm" Jan 05 21:00:00 crc kubenswrapper[4754]: I0105 21:00:00.395022 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdxjm\" (UniqueName: \"kubernetes.io/projected/ce3a9164-7ae2-4f53-82a3-d2792d25b5d0-kube-api-access-jdxjm\") pod \"collect-profiles-29460780-b68fm\" (UID: \"ce3a9164-7ae2-4f53-82a3-d2792d25b5d0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460780-b68fm" Jan 05 21:00:00 crc kubenswrapper[4754]: I0105 21:00:00.508997 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460780-b68fm" Jan 05 21:00:01 crc kubenswrapper[4754]: I0105 21:00:01.043989 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460780-b68fm"] Jan 05 21:00:01 crc kubenswrapper[4754]: I0105 21:00:01.611177 4754 generic.go:334] "Generic (PLEG): container finished" podID="ce3a9164-7ae2-4f53-82a3-d2792d25b5d0" containerID="5d656e64bc8d9068117d865479039cbe4a70e78b2122398fe4dc98b5d2c0d93b" exitCode=0 Jan 05 21:00:01 crc kubenswrapper[4754]: I0105 21:00:01.614484 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460780-b68fm" event={"ID":"ce3a9164-7ae2-4f53-82a3-d2792d25b5d0","Type":"ContainerDied","Data":"5d656e64bc8d9068117d865479039cbe4a70e78b2122398fe4dc98b5d2c0d93b"} Jan 05 21:00:01 crc kubenswrapper[4754]: I0105 21:00:01.614537 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460780-b68fm" event={"ID":"ce3a9164-7ae2-4f53-82a3-d2792d25b5d0","Type":"ContainerStarted","Data":"bf2304f965e837b048eb9881b461cdb62c0d8c80b8b2589ca4b508403f1292f1"} Jan 05 21:00:03 crc kubenswrapper[4754]: I0105 21:00:03.177833 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460780-b68fm" Jan 05 21:00:03 crc kubenswrapper[4754]: I0105 21:00:03.284048 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ce3a9164-7ae2-4f53-82a3-d2792d25b5d0-secret-volume\") pod \"ce3a9164-7ae2-4f53-82a3-d2792d25b5d0\" (UID: \"ce3a9164-7ae2-4f53-82a3-d2792d25b5d0\") " Jan 05 21:00:03 crc kubenswrapper[4754]: I0105 21:00:03.284281 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce3a9164-7ae2-4f53-82a3-d2792d25b5d0-config-volume\") pod \"ce3a9164-7ae2-4f53-82a3-d2792d25b5d0\" (UID: \"ce3a9164-7ae2-4f53-82a3-d2792d25b5d0\") " Jan 05 21:00:03 crc kubenswrapper[4754]: I0105 21:00:03.284366 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdxjm\" (UniqueName: \"kubernetes.io/projected/ce3a9164-7ae2-4f53-82a3-d2792d25b5d0-kube-api-access-jdxjm\") pod \"ce3a9164-7ae2-4f53-82a3-d2792d25b5d0\" (UID: \"ce3a9164-7ae2-4f53-82a3-d2792d25b5d0\") " Jan 05 21:00:03 crc kubenswrapper[4754]: I0105 21:00:03.285952 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce3a9164-7ae2-4f53-82a3-d2792d25b5d0-config-volume" (OuterVolumeSpecName: "config-volume") pod "ce3a9164-7ae2-4f53-82a3-d2792d25b5d0" (UID: "ce3a9164-7ae2-4f53-82a3-d2792d25b5d0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:00:03 crc kubenswrapper[4754]: I0105 21:00:03.290019 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce3a9164-7ae2-4f53-82a3-d2792d25b5d0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ce3a9164-7ae2-4f53-82a3-d2792d25b5d0" (UID: "ce3a9164-7ae2-4f53-82a3-d2792d25b5d0"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:00:03 crc kubenswrapper[4754]: I0105 21:00:03.304554 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce3a9164-7ae2-4f53-82a3-d2792d25b5d0-kube-api-access-jdxjm" (OuterVolumeSpecName: "kube-api-access-jdxjm") pod "ce3a9164-7ae2-4f53-82a3-d2792d25b5d0" (UID: "ce3a9164-7ae2-4f53-82a3-d2792d25b5d0"). InnerVolumeSpecName "kube-api-access-jdxjm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:00:03 crc kubenswrapper[4754]: I0105 21:00:03.388070 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdxjm\" (UniqueName: \"kubernetes.io/projected/ce3a9164-7ae2-4f53-82a3-d2792d25b5d0-kube-api-access-jdxjm\") on node \"crc\" DevicePath \"\"" Jan 05 21:00:03 crc kubenswrapper[4754]: I0105 21:00:03.388118 4754 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ce3a9164-7ae2-4f53-82a3-d2792d25b5d0-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 05 21:00:03 crc kubenswrapper[4754]: I0105 21:00:03.388129 4754 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce3a9164-7ae2-4f53-82a3-d2792d25b5d0-config-volume\") on node \"crc\" DevicePath \"\"" Jan 05 21:00:03 crc kubenswrapper[4754]: I0105 21:00:03.642624 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460780-b68fm" event={"ID":"ce3a9164-7ae2-4f53-82a3-d2792d25b5d0","Type":"ContainerDied","Data":"bf2304f965e837b048eb9881b461cdb62c0d8c80b8b2589ca4b508403f1292f1"} Jan 05 21:00:03 crc kubenswrapper[4754]: I0105 21:00:03.642966 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf2304f965e837b048eb9881b461cdb62c0d8c80b8b2589ca4b508403f1292f1" Jan 05 21:00:03 crc kubenswrapper[4754]: I0105 21:00:03.643040 4754 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460780-b68fm" Jan 05 21:00:04 crc kubenswrapper[4754]: I0105 21:00:04.275466 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460735-t7mv4"] Jan 05 21:00:04 crc kubenswrapper[4754]: I0105 21:00:04.290559 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460735-t7mv4"] Jan 05 21:00:05 crc kubenswrapper[4754]: I0105 21:00:05.619629 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74d086b5-d537-4b86-94a3-34dd18984ee4" path="/var/lib/kubelet/pods/74d086b5-d537-4b86-94a3-34dd18984ee4/volumes" Jan 05 21:00:10 crc kubenswrapper[4754]: I0105 21:00:10.398232 4754 scope.go:117] "RemoveContainer" containerID="c2b2ec3ce83a2d059e121ec1271c585ef7269382bf42852e6c2b4950ddd4d17f" Jan 05 21:01:00 crc kubenswrapper[4754]: I0105 21:01:00.175459 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29460781-ln2tm"] Jan 05 21:01:00 crc kubenswrapper[4754]: E0105 21:01:00.179042 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce3a9164-7ae2-4f53-82a3-d2792d25b5d0" containerName="collect-profiles" Jan 05 21:01:00 crc kubenswrapper[4754]: I0105 21:01:00.179271 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce3a9164-7ae2-4f53-82a3-d2792d25b5d0" containerName="collect-profiles" Jan 05 21:01:00 crc kubenswrapper[4754]: I0105 21:01:00.180163 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce3a9164-7ae2-4f53-82a3-d2792d25b5d0" containerName="collect-profiles" Jan 05 21:01:00 crc kubenswrapper[4754]: I0105 21:01:00.182038 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29460781-ln2tm" Jan 05 21:01:00 crc kubenswrapper[4754]: I0105 21:01:00.193495 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29460781-ln2tm"] Jan 05 21:01:00 crc kubenswrapper[4754]: I0105 21:01:00.331023 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69-combined-ca-bundle\") pod \"keystone-cron-29460781-ln2tm\" (UID: \"7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69\") " pod="openstack/keystone-cron-29460781-ln2tm" Jan 05 21:01:00 crc kubenswrapper[4754]: I0105 21:01:00.331717 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69-fernet-keys\") pod \"keystone-cron-29460781-ln2tm\" (UID: \"7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69\") " pod="openstack/keystone-cron-29460781-ln2tm" Jan 05 21:01:00 crc kubenswrapper[4754]: I0105 21:01:00.331956 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69-config-data\") pod \"keystone-cron-29460781-ln2tm\" (UID: \"7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69\") " pod="openstack/keystone-cron-29460781-ln2tm" Jan 05 21:01:00 crc kubenswrapper[4754]: I0105 21:01:00.332062 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49sh5\" (UniqueName: \"kubernetes.io/projected/7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69-kube-api-access-49sh5\") pod \"keystone-cron-29460781-ln2tm\" (UID: \"7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69\") " pod="openstack/keystone-cron-29460781-ln2tm" Jan 05 21:01:00 crc kubenswrapper[4754]: I0105 21:01:00.436528 4754 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69-config-data\") pod \"keystone-cron-29460781-ln2tm\" (UID: \"7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69\") " pod="openstack/keystone-cron-29460781-ln2tm" Jan 05 21:01:00 crc kubenswrapper[4754]: I0105 21:01:00.437150 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49sh5\" (UniqueName: \"kubernetes.io/projected/7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69-kube-api-access-49sh5\") pod \"keystone-cron-29460781-ln2tm\" (UID: \"7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69\") " pod="openstack/keystone-cron-29460781-ln2tm" Jan 05 21:01:00 crc kubenswrapper[4754]: I0105 21:01:00.437343 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69-combined-ca-bundle\") pod \"keystone-cron-29460781-ln2tm\" (UID: \"7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69\") " pod="openstack/keystone-cron-29460781-ln2tm" Jan 05 21:01:00 crc kubenswrapper[4754]: I0105 21:01:00.437444 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69-fernet-keys\") pod \"keystone-cron-29460781-ln2tm\" (UID: \"7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69\") " pod="openstack/keystone-cron-29460781-ln2tm" Jan 05 21:01:00 crc kubenswrapper[4754]: I0105 21:01:00.446307 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69-fernet-keys\") pod \"keystone-cron-29460781-ln2tm\" (UID: \"7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69\") " pod="openstack/keystone-cron-29460781-ln2tm" Jan 05 21:01:00 crc kubenswrapper[4754]: I0105 21:01:00.446656 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69-config-data\") pod \"keystone-cron-29460781-ln2tm\" (UID: \"7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69\") " pod="openstack/keystone-cron-29460781-ln2tm" Jan 05 21:01:00 crc kubenswrapper[4754]: I0105 21:01:00.447208 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69-combined-ca-bundle\") pod \"keystone-cron-29460781-ln2tm\" (UID: \"7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69\") " pod="openstack/keystone-cron-29460781-ln2tm" Jan 05 21:01:00 crc kubenswrapper[4754]: I0105 21:01:00.470193 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49sh5\" (UniqueName: \"kubernetes.io/projected/7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69-kube-api-access-49sh5\") pod \"keystone-cron-29460781-ln2tm\" (UID: \"7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69\") " pod="openstack/keystone-cron-29460781-ln2tm" Jan 05 21:01:00 crc kubenswrapper[4754]: I0105 21:01:00.514579 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29460781-ln2tm" Jan 05 21:01:01 crc kubenswrapper[4754]: I0105 21:01:01.052100 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29460781-ln2tm"] Jan 05 21:01:01 crc kubenswrapper[4754]: I0105 21:01:01.455264 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29460781-ln2tm" event={"ID":"7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69","Type":"ContainerStarted","Data":"1667a83dd0bd774fa45f936bce2071271fa13651a7bf0572cb06954698f8af0b"} Jan 05 21:01:01 crc kubenswrapper[4754]: I0105 21:01:01.455642 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29460781-ln2tm" event={"ID":"7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69","Type":"ContainerStarted","Data":"5544601f3368cb3e6b6b3b34acfe5e1226e8b589b3e71619fd96ae72ee24f42a"} Jan 05 21:01:01 crc kubenswrapper[4754]: I0105 21:01:01.480012 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29460781-ln2tm" podStartSLOduration=1.479995741 podStartE2EDuration="1.479995741s" podCreationTimestamp="2026-01-05 21:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:01:01.473977292 +0000 UTC m=+3348.183161166" watchObservedRunningTime="2026-01-05 21:01:01.479995741 +0000 UTC m=+3348.189179615" Jan 05 21:01:04 crc kubenswrapper[4754]: I0105 21:01:04.501388 4754 generic.go:334] "Generic (PLEG): container finished" podID="7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69" containerID="1667a83dd0bd774fa45f936bce2071271fa13651a7bf0572cb06954698f8af0b" exitCode=0 Jan 05 21:01:04 crc kubenswrapper[4754]: I0105 21:01:04.501498 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29460781-ln2tm" 
event={"ID":"7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69","Type":"ContainerDied","Data":"1667a83dd0bd774fa45f936bce2071271fa13651a7bf0572cb06954698f8af0b"} Jan 05 21:01:05 crc kubenswrapper[4754]: I0105 21:01:05.966066 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29460781-ln2tm" Jan 05 21:01:06 crc kubenswrapper[4754]: I0105 21:01:06.096690 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69-fernet-keys\") pod \"7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69\" (UID: \"7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69\") " Jan 05 21:01:06 crc kubenswrapper[4754]: I0105 21:01:06.096755 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69-combined-ca-bundle\") pod \"7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69\" (UID: \"7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69\") " Jan 05 21:01:06 crc kubenswrapper[4754]: I0105 21:01:06.096830 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49sh5\" (UniqueName: \"kubernetes.io/projected/7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69-kube-api-access-49sh5\") pod \"7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69\" (UID: \"7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69\") " Jan 05 21:01:06 crc kubenswrapper[4754]: I0105 21:01:06.096997 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69-config-data\") pod \"7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69\" (UID: \"7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69\") " Jan 05 21:01:06 crc kubenswrapper[4754]: I0105 21:01:06.105557 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69-kube-api-access-49sh5" 
(OuterVolumeSpecName: "kube-api-access-49sh5") pod "7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69" (UID: "7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69"). InnerVolumeSpecName "kube-api-access-49sh5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:01:06 crc kubenswrapper[4754]: I0105 21:01:06.107578 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69" (UID: "7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:01:06 crc kubenswrapper[4754]: I0105 21:01:06.155510 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69" (UID: "7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:01:06 crc kubenswrapper[4754]: I0105 21:01:06.200724 4754 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 05 21:01:06 crc kubenswrapper[4754]: I0105 21:01:06.200762 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:01:06 crc kubenswrapper[4754]: I0105 21:01:06.200775 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49sh5\" (UniqueName: \"kubernetes.io/projected/7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69-kube-api-access-49sh5\") on node \"crc\" DevicePath \"\"" Jan 05 21:01:06 crc kubenswrapper[4754]: I0105 21:01:06.209633 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69-config-data" (OuterVolumeSpecName: "config-data") pod "7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69" (UID: "7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:01:06 crc kubenswrapper[4754]: I0105 21:01:06.320076 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 21:01:06 crc kubenswrapper[4754]: I0105 21:01:06.539733 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29460781-ln2tm" event={"ID":"7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69","Type":"ContainerDied","Data":"5544601f3368cb3e6b6b3b34acfe5e1226e8b589b3e71619fd96ae72ee24f42a"} Jan 05 21:01:06 crc kubenswrapper[4754]: I0105 21:01:06.540785 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5544601f3368cb3e6b6b3b34acfe5e1226e8b589b3e71619fd96ae72ee24f42a" Jan 05 21:01:06 crc kubenswrapper[4754]: I0105 21:01:06.541108 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29460781-ln2tm" Jan 05 21:01:14 crc kubenswrapper[4754]: I0105 21:01:14.075478 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-j7z7j"] Jan 05 21:01:14 crc kubenswrapper[4754]: E0105 21:01:14.076942 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69" containerName="keystone-cron" Jan 05 21:01:14 crc kubenswrapper[4754]: I0105 21:01:14.076967 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69" containerName="keystone-cron" Jan 05 21:01:14 crc kubenswrapper[4754]: I0105 21:01:14.077417 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69" containerName="keystone-cron" Jan 05 21:01:14 crc kubenswrapper[4754]: I0105 21:01:14.083001 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j7z7j" Jan 05 21:01:14 crc kubenswrapper[4754]: I0105 21:01:14.103025 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j7z7j"] Jan 05 21:01:14 crc kubenswrapper[4754]: I0105 21:01:14.266175 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxtf2\" (UniqueName: \"kubernetes.io/projected/9d3305db-ec88-482b-ad76-b7799540be81-kube-api-access-cxtf2\") pod \"community-operators-j7z7j\" (UID: \"9d3305db-ec88-482b-ad76-b7799540be81\") " pod="openshift-marketplace/community-operators-j7z7j" Jan 05 21:01:14 crc kubenswrapper[4754]: I0105 21:01:14.266600 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d3305db-ec88-482b-ad76-b7799540be81-catalog-content\") pod \"community-operators-j7z7j\" (UID: \"9d3305db-ec88-482b-ad76-b7799540be81\") " pod="openshift-marketplace/community-operators-j7z7j" Jan 05 21:01:14 crc kubenswrapper[4754]: I0105 21:01:14.266828 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d3305db-ec88-482b-ad76-b7799540be81-utilities\") pod \"community-operators-j7z7j\" (UID: \"9d3305db-ec88-482b-ad76-b7799540be81\") " pod="openshift-marketplace/community-operators-j7z7j" Jan 05 21:01:14 crc kubenswrapper[4754]: I0105 21:01:14.368986 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxtf2\" (UniqueName: \"kubernetes.io/projected/9d3305db-ec88-482b-ad76-b7799540be81-kube-api-access-cxtf2\") pod \"community-operators-j7z7j\" (UID: \"9d3305db-ec88-482b-ad76-b7799540be81\") " pod="openshift-marketplace/community-operators-j7z7j" Jan 05 21:01:14 crc kubenswrapper[4754]: I0105 21:01:14.369426 4754 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d3305db-ec88-482b-ad76-b7799540be81-catalog-content\") pod \"community-operators-j7z7j\" (UID: \"9d3305db-ec88-482b-ad76-b7799540be81\") " pod="openshift-marketplace/community-operators-j7z7j" Jan 05 21:01:14 crc kubenswrapper[4754]: I0105 21:01:14.369510 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d3305db-ec88-482b-ad76-b7799540be81-utilities\") pod \"community-operators-j7z7j\" (UID: \"9d3305db-ec88-482b-ad76-b7799540be81\") " pod="openshift-marketplace/community-operators-j7z7j" Jan 05 21:01:14 crc kubenswrapper[4754]: I0105 21:01:14.369992 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d3305db-ec88-482b-ad76-b7799540be81-utilities\") pod \"community-operators-j7z7j\" (UID: \"9d3305db-ec88-482b-ad76-b7799540be81\") " pod="openshift-marketplace/community-operators-j7z7j" Jan 05 21:01:14 crc kubenswrapper[4754]: I0105 21:01:14.370046 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d3305db-ec88-482b-ad76-b7799540be81-catalog-content\") pod \"community-operators-j7z7j\" (UID: \"9d3305db-ec88-482b-ad76-b7799540be81\") " pod="openshift-marketplace/community-operators-j7z7j" Jan 05 21:01:14 crc kubenswrapper[4754]: I0105 21:01:14.389351 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxtf2\" (UniqueName: \"kubernetes.io/projected/9d3305db-ec88-482b-ad76-b7799540be81-kube-api-access-cxtf2\") pod \"community-operators-j7z7j\" (UID: \"9d3305db-ec88-482b-ad76-b7799540be81\") " pod="openshift-marketplace/community-operators-j7z7j" Jan 05 21:01:14 crc kubenswrapper[4754]: I0105 21:01:14.420900 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j7z7j" Jan 05 21:01:15 crc kubenswrapper[4754]: I0105 21:01:15.005694 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j7z7j"] Jan 05 21:01:15 crc kubenswrapper[4754]: I0105 21:01:15.694074 4754 generic.go:334] "Generic (PLEG): container finished" podID="9d3305db-ec88-482b-ad76-b7799540be81" containerID="48d839058f34ad42033404405f717b24645bfc751485a0000070c3c844ae66e9" exitCode=0 Jan 05 21:01:15 crc kubenswrapper[4754]: I0105 21:01:15.694123 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j7z7j" event={"ID":"9d3305db-ec88-482b-ad76-b7799540be81","Type":"ContainerDied","Data":"48d839058f34ad42033404405f717b24645bfc751485a0000070c3c844ae66e9"} Jan 05 21:01:15 crc kubenswrapper[4754]: I0105 21:01:15.694402 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j7z7j" event={"ID":"9d3305db-ec88-482b-ad76-b7799540be81","Type":"ContainerStarted","Data":"8ed0d1b2cfb86c9e830c335b7edc36c5e7af4a5ccf567d07728d916587ddcce8"} Jan 05 21:01:17 crc kubenswrapper[4754]: I0105 21:01:17.734605 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j7z7j" event={"ID":"9d3305db-ec88-482b-ad76-b7799540be81","Type":"ContainerStarted","Data":"21d3d38e2c26434f97dbc9abc22cf6af1fdf78b9e7b9fb8f462270b439accc23"} Jan 05 21:01:18 crc kubenswrapper[4754]: I0105 21:01:18.110044 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:01:18 crc kubenswrapper[4754]: I0105 21:01:18.110455 4754 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:01:18 crc kubenswrapper[4754]: I0105 21:01:18.773146 4754 generic.go:334] "Generic (PLEG): container finished" podID="9d3305db-ec88-482b-ad76-b7799540be81" containerID="21d3d38e2c26434f97dbc9abc22cf6af1fdf78b9e7b9fb8f462270b439accc23" exitCode=0 Jan 05 21:01:18 crc kubenswrapper[4754]: I0105 21:01:18.773210 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j7z7j" event={"ID":"9d3305db-ec88-482b-ad76-b7799540be81","Type":"ContainerDied","Data":"21d3d38e2c26434f97dbc9abc22cf6af1fdf78b9e7b9fb8f462270b439accc23"} Jan 05 21:01:19 crc kubenswrapper[4754]: I0105 21:01:19.785033 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j7z7j" event={"ID":"9d3305db-ec88-482b-ad76-b7799540be81","Type":"ContainerStarted","Data":"5bec396b06175b4679c99b1b398b7a30a6703f94e51a15152ad04aaffc99e345"} Jan 05 21:01:19 crc kubenswrapper[4754]: I0105 21:01:19.805277 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-j7z7j" podStartSLOduration=2.04905373 podStartE2EDuration="5.805262001s" podCreationTimestamp="2026-01-05 21:01:14 +0000 UTC" firstStartedPulling="2026-01-05 21:01:15.699772683 +0000 UTC m=+3362.408956567" lastFinishedPulling="2026-01-05 21:01:19.455980924 +0000 UTC m=+3366.165164838" observedRunningTime="2026-01-05 21:01:19.80067009 +0000 UTC m=+3366.509853964" watchObservedRunningTime="2026-01-05 21:01:19.805262001 +0000 UTC m=+3366.514445875" Jan 05 21:01:24 crc kubenswrapper[4754]: I0105 21:01:24.421784 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-j7z7j" Jan 05 21:01:24 crc kubenswrapper[4754]: I0105 21:01:24.422614 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-j7z7j" Jan 05 21:01:24 crc kubenswrapper[4754]: I0105 21:01:24.519188 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-j7z7j" Jan 05 21:01:24 crc kubenswrapper[4754]: I0105 21:01:24.919458 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-j7z7j" Jan 05 21:01:24 crc kubenswrapper[4754]: I0105 21:01:24.982722 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j7z7j"] Jan 05 21:01:26 crc kubenswrapper[4754]: I0105 21:01:26.875964 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-j7z7j" podUID="9d3305db-ec88-482b-ad76-b7799540be81" containerName="registry-server" containerID="cri-o://5bec396b06175b4679c99b1b398b7a30a6703f94e51a15152ad04aaffc99e345" gracePeriod=2 Jan 05 21:01:27 crc kubenswrapper[4754]: I0105 21:01:27.523582 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j7z7j" Jan 05 21:01:27 crc kubenswrapper[4754]: I0105 21:01:27.655457 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d3305db-ec88-482b-ad76-b7799540be81-catalog-content\") pod \"9d3305db-ec88-482b-ad76-b7799540be81\" (UID: \"9d3305db-ec88-482b-ad76-b7799540be81\") " Jan 05 21:01:27 crc kubenswrapper[4754]: I0105 21:01:27.655680 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d3305db-ec88-482b-ad76-b7799540be81-utilities\") pod \"9d3305db-ec88-482b-ad76-b7799540be81\" (UID: \"9d3305db-ec88-482b-ad76-b7799540be81\") " Jan 05 21:01:27 crc kubenswrapper[4754]: I0105 21:01:27.655724 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxtf2\" (UniqueName: \"kubernetes.io/projected/9d3305db-ec88-482b-ad76-b7799540be81-kube-api-access-cxtf2\") pod \"9d3305db-ec88-482b-ad76-b7799540be81\" (UID: \"9d3305db-ec88-482b-ad76-b7799540be81\") " Jan 05 21:01:27 crc kubenswrapper[4754]: I0105 21:01:27.657541 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d3305db-ec88-482b-ad76-b7799540be81-utilities" (OuterVolumeSpecName: "utilities") pod "9d3305db-ec88-482b-ad76-b7799540be81" (UID: "9d3305db-ec88-482b-ad76-b7799540be81"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:01:27 crc kubenswrapper[4754]: I0105 21:01:27.671524 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d3305db-ec88-482b-ad76-b7799540be81-kube-api-access-cxtf2" (OuterVolumeSpecName: "kube-api-access-cxtf2") pod "9d3305db-ec88-482b-ad76-b7799540be81" (UID: "9d3305db-ec88-482b-ad76-b7799540be81"). InnerVolumeSpecName "kube-api-access-cxtf2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:01:27 crc kubenswrapper[4754]: I0105 21:01:27.745633 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d3305db-ec88-482b-ad76-b7799540be81-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9d3305db-ec88-482b-ad76-b7799540be81" (UID: "9d3305db-ec88-482b-ad76-b7799540be81"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:01:27 crc kubenswrapper[4754]: I0105 21:01:27.758917 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d3305db-ec88-482b-ad76-b7799540be81-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 21:01:27 crc kubenswrapper[4754]: I0105 21:01:27.758961 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d3305db-ec88-482b-ad76-b7799540be81-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 21:01:27 crc kubenswrapper[4754]: I0105 21:01:27.758974 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxtf2\" (UniqueName: \"kubernetes.io/projected/9d3305db-ec88-482b-ad76-b7799540be81-kube-api-access-cxtf2\") on node \"crc\" DevicePath \"\"" Jan 05 21:01:27 crc kubenswrapper[4754]: I0105 21:01:27.891572 4754 generic.go:334] "Generic (PLEG): container finished" podID="9d3305db-ec88-482b-ad76-b7799540be81" containerID="5bec396b06175b4679c99b1b398b7a30a6703f94e51a15152ad04aaffc99e345" exitCode=0 Jan 05 21:01:27 crc kubenswrapper[4754]: I0105 21:01:27.891621 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j7z7j" event={"ID":"9d3305db-ec88-482b-ad76-b7799540be81","Type":"ContainerDied","Data":"5bec396b06175b4679c99b1b398b7a30a6703f94e51a15152ad04aaffc99e345"} Jan 05 21:01:27 crc kubenswrapper[4754]: I0105 21:01:27.891654 4754 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-j7z7j" event={"ID":"9d3305db-ec88-482b-ad76-b7799540be81","Type":"ContainerDied","Data":"8ed0d1b2cfb86c9e830c335b7edc36c5e7af4a5ccf567d07728d916587ddcce8"} Jan 05 21:01:27 crc kubenswrapper[4754]: I0105 21:01:27.891677 4754 scope.go:117] "RemoveContainer" containerID="5bec396b06175b4679c99b1b398b7a30a6703f94e51a15152ad04aaffc99e345" Jan 05 21:01:27 crc kubenswrapper[4754]: I0105 21:01:27.891727 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j7z7j" Jan 05 21:01:27 crc kubenswrapper[4754]: I0105 21:01:27.925593 4754 scope.go:117] "RemoveContainer" containerID="21d3d38e2c26434f97dbc9abc22cf6af1fdf78b9e7b9fb8f462270b439accc23" Jan 05 21:01:27 crc kubenswrapper[4754]: I0105 21:01:27.952608 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j7z7j"] Jan 05 21:01:27 crc kubenswrapper[4754]: I0105 21:01:27.964505 4754 scope.go:117] "RemoveContainer" containerID="48d839058f34ad42033404405f717b24645bfc751485a0000070c3c844ae66e9" Jan 05 21:01:27 crc kubenswrapper[4754]: I0105 21:01:27.974177 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-j7z7j"] Jan 05 21:01:28 crc kubenswrapper[4754]: I0105 21:01:28.016021 4754 scope.go:117] "RemoveContainer" containerID="5bec396b06175b4679c99b1b398b7a30a6703f94e51a15152ad04aaffc99e345" Jan 05 21:01:28 crc kubenswrapper[4754]: E0105 21:01:28.016627 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bec396b06175b4679c99b1b398b7a30a6703f94e51a15152ad04aaffc99e345\": container with ID starting with 5bec396b06175b4679c99b1b398b7a30a6703f94e51a15152ad04aaffc99e345 not found: ID does not exist" containerID="5bec396b06175b4679c99b1b398b7a30a6703f94e51a15152ad04aaffc99e345" Jan 05 21:01:28 crc kubenswrapper[4754]: I0105 
21:01:28.016656 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bec396b06175b4679c99b1b398b7a30a6703f94e51a15152ad04aaffc99e345"} err="failed to get container status \"5bec396b06175b4679c99b1b398b7a30a6703f94e51a15152ad04aaffc99e345\": rpc error: code = NotFound desc = could not find container \"5bec396b06175b4679c99b1b398b7a30a6703f94e51a15152ad04aaffc99e345\": container with ID starting with 5bec396b06175b4679c99b1b398b7a30a6703f94e51a15152ad04aaffc99e345 not found: ID does not exist" Jan 05 21:01:28 crc kubenswrapper[4754]: I0105 21:01:28.016676 4754 scope.go:117] "RemoveContainer" containerID="21d3d38e2c26434f97dbc9abc22cf6af1fdf78b9e7b9fb8f462270b439accc23" Jan 05 21:01:28 crc kubenswrapper[4754]: E0105 21:01:28.017913 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21d3d38e2c26434f97dbc9abc22cf6af1fdf78b9e7b9fb8f462270b439accc23\": container with ID starting with 21d3d38e2c26434f97dbc9abc22cf6af1fdf78b9e7b9fb8f462270b439accc23 not found: ID does not exist" containerID="21d3d38e2c26434f97dbc9abc22cf6af1fdf78b9e7b9fb8f462270b439accc23" Jan 05 21:01:28 crc kubenswrapper[4754]: I0105 21:01:28.017945 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21d3d38e2c26434f97dbc9abc22cf6af1fdf78b9e7b9fb8f462270b439accc23"} err="failed to get container status \"21d3d38e2c26434f97dbc9abc22cf6af1fdf78b9e7b9fb8f462270b439accc23\": rpc error: code = NotFound desc = could not find container \"21d3d38e2c26434f97dbc9abc22cf6af1fdf78b9e7b9fb8f462270b439accc23\": container with ID starting with 21d3d38e2c26434f97dbc9abc22cf6af1fdf78b9e7b9fb8f462270b439accc23 not found: ID does not exist" Jan 05 21:01:28 crc kubenswrapper[4754]: I0105 21:01:28.017966 4754 scope.go:117] "RemoveContainer" containerID="48d839058f34ad42033404405f717b24645bfc751485a0000070c3c844ae66e9" Jan 05 21:01:28 crc 
kubenswrapper[4754]: E0105 21:01:28.018328 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48d839058f34ad42033404405f717b24645bfc751485a0000070c3c844ae66e9\": container with ID starting with 48d839058f34ad42033404405f717b24645bfc751485a0000070c3c844ae66e9 not found: ID does not exist" containerID="48d839058f34ad42033404405f717b24645bfc751485a0000070c3c844ae66e9" Jan 05 21:01:28 crc kubenswrapper[4754]: I0105 21:01:28.018354 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48d839058f34ad42033404405f717b24645bfc751485a0000070c3c844ae66e9"} err="failed to get container status \"48d839058f34ad42033404405f717b24645bfc751485a0000070c3c844ae66e9\": rpc error: code = NotFound desc = could not find container \"48d839058f34ad42033404405f717b24645bfc751485a0000070c3c844ae66e9\": container with ID starting with 48d839058f34ad42033404405f717b24645bfc751485a0000070c3c844ae66e9 not found: ID does not exist" Jan 05 21:01:29 crc kubenswrapper[4754]: I0105 21:01:29.613504 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d3305db-ec88-482b-ad76-b7799540be81" path="/var/lib/kubelet/pods/9d3305db-ec88-482b-ad76-b7799540be81/volumes" Jan 05 21:01:48 crc kubenswrapper[4754]: I0105 21:01:48.109609 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:01:48 crc kubenswrapper[4754]: I0105 21:01:48.110624 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Jan 05 21:01:52 crc kubenswrapper[4754]: I0105 21:01:52.566694 4754 generic.go:334] "Generic (PLEG): container finished" podID="e76d36bb-8a03-46b6-82b1-1bcbdcbbda25" containerID="2d9e1316575d76a9b64ce1a60e897ed66ecfb0119ad9888937a878a9cddde3a5" exitCode=0 Jan 05 21:01:52 crc kubenswrapper[4754]: I0105 21:01:52.566781 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-bnq8j" event={"ID":"e76d36bb-8a03-46b6-82b1-1bcbdcbbda25","Type":"ContainerDied","Data":"2d9e1316575d76a9b64ce1a60e897ed66ecfb0119ad9888937a878a9cddde3a5"} Jan 05 21:01:54 crc kubenswrapper[4754]: I0105 21:01:54.126074 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-bnq8j" Jan 05 21:01:54 crc kubenswrapper[4754]: I0105 21:01:54.171635 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e76d36bb-8a03-46b6-82b1-1bcbdcbbda25-inventory\") pod \"e76d36bb-8a03-46b6-82b1-1bcbdcbbda25\" (UID: \"e76d36bb-8a03-46b6-82b1-1bcbdcbbda25\") " Jan 05 21:01:54 crc kubenswrapper[4754]: I0105 21:01:54.172430 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/e76d36bb-8a03-46b6-82b1-1bcbdcbbda25-ceilometer-ipmi-config-data-0\") pod \"e76d36bb-8a03-46b6-82b1-1bcbdcbbda25\" (UID: \"e76d36bb-8a03-46b6-82b1-1bcbdcbbda25\") " Jan 05 21:01:54 crc kubenswrapper[4754]: I0105 21:01:54.224233 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e76d36bb-8a03-46b6-82b1-1bcbdcbbda25-ceilometer-ipmi-config-data-0" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-0") pod "e76d36bb-8a03-46b6-82b1-1bcbdcbbda25" (UID: "e76d36bb-8a03-46b6-82b1-1bcbdcbbda25"). 
InnerVolumeSpecName "ceilometer-ipmi-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:01:54 crc kubenswrapper[4754]: I0105 21:01:54.227392 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e76d36bb-8a03-46b6-82b1-1bcbdcbbda25-inventory" (OuterVolumeSpecName: "inventory") pod "e76d36bb-8a03-46b6-82b1-1bcbdcbbda25" (UID: "e76d36bb-8a03-46b6-82b1-1bcbdcbbda25"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:01:54 crc kubenswrapper[4754]: I0105 21:01:54.275062 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e76d36bb-8a03-46b6-82b1-1bcbdcbbda25-telemetry-power-monitoring-combined-ca-bundle\") pod \"e76d36bb-8a03-46b6-82b1-1bcbdcbbda25\" (UID: \"e76d36bb-8a03-46b6-82b1-1bcbdcbbda25\") " Jan 05 21:01:54 crc kubenswrapper[4754]: I0105 21:01:54.275113 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/e76d36bb-8a03-46b6-82b1-1bcbdcbbda25-ceilometer-ipmi-config-data-2\") pod \"e76d36bb-8a03-46b6-82b1-1bcbdcbbda25\" (UID: \"e76d36bb-8a03-46b6-82b1-1bcbdcbbda25\") " Jan 05 21:01:54 crc kubenswrapper[4754]: I0105 21:01:54.275321 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/e76d36bb-8a03-46b6-82b1-1bcbdcbbda25-ceilometer-ipmi-config-data-1\") pod \"e76d36bb-8a03-46b6-82b1-1bcbdcbbda25\" (UID: \"e76d36bb-8a03-46b6-82b1-1bcbdcbbda25\") " Jan 05 21:01:54 crc kubenswrapper[4754]: I0105 21:01:54.275386 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e76d36bb-8a03-46b6-82b1-1bcbdcbbda25-ssh-key\") pod \"e76d36bb-8a03-46b6-82b1-1bcbdcbbda25\" (UID: 
\"e76d36bb-8a03-46b6-82b1-1bcbdcbbda25\") " Jan 05 21:01:54 crc kubenswrapper[4754]: I0105 21:01:54.275433 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g54cc\" (UniqueName: \"kubernetes.io/projected/e76d36bb-8a03-46b6-82b1-1bcbdcbbda25-kube-api-access-g54cc\") pod \"e76d36bb-8a03-46b6-82b1-1bcbdcbbda25\" (UID: \"e76d36bb-8a03-46b6-82b1-1bcbdcbbda25\") " Jan 05 21:01:54 crc kubenswrapper[4754]: I0105 21:01:54.277100 4754 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e76d36bb-8a03-46b6-82b1-1bcbdcbbda25-inventory\") on node \"crc\" DevicePath \"\"" Jan 05 21:01:54 crc kubenswrapper[4754]: I0105 21:01:54.277124 4754 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/e76d36bb-8a03-46b6-82b1-1bcbdcbbda25-ceilometer-ipmi-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 05 21:01:54 crc kubenswrapper[4754]: I0105 21:01:54.278928 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e76d36bb-8a03-46b6-82b1-1bcbdcbbda25-kube-api-access-g54cc" (OuterVolumeSpecName: "kube-api-access-g54cc") pod "e76d36bb-8a03-46b6-82b1-1bcbdcbbda25" (UID: "e76d36bb-8a03-46b6-82b1-1bcbdcbbda25"). InnerVolumeSpecName "kube-api-access-g54cc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:01:54 crc kubenswrapper[4754]: I0105 21:01:54.281536 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e76d36bb-8a03-46b6-82b1-1bcbdcbbda25-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "e76d36bb-8a03-46b6-82b1-1bcbdcbbda25" (UID: "e76d36bb-8a03-46b6-82b1-1bcbdcbbda25"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:01:54 crc kubenswrapper[4754]: I0105 21:01:54.311401 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e76d36bb-8a03-46b6-82b1-1bcbdcbbda25-ceilometer-ipmi-config-data-2" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-2") pod "e76d36bb-8a03-46b6-82b1-1bcbdcbbda25" (UID: "e76d36bb-8a03-46b6-82b1-1bcbdcbbda25"). InnerVolumeSpecName "ceilometer-ipmi-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:01:54 crc kubenswrapper[4754]: I0105 21:01:54.312014 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e76d36bb-8a03-46b6-82b1-1bcbdcbbda25-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e76d36bb-8a03-46b6-82b1-1bcbdcbbda25" (UID: "e76d36bb-8a03-46b6-82b1-1bcbdcbbda25"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:01:54 crc kubenswrapper[4754]: I0105 21:01:54.322099 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e76d36bb-8a03-46b6-82b1-1bcbdcbbda25-ceilometer-ipmi-config-data-1" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-1") pod "e76d36bb-8a03-46b6-82b1-1bcbdcbbda25" (UID: "e76d36bb-8a03-46b6-82b1-1bcbdcbbda25"). InnerVolumeSpecName "ceilometer-ipmi-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:01:54 crc kubenswrapper[4754]: I0105 21:01:54.381159 4754 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e76d36bb-8a03-46b6-82b1-1bcbdcbbda25-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:01:54 crc kubenswrapper[4754]: I0105 21:01:54.381633 4754 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/e76d36bb-8a03-46b6-82b1-1bcbdcbbda25-ceilometer-ipmi-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 05 21:01:54 crc kubenswrapper[4754]: I0105 21:01:54.381661 4754 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/e76d36bb-8a03-46b6-82b1-1bcbdcbbda25-ceilometer-ipmi-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 05 21:01:54 crc kubenswrapper[4754]: I0105 21:01:54.381676 4754 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e76d36bb-8a03-46b6-82b1-1bcbdcbbda25-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 05 21:01:54 crc kubenswrapper[4754]: I0105 21:01:54.381692 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g54cc\" (UniqueName: \"kubernetes.io/projected/e76d36bb-8a03-46b6-82b1-1bcbdcbbda25-kube-api-access-g54cc\") on node \"crc\" DevicePath \"\"" Jan 05 21:01:54 crc kubenswrapper[4754]: I0105 21:01:54.595982 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-bnq8j" event={"ID":"e76d36bb-8a03-46b6-82b1-1bcbdcbbda25","Type":"ContainerDied","Data":"651588dc96f0edaaec80cc519f176ebdc1414c7b8f31e75c979adccf5ad0c33a"} Jan 05 21:01:54 crc kubenswrapper[4754]: I0105 21:01:54.596031 4754 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="651588dc96f0edaaec80cc519f176ebdc1414c7b8f31e75c979adccf5ad0c33a" Jan 05 21:01:54 crc kubenswrapper[4754]: I0105 21:01:54.596053 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-bnq8j" Jan 05 21:01:54 crc kubenswrapper[4754]: I0105 21:01:54.723860 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-kpmfj"] Jan 05 21:01:54 crc kubenswrapper[4754]: E0105 21:01:54.724562 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e76d36bb-8a03-46b6-82b1-1bcbdcbbda25" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Jan 05 21:01:54 crc kubenswrapper[4754]: I0105 21:01:54.724583 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="e76d36bb-8a03-46b6-82b1-1bcbdcbbda25" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Jan 05 21:01:54 crc kubenswrapper[4754]: E0105 21:01:54.724601 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d3305db-ec88-482b-ad76-b7799540be81" containerName="extract-content" Jan 05 21:01:54 crc kubenswrapper[4754]: I0105 21:01:54.724608 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d3305db-ec88-482b-ad76-b7799540be81" containerName="extract-content" Jan 05 21:01:54 crc kubenswrapper[4754]: E0105 21:01:54.724625 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d3305db-ec88-482b-ad76-b7799540be81" containerName="extract-utilities" Jan 05 21:01:54 crc kubenswrapper[4754]: I0105 21:01:54.724631 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d3305db-ec88-482b-ad76-b7799540be81" containerName="extract-utilities" Jan 05 21:01:54 crc kubenswrapper[4754]: E0105 21:01:54.724652 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d3305db-ec88-482b-ad76-b7799540be81" containerName="registry-server" Jan 05 21:01:54 crc 
kubenswrapper[4754]: I0105 21:01:54.724659 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d3305db-ec88-482b-ad76-b7799540be81" containerName="registry-server" Jan 05 21:01:54 crc kubenswrapper[4754]: I0105 21:01:54.724932 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="e76d36bb-8a03-46b6-82b1-1bcbdcbbda25" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Jan 05 21:01:54 crc kubenswrapper[4754]: I0105 21:01:54.724946 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d3305db-ec88-482b-ad76-b7799540be81" containerName="registry-server" Jan 05 21:01:54 crc kubenswrapper[4754]: I0105 21:01:54.725724 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-kpmfj" Jan 05 21:01:54 crc kubenswrapper[4754]: I0105 21:01:54.728233 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 05 21:01:54 crc kubenswrapper[4754]: I0105 21:01:54.728553 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"logging-compute-config-data" Jan 05 21:01:54 crc kubenswrapper[4754]: I0105 21:01:54.728757 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 05 21:01:54 crc kubenswrapper[4754]: I0105 21:01:54.728873 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 05 21:01:54 crc kubenswrapper[4754]: I0105 21:01:54.729196 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xdt6h" Jan 05 21:01:54 crc kubenswrapper[4754]: I0105 21:01:54.770655 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-kpmfj"] Jan 05 21:01:54 crc kubenswrapper[4754]: I0105 21:01:54.893335 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/09367f79-31eb-4f9f-a630-fb1dbbfd4e39-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-kpmfj\" (UID: \"09367f79-31eb-4f9f-a630-fb1dbbfd4e39\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-kpmfj" Jan 05 21:01:54 crc kubenswrapper[4754]: I0105 21:01:54.893378 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8xfq\" (UniqueName: \"kubernetes.io/projected/09367f79-31eb-4f9f-a630-fb1dbbfd4e39-kube-api-access-b8xfq\") pod \"logging-edpm-deployment-openstack-edpm-ipam-kpmfj\" (UID: \"09367f79-31eb-4f9f-a630-fb1dbbfd4e39\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-kpmfj" Jan 05 21:01:54 crc kubenswrapper[4754]: I0105 21:01:54.893462 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09367f79-31eb-4f9f-a630-fb1dbbfd4e39-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-kpmfj\" (UID: \"09367f79-31eb-4f9f-a630-fb1dbbfd4e39\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-kpmfj" Jan 05 21:01:54 crc kubenswrapper[4754]: I0105 21:01:54.893482 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/09367f79-31eb-4f9f-a630-fb1dbbfd4e39-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-kpmfj\" (UID: \"09367f79-31eb-4f9f-a630-fb1dbbfd4e39\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-kpmfj" Jan 05 21:01:54 crc kubenswrapper[4754]: I0105 21:01:54.893520 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/09367f79-31eb-4f9f-a630-fb1dbbfd4e39-ssh-key\") pod \"logging-edpm-deployment-openstack-edpm-ipam-kpmfj\" (UID: \"09367f79-31eb-4f9f-a630-fb1dbbfd4e39\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-kpmfj" Jan 05 21:01:54 crc kubenswrapper[4754]: I0105 21:01:54.997114 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09367f79-31eb-4f9f-a630-fb1dbbfd4e39-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-kpmfj\" (UID: \"09367f79-31eb-4f9f-a630-fb1dbbfd4e39\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-kpmfj" Jan 05 21:01:54 crc kubenswrapper[4754]: I0105 21:01:54.997204 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/09367f79-31eb-4f9f-a630-fb1dbbfd4e39-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-kpmfj\" (UID: \"09367f79-31eb-4f9f-a630-fb1dbbfd4e39\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-kpmfj" Jan 05 21:01:54 crc kubenswrapper[4754]: I0105 21:01:54.997279 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/09367f79-31eb-4f9f-a630-fb1dbbfd4e39-ssh-key\") pod \"logging-edpm-deployment-openstack-edpm-ipam-kpmfj\" (UID: \"09367f79-31eb-4f9f-a630-fb1dbbfd4e39\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-kpmfj" Jan 05 21:01:54 crc kubenswrapper[4754]: I0105 21:01:54.997594 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/09367f79-31eb-4f9f-a630-fb1dbbfd4e39-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-kpmfj\" (UID: \"09367f79-31eb-4f9f-a630-fb1dbbfd4e39\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-kpmfj" Jan 05 
21:01:54 crc kubenswrapper[4754]: I0105 21:01:54.997636 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8xfq\" (UniqueName: \"kubernetes.io/projected/09367f79-31eb-4f9f-a630-fb1dbbfd4e39-kube-api-access-b8xfq\") pod \"logging-edpm-deployment-openstack-edpm-ipam-kpmfj\" (UID: \"09367f79-31eb-4f9f-a630-fb1dbbfd4e39\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-kpmfj" Jan 05 21:01:55 crc kubenswrapper[4754]: I0105 21:01:55.001825 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09367f79-31eb-4f9f-a630-fb1dbbfd4e39-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-kpmfj\" (UID: \"09367f79-31eb-4f9f-a630-fb1dbbfd4e39\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-kpmfj" Jan 05 21:01:55 crc kubenswrapper[4754]: I0105 21:01:55.004601 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/09367f79-31eb-4f9f-a630-fb1dbbfd4e39-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-kpmfj\" (UID: \"09367f79-31eb-4f9f-a630-fb1dbbfd4e39\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-kpmfj" Jan 05 21:01:55 crc kubenswrapper[4754]: I0105 21:01:55.006508 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/09367f79-31eb-4f9f-a630-fb1dbbfd4e39-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-kpmfj\" (UID: \"09367f79-31eb-4f9f-a630-fb1dbbfd4e39\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-kpmfj" Jan 05 21:01:55 crc kubenswrapper[4754]: I0105 21:01:55.015412 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/09367f79-31eb-4f9f-a630-fb1dbbfd4e39-ssh-key\") pod 
\"logging-edpm-deployment-openstack-edpm-ipam-kpmfj\" (UID: \"09367f79-31eb-4f9f-a630-fb1dbbfd4e39\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-kpmfj" Jan 05 21:01:55 crc kubenswrapper[4754]: I0105 21:01:55.015795 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8xfq\" (UniqueName: \"kubernetes.io/projected/09367f79-31eb-4f9f-a630-fb1dbbfd4e39-kube-api-access-b8xfq\") pod \"logging-edpm-deployment-openstack-edpm-ipam-kpmfj\" (UID: \"09367f79-31eb-4f9f-a630-fb1dbbfd4e39\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-kpmfj" Jan 05 21:01:55 crc kubenswrapper[4754]: I0105 21:01:55.048234 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-kpmfj" Jan 05 21:01:55 crc kubenswrapper[4754]: I0105 21:01:55.666741 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-kpmfj"] Jan 05 21:01:56 crc kubenswrapper[4754]: I0105 21:01:56.628815 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-kpmfj" event={"ID":"09367f79-31eb-4f9f-a630-fb1dbbfd4e39","Type":"ContainerStarted","Data":"b0afd08c4fb4a36d4cce07f09f34a7ea7da6acfa10de38f749c47249d1067cb7"} Jan 05 21:01:57 crc kubenswrapper[4754]: I0105 21:01:57.644285 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-kpmfj" event={"ID":"09367f79-31eb-4f9f-a630-fb1dbbfd4e39","Type":"ContainerStarted","Data":"65092a383af742ab7c90fa1989182644575c0582d4937a0f48ab7a76f67dd3ca"} Jan 05 21:01:57 crc kubenswrapper[4754]: I0105 21:01:57.685928 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-kpmfj" podStartSLOduration=3.012898331 podStartE2EDuration="3.685900653s" podCreationTimestamp="2026-01-05 21:01:54 +0000 UTC" 
firstStartedPulling="2026-01-05 21:01:55.669933427 +0000 UTC m=+3402.379117311" lastFinishedPulling="2026-01-05 21:01:56.342935729 +0000 UTC m=+3403.052119633" observedRunningTime="2026-01-05 21:01:57.668208746 +0000 UTC m=+3404.377392640" watchObservedRunningTime="2026-01-05 21:01:57.685900653 +0000 UTC m=+3404.395084547" Jan 05 21:02:14 crc kubenswrapper[4754]: I0105 21:02:14.894038 4754 generic.go:334] "Generic (PLEG): container finished" podID="09367f79-31eb-4f9f-a630-fb1dbbfd4e39" containerID="65092a383af742ab7c90fa1989182644575c0582d4937a0f48ab7a76f67dd3ca" exitCode=0 Jan 05 21:02:14 crc kubenswrapper[4754]: I0105 21:02:14.894155 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-kpmfj" event={"ID":"09367f79-31eb-4f9f-a630-fb1dbbfd4e39","Type":"ContainerDied","Data":"65092a383af742ab7c90fa1989182644575c0582d4937a0f48ab7a76f67dd3ca"} Jan 05 21:02:16 crc kubenswrapper[4754]: I0105 21:02:16.461935 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-kpmfj" Jan 05 21:02:16 crc kubenswrapper[4754]: I0105 21:02:16.595824 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/09367f79-31eb-4f9f-a630-fb1dbbfd4e39-ssh-key\") pod \"09367f79-31eb-4f9f-a630-fb1dbbfd4e39\" (UID: \"09367f79-31eb-4f9f-a630-fb1dbbfd4e39\") " Jan 05 21:02:16 crc kubenswrapper[4754]: I0105 21:02:16.596059 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09367f79-31eb-4f9f-a630-fb1dbbfd4e39-inventory\") pod \"09367f79-31eb-4f9f-a630-fb1dbbfd4e39\" (UID: \"09367f79-31eb-4f9f-a630-fb1dbbfd4e39\") " Jan 05 21:02:16 crc kubenswrapper[4754]: I0105 21:02:16.596209 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8xfq\" (UniqueName: \"kubernetes.io/projected/09367f79-31eb-4f9f-a630-fb1dbbfd4e39-kube-api-access-b8xfq\") pod \"09367f79-31eb-4f9f-a630-fb1dbbfd4e39\" (UID: \"09367f79-31eb-4f9f-a630-fb1dbbfd4e39\") " Jan 05 21:02:16 crc kubenswrapper[4754]: I0105 21:02:16.596276 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/09367f79-31eb-4f9f-a630-fb1dbbfd4e39-logging-compute-config-data-0\") pod \"09367f79-31eb-4f9f-a630-fb1dbbfd4e39\" (UID: \"09367f79-31eb-4f9f-a630-fb1dbbfd4e39\") " Jan 05 21:02:16 crc kubenswrapper[4754]: I0105 21:02:16.596337 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/09367f79-31eb-4f9f-a630-fb1dbbfd4e39-logging-compute-config-data-1\") pod \"09367f79-31eb-4f9f-a630-fb1dbbfd4e39\" (UID: \"09367f79-31eb-4f9f-a630-fb1dbbfd4e39\") " Jan 05 21:02:16 crc kubenswrapper[4754]: I0105 21:02:16.603791 4754 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09367f79-31eb-4f9f-a630-fb1dbbfd4e39-kube-api-access-b8xfq" (OuterVolumeSpecName: "kube-api-access-b8xfq") pod "09367f79-31eb-4f9f-a630-fb1dbbfd4e39" (UID: "09367f79-31eb-4f9f-a630-fb1dbbfd4e39"). InnerVolumeSpecName "kube-api-access-b8xfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:02:16 crc kubenswrapper[4754]: I0105 21:02:16.631467 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09367f79-31eb-4f9f-a630-fb1dbbfd4e39-logging-compute-config-data-1" (OuterVolumeSpecName: "logging-compute-config-data-1") pod "09367f79-31eb-4f9f-a630-fb1dbbfd4e39" (UID: "09367f79-31eb-4f9f-a630-fb1dbbfd4e39"). InnerVolumeSpecName "logging-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:02:16 crc kubenswrapper[4754]: I0105 21:02:16.652878 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09367f79-31eb-4f9f-a630-fb1dbbfd4e39-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "09367f79-31eb-4f9f-a630-fb1dbbfd4e39" (UID: "09367f79-31eb-4f9f-a630-fb1dbbfd4e39"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:02:16 crc kubenswrapper[4754]: I0105 21:02:16.658260 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09367f79-31eb-4f9f-a630-fb1dbbfd4e39-logging-compute-config-data-0" (OuterVolumeSpecName: "logging-compute-config-data-0") pod "09367f79-31eb-4f9f-a630-fb1dbbfd4e39" (UID: "09367f79-31eb-4f9f-a630-fb1dbbfd4e39"). InnerVolumeSpecName "logging-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:02:16 crc kubenswrapper[4754]: I0105 21:02:16.659138 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09367f79-31eb-4f9f-a630-fb1dbbfd4e39-inventory" (OuterVolumeSpecName: "inventory") pod "09367f79-31eb-4f9f-a630-fb1dbbfd4e39" (UID: "09367f79-31eb-4f9f-a630-fb1dbbfd4e39"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:02:16 crc kubenswrapper[4754]: I0105 21:02:16.699606 4754 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09367f79-31eb-4f9f-a630-fb1dbbfd4e39-inventory\") on node \"crc\" DevicePath \"\"" Jan 05 21:02:16 crc kubenswrapper[4754]: I0105 21:02:16.699643 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8xfq\" (UniqueName: \"kubernetes.io/projected/09367f79-31eb-4f9f-a630-fb1dbbfd4e39-kube-api-access-b8xfq\") on node \"crc\" DevicePath \"\"" Jan 05 21:02:16 crc kubenswrapper[4754]: I0105 21:02:16.699661 4754 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/09367f79-31eb-4f9f-a630-fb1dbbfd4e39-logging-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 05 21:02:16 crc kubenswrapper[4754]: I0105 21:02:16.699674 4754 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/09367f79-31eb-4f9f-a630-fb1dbbfd4e39-logging-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 05 21:02:16 crc kubenswrapper[4754]: I0105 21:02:16.699688 4754 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/09367f79-31eb-4f9f-a630-fb1dbbfd4e39-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 05 21:02:16 crc kubenswrapper[4754]: I0105 21:02:16.926935 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-kpmfj" event={"ID":"09367f79-31eb-4f9f-a630-fb1dbbfd4e39","Type":"ContainerDied","Data":"b0afd08c4fb4a36d4cce07f09f34a7ea7da6acfa10de38f749c47249d1067cb7"} Jan 05 21:02:16 crc kubenswrapper[4754]: I0105 21:02:16.926982 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0afd08c4fb4a36d4cce07f09f34a7ea7da6acfa10de38f749c47249d1067cb7" Jan 05 21:02:16 crc kubenswrapper[4754]: I0105 21:02:16.927068 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-kpmfj" Jan 05 21:02:18 crc kubenswrapper[4754]: I0105 21:02:18.109113 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:02:18 crc kubenswrapper[4754]: I0105 21:02:18.109526 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:02:18 crc kubenswrapper[4754]: I0105 21:02:18.109592 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" Jan 05 21:02:18 crc kubenswrapper[4754]: I0105 21:02:18.110974 4754 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4e81ca8d8deccf633779c053716ddb749d0fb72396723c09a79ff6f4c7a247ef"} pod="openshift-machine-config-operator/machine-config-daemon-pkzls" containerMessage="Container machine-config-daemon failed 
liveness probe, will be restarted" Jan 05 21:02:18 crc kubenswrapper[4754]: I0105 21:02:18.111085 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" containerID="cri-o://4e81ca8d8deccf633779c053716ddb749d0fb72396723c09a79ff6f4c7a247ef" gracePeriod=600 Jan 05 21:02:18 crc kubenswrapper[4754]: I0105 21:02:18.964940 4754 generic.go:334] "Generic (PLEG): container finished" podID="1ce145f2-f010-4086-963c-23e68ff9e280" containerID="4e81ca8d8deccf633779c053716ddb749d0fb72396723c09a79ff6f4c7a247ef" exitCode=0 Jan 05 21:02:18 crc kubenswrapper[4754]: I0105 21:02:18.965348 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" event={"ID":"1ce145f2-f010-4086-963c-23e68ff9e280","Type":"ContainerDied","Data":"4e81ca8d8deccf633779c053716ddb749d0fb72396723c09a79ff6f4c7a247ef"} Jan 05 21:02:18 crc kubenswrapper[4754]: I0105 21:02:18.965373 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" event={"ID":"1ce145f2-f010-4086-963c-23e68ff9e280","Type":"ContainerStarted","Data":"b5e375dbb1094d493794455a5e78e34fe40df46b793ea7d56c19d264a004f860"} Jan 05 21:02:18 crc kubenswrapper[4754]: I0105 21:02:18.965387 4754 scope.go:117] "RemoveContainer" containerID="526b0024b17a667d6699c7772274dcfecf339561520552eaa447d0a886490a5d" Jan 05 21:04:18 crc kubenswrapper[4754]: I0105 21:04:18.109841 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:04:18 crc kubenswrapper[4754]: I0105 21:04:18.110652 4754 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:04:48 crc kubenswrapper[4754]: I0105 21:04:48.108895 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:04:48 crc kubenswrapper[4754]: I0105 21:04:48.109484 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:05:13 crc kubenswrapper[4754]: I0105 21:05:13.006725 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-6ff666bbf9-t252x" podUID="2ce48436-7086-4501-9b9d-952b965fb028" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 05 21:05:18 crc kubenswrapper[4754]: I0105 21:05:18.109950 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:05:18 crc kubenswrapper[4754]: I0105 21:05:18.110563 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:05:18 crc kubenswrapper[4754]: I0105 21:05:18.110620 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" Jan 05 21:05:18 crc kubenswrapper[4754]: I0105 21:05:18.112015 4754 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b5e375dbb1094d493794455a5e78e34fe40df46b793ea7d56c19d264a004f860"} pod="openshift-machine-config-operator/machine-config-daemon-pkzls" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 05 21:05:18 crc kubenswrapper[4754]: I0105 21:05:18.112118 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" containerID="cri-o://b5e375dbb1094d493794455a5e78e34fe40df46b793ea7d56c19d264a004f860" gracePeriod=600 Jan 05 21:05:18 crc kubenswrapper[4754]: E0105 21:05:18.244030 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:05:18 crc kubenswrapper[4754]: I0105 21:05:18.751282 4754 generic.go:334] "Generic (PLEG): container finished" podID="1ce145f2-f010-4086-963c-23e68ff9e280" containerID="b5e375dbb1094d493794455a5e78e34fe40df46b793ea7d56c19d264a004f860" exitCode=0 Jan 05 21:05:18 crc kubenswrapper[4754]: I0105 21:05:18.751388 4754 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" event={"ID":"1ce145f2-f010-4086-963c-23e68ff9e280","Type":"ContainerDied","Data":"b5e375dbb1094d493794455a5e78e34fe40df46b793ea7d56c19d264a004f860"} Jan 05 21:05:18 crc kubenswrapper[4754]: I0105 21:05:18.751665 4754 scope.go:117] "RemoveContainer" containerID="4e81ca8d8deccf633779c053716ddb749d0fb72396723c09a79ff6f4c7a247ef" Jan 05 21:05:18 crc kubenswrapper[4754]: I0105 21:05:18.752646 4754 scope.go:117] "RemoveContainer" containerID="b5e375dbb1094d493794455a5e78e34fe40df46b793ea7d56c19d264a004f860" Jan 05 21:05:18 crc kubenswrapper[4754]: E0105 21:05:18.753115 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:05:32 crc kubenswrapper[4754]: I0105 21:05:32.589621 4754 scope.go:117] "RemoveContainer" containerID="b5e375dbb1094d493794455a5e78e34fe40df46b793ea7d56c19d264a004f860" Jan 05 21:05:32 crc kubenswrapper[4754]: E0105 21:05:32.590453 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:05:45 crc kubenswrapper[4754]: I0105 21:05:45.588541 4754 scope.go:117] "RemoveContainer" containerID="b5e375dbb1094d493794455a5e78e34fe40df46b793ea7d56c19d264a004f860" Jan 05 21:05:45 crc kubenswrapper[4754]: E0105 21:05:45.589490 4754 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:05:58 crc kubenswrapper[4754]: I0105 21:05:58.588159 4754 scope.go:117] "RemoveContainer" containerID="b5e375dbb1094d493794455a5e78e34fe40df46b793ea7d56c19d264a004f860" Jan 05 21:05:58 crc kubenswrapper[4754]: E0105 21:05:58.588876 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:05:59 crc kubenswrapper[4754]: I0105 21:05:59.845958 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bxm6v"] Jan 05 21:05:59 crc kubenswrapper[4754]: E0105 21:05:59.846891 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09367f79-31eb-4f9f-a630-fb1dbbfd4e39" containerName="logging-edpm-deployment-openstack-edpm-ipam" Jan 05 21:05:59 crc kubenswrapper[4754]: I0105 21:05:59.846907 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="09367f79-31eb-4f9f-a630-fb1dbbfd4e39" containerName="logging-edpm-deployment-openstack-edpm-ipam" Jan 05 21:05:59 crc kubenswrapper[4754]: I0105 21:05:59.847538 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="09367f79-31eb-4f9f-a630-fb1dbbfd4e39" containerName="logging-edpm-deployment-openstack-edpm-ipam" Jan 05 21:05:59 crc kubenswrapper[4754]: I0105 
21:05:59.855414 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bxm6v" Jan 05 21:05:59 crc kubenswrapper[4754]: I0105 21:05:59.865259 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bxm6v"] Jan 05 21:05:59 crc kubenswrapper[4754]: I0105 21:05:59.961158 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79xlr\" (UniqueName: \"kubernetes.io/projected/f5ddf70f-1531-47ff-bb96-02b5bf7639e5-kube-api-access-79xlr\") pod \"certified-operators-bxm6v\" (UID: \"f5ddf70f-1531-47ff-bb96-02b5bf7639e5\") " pod="openshift-marketplace/certified-operators-bxm6v" Jan 05 21:05:59 crc kubenswrapper[4754]: I0105 21:05:59.961239 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5ddf70f-1531-47ff-bb96-02b5bf7639e5-utilities\") pod \"certified-operators-bxm6v\" (UID: \"f5ddf70f-1531-47ff-bb96-02b5bf7639e5\") " pod="openshift-marketplace/certified-operators-bxm6v" Jan 05 21:05:59 crc kubenswrapper[4754]: I0105 21:05:59.961410 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5ddf70f-1531-47ff-bb96-02b5bf7639e5-catalog-content\") pod \"certified-operators-bxm6v\" (UID: \"f5ddf70f-1531-47ff-bb96-02b5bf7639e5\") " pod="openshift-marketplace/certified-operators-bxm6v" Jan 05 21:06:00 crc kubenswrapper[4754]: I0105 21:06:00.063603 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79xlr\" (UniqueName: \"kubernetes.io/projected/f5ddf70f-1531-47ff-bb96-02b5bf7639e5-kube-api-access-79xlr\") pod \"certified-operators-bxm6v\" (UID: \"f5ddf70f-1531-47ff-bb96-02b5bf7639e5\") " pod="openshift-marketplace/certified-operators-bxm6v" Jan 05 21:06:00 crc 
kubenswrapper[4754]: I0105 21:06:00.063684 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5ddf70f-1531-47ff-bb96-02b5bf7639e5-utilities\") pod \"certified-operators-bxm6v\" (UID: \"f5ddf70f-1531-47ff-bb96-02b5bf7639e5\") " pod="openshift-marketplace/certified-operators-bxm6v" Jan 05 21:06:00 crc kubenswrapper[4754]: I0105 21:06:00.063747 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5ddf70f-1531-47ff-bb96-02b5bf7639e5-catalog-content\") pod \"certified-operators-bxm6v\" (UID: \"f5ddf70f-1531-47ff-bb96-02b5bf7639e5\") " pod="openshift-marketplace/certified-operators-bxm6v" Jan 05 21:06:00 crc kubenswrapper[4754]: I0105 21:06:00.064214 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5ddf70f-1531-47ff-bb96-02b5bf7639e5-utilities\") pod \"certified-operators-bxm6v\" (UID: \"f5ddf70f-1531-47ff-bb96-02b5bf7639e5\") " pod="openshift-marketplace/certified-operators-bxm6v" Jan 05 21:06:00 crc kubenswrapper[4754]: I0105 21:06:00.064334 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5ddf70f-1531-47ff-bb96-02b5bf7639e5-catalog-content\") pod \"certified-operators-bxm6v\" (UID: \"f5ddf70f-1531-47ff-bb96-02b5bf7639e5\") " pod="openshift-marketplace/certified-operators-bxm6v" Jan 05 21:06:00 crc kubenswrapper[4754]: I0105 21:06:00.085529 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79xlr\" (UniqueName: \"kubernetes.io/projected/f5ddf70f-1531-47ff-bb96-02b5bf7639e5-kube-api-access-79xlr\") pod \"certified-operators-bxm6v\" (UID: \"f5ddf70f-1531-47ff-bb96-02b5bf7639e5\") " pod="openshift-marketplace/certified-operators-bxm6v" Jan 05 21:06:00 crc kubenswrapper[4754]: I0105 21:06:00.191535 4754 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bxm6v" Jan 05 21:06:00 crc kubenswrapper[4754]: I0105 21:06:00.704366 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bxm6v"] Jan 05 21:06:00 crc kubenswrapper[4754]: W0105 21:06:00.706087 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5ddf70f_1531_47ff_bb96_02b5bf7639e5.slice/crio-4b211fab938ec4e708d80f8cc46f3361031044022530fae3e29260cc0ec52fb6 WatchSource:0}: Error finding container 4b211fab938ec4e708d80f8cc46f3361031044022530fae3e29260cc0ec52fb6: Status 404 returned error can't find the container with id 4b211fab938ec4e708d80f8cc46f3361031044022530fae3e29260cc0ec52fb6 Jan 05 21:06:01 crc kubenswrapper[4754]: I0105 21:06:01.285096 4754 generic.go:334] "Generic (PLEG): container finished" podID="f5ddf70f-1531-47ff-bb96-02b5bf7639e5" containerID="cc8b9aa021019f00b2a496d1664a6e27c3e731cec7f55d78bfe9286682a038ce" exitCode=0 Jan 05 21:06:01 crc kubenswrapper[4754]: I0105 21:06:01.285198 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bxm6v" event={"ID":"f5ddf70f-1531-47ff-bb96-02b5bf7639e5","Type":"ContainerDied","Data":"cc8b9aa021019f00b2a496d1664a6e27c3e731cec7f55d78bfe9286682a038ce"} Jan 05 21:06:01 crc kubenswrapper[4754]: I0105 21:06:01.285560 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bxm6v" event={"ID":"f5ddf70f-1531-47ff-bb96-02b5bf7639e5","Type":"ContainerStarted","Data":"4b211fab938ec4e708d80f8cc46f3361031044022530fae3e29260cc0ec52fb6"} Jan 05 21:06:01 crc kubenswrapper[4754]: I0105 21:06:01.287920 4754 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 05 21:06:02 crc kubenswrapper[4754]: I0105 21:06:02.231723 4754 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-x9djk"] Jan 05 21:06:02 crc kubenswrapper[4754]: I0105 21:06:02.239349 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x9djk" Jan 05 21:06:02 crc kubenswrapper[4754]: I0105 21:06:02.251505 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x9djk"] Jan 05 21:06:02 crc kubenswrapper[4754]: I0105 21:06:02.321995 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55be0c81-4005-485e-b192-0ed393f34cb1-utilities\") pod \"redhat-marketplace-x9djk\" (UID: \"55be0c81-4005-485e-b192-0ed393f34cb1\") " pod="openshift-marketplace/redhat-marketplace-x9djk" Jan 05 21:06:02 crc kubenswrapper[4754]: I0105 21:06:02.323144 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55be0c81-4005-485e-b192-0ed393f34cb1-catalog-content\") pod \"redhat-marketplace-x9djk\" (UID: \"55be0c81-4005-485e-b192-0ed393f34cb1\") " pod="openshift-marketplace/redhat-marketplace-x9djk" Jan 05 21:06:02 crc kubenswrapper[4754]: I0105 21:06:02.323424 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7n5h\" (UniqueName: \"kubernetes.io/projected/55be0c81-4005-485e-b192-0ed393f34cb1-kube-api-access-g7n5h\") pod \"redhat-marketplace-x9djk\" (UID: \"55be0c81-4005-485e-b192-0ed393f34cb1\") " pod="openshift-marketplace/redhat-marketplace-x9djk" Jan 05 21:06:02 crc kubenswrapper[4754]: I0105 21:06:02.425063 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55be0c81-4005-485e-b192-0ed393f34cb1-utilities\") pod \"redhat-marketplace-x9djk\" (UID: 
\"55be0c81-4005-485e-b192-0ed393f34cb1\") " pod="openshift-marketplace/redhat-marketplace-x9djk" Jan 05 21:06:02 crc kubenswrapper[4754]: I0105 21:06:02.425283 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55be0c81-4005-485e-b192-0ed393f34cb1-catalog-content\") pod \"redhat-marketplace-x9djk\" (UID: \"55be0c81-4005-485e-b192-0ed393f34cb1\") " pod="openshift-marketplace/redhat-marketplace-x9djk" Jan 05 21:06:02 crc kubenswrapper[4754]: I0105 21:06:02.425354 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7n5h\" (UniqueName: \"kubernetes.io/projected/55be0c81-4005-485e-b192-0ed393f34cb1-kube-api-access-g7n5h\") pod \"redhat-marketplace-x9djk\" (UID: \"55be0c81-4005-485e-b192-0ed393f34cb1\") " pod="openshift-marketplace/redhat-marketplace-x9djk" Jan 05 21:06:02 crc kubenswrapper[4754]: I0105 21:06:02.425912 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55be0c81-4005-485e-b192-0ed393f34cb1-utilities\") pod \"redhat-marketplace-x9djk\" (UID: \"55be0c81-4005-485e-b192-0ed393f34cb1\") " pod="openshift-marketplace/redhat-marketplace-x9djk" Jan 05 21:06:02 crc kubenswrapper[4754]: I0105 21:06:02.426151 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55be0c81-4005-485e-b192-0ed393f34cb1-catalog-content\") pod \"redhat-marketplace-x9djk\" (UID: \"55be0c81-4005-485e-b192-0ed393f34cb1\") " pod="openshift-marketplace/redhat-marketplace-x9djk" Jan 05 21:06:02 crc kubenswrapper[4754]: I0105 21:06:02.445246 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7n5h\" (UniqueName: \"kubernetes.io/projected/55be0c81-4005-485e-b192-0ed393f34cb1-kube-api-access-g7n5h\") pod \"redhat-marketplace-x9djk\" (UID: 
\"55be0c81-4005-485e-b192-0ed393f34cb1\") " pod="openshift-marketplace/redhat-marketplace-x9djk" Jan 05 21:06:02 crc kubenswrapper[4754]: I0105 21:06:02.595922 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x9djk" Jan 05 21:06:03 crc kubenswrapper[4754]: I0105 21:06:03.117887 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x9djk"] Jan 05 21:06:03 crc kubenswrapper[4754]: W0105 21:06:03.117891 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55be0c81_4005_485e_b192_0ed393f34cb1.slice/crio-513d2389e341294dbff7d6b836c1f35a7ea7ae6d0c3cbfd152e36f156d6b9007 WatchSource:0}: Error finding container 513d2389e341294dbff7d6b836c1f35a7ea7ae6d0c3cbfd152e36f156d6b9007: Status 404 returned error can't find the container with id 513d2389e341294dbff7d6b836c1f35a7ea7ae6d0c3cbfd152e36f156d6b9007 Jan 05 21:06:03 crc kubenswrapper[4754]: I0105 21:06:03.308572 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bxm6v" event={"ID":"f5ddf70f-1531-47ff-bb96-02b5bf7639e5","Type":"ContainerStarted","Data":"2f9bb6b6f6f8b224d198940de3096154a96d2fdb1621cd7e18c39b41877a15de"} Jan 05 21:06:03 crc kubenswrapper[4754]: I0105 21:06:03.311379 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x9djk" event={"ID":"55be0c81-4005-485e-b192-0ed393f34cb1","Type":"ContainerStarted","Data":"513d2389e341294dbff7d6b836c1f35a7ea7ae6d0c3cbfd152e36f156d6b9007"} Jan 05 21:06:04 crc kubenswrapper[4754]: I0105 21:06:04.335930 4754 generic.go:334] "Generic (PLEG): container finished" podID="f5ddf70f-1531-47ff-bb96-02b5bf7639e5" containerID="2f9bb6b6f6f8b224d198940de3096154a96d2fdb1621cd7e18c39b41877a15de" exitCode=0 Jan 05 21:06:04 crc kubenswrapper[4754]: I0105 21:06:04.335971 4754 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/certified-operators-bxm6v" event={"ID":"f5ddf70f-1531-47ff-bb96-02b5bf7639e5","Type":"ContainerDied","Data":"2f9bb6b6f6f8b224d198940de3096154a96d2fdb1621cd7e18c39b41877a15de"} Jan 05 21:06:04 crc kubenswrapper[4754]: I0105 21:06:04.340243 4754 generic.go:334] "Generic (PLEG): container finished" podID="55be0c81-4005-485e-b192-0ed393f34cb1" containerID="72df8b0b9465ecdf048701720167cf7a27a207d5fc6a440e794329c4c4e74a59" exitCode=0 Jan 05 21:06:04 crc kubenswrapper[4754]: I0105 21:06:04.340349 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x9djk" event={"ID":"55be0c81-4005-485e-b192-0ed393f34cb1","Type":"ContainerDied","Data":"72df8b0b9465ecdf048701720167cf7a27a207d5fc6a440e794329c4c4e74a59"} Jan 05 21:06:05 crc kubenswrapper[4754]: I0105 21:06:05.354549 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bxm6v" event={"ID":"f5ddf70f-1531-47ff-bb96-02b5bf7639e5","Type":"ContainerStarted","Data":"d0a4efa86c65e1eda66fad6f13d3e73a404aa5847520ddee59ed4aef6e86ef1d"} Jan 05 21:06:05 crc kubenswrapper[4754]: I0105 21:06:05.383997 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bxm6v" podStartSLOduration=2.790846494 podStartE2EDuration="6.383975665s" podCreationTimestamp="2026-01-05 21:05:59 +0000 UTC" firstStartedPulling="2026-01-05 21:06:01.287708427 +0000 UTC m=+3647.996892301" lastFinishedPulling="2026-01-05 21:06:04.880837598 +0000 UTC m=+3651.590021472" observedRunningTime="2026-01-05 21:06:05.372384939 +0000 UTC m=+3652.081568823" watchObservedRunningTime="2026-01-05 21:06:05.383975665 +0000 UTC m=+3652.093159549" Jan 05 21:06:06 crc kubenswrapper[4754]: I0105 21:06:06.372776 4754 generic.go:334] "Generic (PLEG): container finished" podID="55be0c81-4005-485e-b192-0ed393f34cb1" 
containerID="e18c0d467c185be0f3245a16d974fee7366b12da666517bb91ca008ccd15d3bb" exitCode=0 Jan 05 21:06:06 crc kubenswrapper[4754]: I0105 21:06:06.372826 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x9djk" event={"ID":"55be0c81-4005-485e-b192-0ed393f34cb1","Type":"ContainerDied","Data":"e18c0d467c185be0f3245a16d974fee7366b12da666517bb91ca008ccd15d3bb"} Jan 05 21:06:07 crc kubenswrapper[4754]: I0105 21:06:07.391466 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x9djk" event={"ID":"55be0c81-4005-485e-b192-0ed393f34cb1","Type":"ContainerStarted","Data":"061e8dd7f5abf597c59f5d9e5577668500549eb5cdc4ff5f71b9cedd69f71492"} Jan 05 21:06:07 crc kubenswrapper[4754]: I0105 21:06:07.411809 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-x9djk" podStartSLOduration=2.967031612 podStartE2EDuration="5.411778297s" podCreationTimestamp="2026-01-05 21:06:02 +0000 UTC" firstStartedPulling="2026-01-05 21:06:04.343156098 +0000 UTC m=+3651.052340012" lastFinishedPulling="2026-01-05 21:06:06.787902813 +0000 UTC m=+3653.497086697" observedRunningTime="2026-01-05 21:06:07.409054406 +0000 UTC m=+3654.118238320" watchObservedRunningTime="2026-01-05 21:06:07.411778297 +0000 UTC m=+3654.120962181" Jan 05 21:06:10 crc kubenswrapper[4754]: I0105 21:06:10.193389 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bxm6v" Jan 05 21:06:10 crc kubenswrapper[4754]: I0105 21:06:10.193761 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bxm6v" Jan 05 21:06:11 crc kubenswrapper[4754]: I0105 21:06:11.263365 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-bxm6v" podUID="f5ddf70f-1531-47ff-bb96-02b5bf7639e5" containerName="registry-server" 
probeResult="failure" output=< Jan 05 21:06:11 crc kubenswrapper[4754]: timeout: failed to connect service ":50051" within 1s Jan 05 21:06:11 crc kubenswrapper[4754]: > Jan 05 21:06:12 crc kubenswrapper[4754]: I0105 21:06:12.596886 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-x9djk" Jan 05 21:06:12 crc kubenswrapper[4754]: I0105 21:06:12.597343 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-x9djk" Jan 05 21:06:12 crc kubenswrapper[4754]: I0105 21:06:12.695367 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-x9djk" Jan 05 21:06:13 crc kubenswrapper[4754]: I0105 21:06:13.566794 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-x9djk" Jan 05 21:06:13 crc kubenswrapper[4754]: I0105 21:06:13.589241 4754 scope.go:117] "RemoveContainer" containerID="b5e375dbb1094d493794455a5e78e34fe40df46b793ea7d56c19d264a004f860" Jan 05 21:06:13 crc kubenswrapper[4754]: E0105 21:06:13.589989 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:06:13 crc kubenswrapper[4754]: I0105 21:06:13.645822 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x9djk"] Jan 05 21:06:15 crc kubenswrapper[4754]: I0105 21:06:15.511545 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-x9djk" 
podUID="55be0c81-4005-485e-b192-0ed393f34cb1" containerName="registry-server" containerID="cri-o://061e8dd7f5abf597c59f5d9e5577668500549eb5cdc4ff5f71b9cedd69f71492" gracePeriod=2 Jan 05 21:06:16 crc kubenswrapper[4754]: I0105 21:06:16.060820 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x9djk" Jan 05 21:06:16 crc kubenswrapper[4754]: I0105 21:06:16.158668 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55be0c81-4005-485e-b192-0ed393f34cb1-utilities\") pod \"55be0c81-4005-485e-b192-0ed393f34cb1\" (UID: \"55be0c81-4005-485e-b192-0ed393f34cb1\") " Jan 05 21:06:16 crc kubenswrapper[4754]: I0105 21:06:16.159016 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55be0c81-4005-485e-b192-0ed393f34cb1-catalog-content\") pod \"55be0c81-4005-485e-b192-0ed393f34cb1\" (UID: \"55be0c81-4005-485e-b192-0ed393f34cb1\") " Jan 05 21:06:16 crc kubenswrapper[4754]: I0105 21:06:16.159123 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7n5h\" (UniqueName: \"kubernetes.io/projected/55be0c81-4005-485e-b192-0ed393f34cb1-kube-api-access-g7n5h\") pod \"55be0c81-4005-485e-b192-0ed393f34cb1\" (UID: \"55be0c81-4005-485e-b192-0ed393f34cb1\") " Jan 05 21:06:16 crc kubenswrapper[4754]: I0105 21:06:16.159572 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55be0c81-4005-485e-b192-0ed393f34cb1-utilities" (OuterVolumeSpecName: "utilities") pod "55be0c81-4005-485e-b192-0ed393f34cb1" (UID: "55be0c81-4005-485e-b192-0ed393f34cb1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:06:16 crc kubenswrapper[4754]: I0105 21:06:16.160093 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55be0c81-4005-485e-b192-0ed393f34cb1-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 21:06:16 crc kubenswrapper[4754]: I0105 21:06:16.179982 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55be0c81-4005-485e-b192-0ed393f34cb1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "55be0c81-4005-485e-b192-0ed393f34cb1" (UID: "55be0c81-4005-485e-b192-0ed393f34cb1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:06:16 crc kubenswrapper[4754]: I0105 21:06:16.181602 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55be0c81-4005-485e-b192-0ed393f34cb1-kube-api-access-g7n5h" (OuterVolumeSpecName: "kube-api-access-g7n5h") pod "55be0c81-4005-485e-b192-0ed393f34cb1" (UID: "55be0c81-4005-485e-b192-0ed393f34cb1"). InnerVolumeSpecName "kube-api-access-g7n5h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:06:16 crc kubenswrapper[4754]: I0105 21:06:16.261997 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7n5h\" (UniqueName: \"kubernetes.io/projected/55be0c81-4005-485e-b192-0ed393f34cb1-kube-api-access-g7n5h\") on node \"crc\" DevicePath \"\"" Jan 05 21:06:16 crc kubenswrapper[4754]: I0105 21:06:16.262035 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55be0c81-4005-485e-b192-0ed393f34cb1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 21:06:16 crc kubenswrapper[4754]: I0105 21:06:16.524795 4754 generic.go:334] "Generic (PLEG): container finished" podID="55be0c81-4005-485e-b192-0ed393f34cb1" containerID="061e8dd7f5abf597c59f5d9e5577668500549eb5cdc4ff5f71b9cedd69f71492" exitCode=0 Jan 05 21:06:16 crc kubenswrapper[4754]: I0105 21:06:16.524837 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x9djk" event={"ID":"55be0c81-4005-485e-b192-0ed393f34cb1","Type":"ContainerDied","Data":"061e8dd7f5abf597c59f5d9e5577668500549eb5cdc4ff5f71b9cedd69f71492"} Jan 05 21:06:16 crc kubenswrapper[4754]: I0105 21:06:16.524868 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x9djk" event={"ID":"55be0c81-4005-485e-b192-0ed393f34cb1","Type":"ContainerDied","Data":"513d2389e341294dbff7d6b836c1f35a7ea7ae6d0c3cbfd152e36f156d6b9007"} Jan 05 21:06:16 crc kubenswrapper[4754]: I0105 21:06:16.524895 4754 scope.go:117] "RemoveContainer" containerID="061e8dd7f5abf597c59f5d9e5577668500549eb5cdc4ff5f71b9cedd69f71492" Jan 05 21:06:16 crc kubenswrapper[4754]: I0105 21:06:16.524909 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x9djk" Jan 05 21:06:16 crc kubenswrapper[4754]: I0105 21:06:16.548176 4754 scope.go:117] "RemoveContainer" containerID="e18c0d467c185be0f3245a16d974fee7366b12da666517bb91ca008ccd15d3bb" Jan 05 21:06:16 crc kubenswrapper[4754]: I0105 21:06:16.578635 4754 scope.go:117] "RemoveContainer" containerID="72df8b0b9465ecdf048701720167cf7a27a207d5fc6a440e794329c4c4e74a59" Jan 05 21:06:16 crc kubenswrapper[4754]: I0105 21:06:16.583358 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x9djk"] Jan 05 21:06:16 crc kubenswrapper[4754]: I0105 21:06:16.596974 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-x9djk"] Jan 05 21:06:16 crc kubenswrapper[4754]: I0105 21:06:16.632793 4754 scope.go:117] "RemoveContainer" containerID="061e8dd7f5abf597c59f5d9e5577668500549eb5cdc4ff5f71b9cedd69f71492" Jan 05 21:06:16 crc kubenswrapper[4754]: E0105 21:06:16.633589 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"061e8dd7f5abf597c59f5d9e5577668500549eb5cdc4ff5f71b9cedd69f71492\": container with ID starting with 061e8dd7f5abf597c59f5d9e5577668500549eb5cdc4ff5f71b9cedd69f71492 not found: ID does not exist" containerID="061e8dd7f5abf597c59f5d9e5577668500549eb5cdc4ff5f71b9cedd69f71492" Jan 05 21:06:16 crc kubenswrapper[4754]: I0105 21:06:16.633660 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"061e8dd7f5abf597c59f5d9e5577668500549eb5cdc4ff5f71b9cedd69f71492"} err="failed to get container status \"061e8dd7f5abf597c59f5d9e5577668500549eb5cdc4ff5f71b9cedd69f71492\": rpc error: code = NotFound desc = could not find container \"061e8dd7f5abf597c59f5d9e5577668500549eb5cdc4ff5f71b9cedd69f71492\": container with ID starting with 061e8dd7f5abf597c59f5d9e5577668500549eb5cdc4ff5f71b9cedd69f71492 not found: 
ID does not exist" Jan 05 21:06:16 crc kubenswrapper[4754]: I0105 21:06:16.633710 4754 scope.go:117] "RemoveContainer" containerID="e18c0d467c185be0f3245a16d974fee7366b12da666517bb91ca008ccd15d3bb" Jan 05 21:06:16 crc kubenswrapper[4754]: E0105 21:06:16.634344 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e18c0d467c185be0f3245a16d974fee7366b12da666517bb91ca008ccd15d3bb\": container with ID starting with e18c0d467c185be0f3245a16d974fee7366b12da666517bb91ca008ccd15d3bb not found: ID does not exist" containerID="e18c0d467c185be0f3245a16d974fee7366b12da666517bb91ca008ccd15d3bb" Jan 05 21:06:16 crc kubenswrapper[4754]: I0105 21:06:16.634389 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e18c0d467c185be0f3245a16d974fee7366b12da666517bb91ca008ccd15d3bb"} err="failed to get container status \"e18c0d467c185be0f3245a16d974fee7366b12da666517bb91ca008ccd15d3bb\": rpc error: code = NotFound desc = could not find container \"e18c0d467c185be0f3245a16d974fee7366b12da666517bb91ca008ccd15d3bb\": container with ID starting with e18c0d467c185be0f3245a16d974fee7366b12da666517bb91ca008ccd15d3bb not found: ID does not exist" Jan 05 21:06:16 crc kubenswrapper[4754]: I0105 21:06:16.634432 4754 scope.go:117] "RemoveContainer" containerID="72df8b0b9465ecdf048701720167cf7a27a207d5fc6a440e794329c4c4e74a59" Jan 05 21:06:16 crc kubenswrapper[4754]: E0105 21:06:16.637912 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72df8b0b9465ecdf048701720167cf7a27a207d5fc6a440e794329c4c4e74a59\": container with ID starting with 72df8b0b9465ecdf048701720167cf7a27a207d5fc6a440e794329c4c4e74a59 not found: ID does not exist" containerID="72df8b0b9465ecdf048701720167cf7a27a207d5fc6a440e794329c4c4e74a59" Jan 05 21:06:16 crc kubenswrapper[4754]: I0105 21:06:16.638077 4754 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72df8b0b9465ecdf048701720167cf7a27a207d5fc6a440e794329c4c4e74a59"} err="failed to get container status \"72df8b0b9465ecdf048701720167cf7a27a207d5fc6a440e794329c4c4e74a59\": rpc error: code = NotFound desc = could not find container \"72df8b0b9465ecdf048701720167cf7a27a207d5fc6a440e794329c4c4e74a59\": container with ID starting with 72df8b0b9465ecdf048701720167cf7a27a207d5fc6a440e794329c4c4e74a59 not found: ID does not exist" Jan 05 21:06:17 crc kubenswrapper[4754]: I0105 21:06:17.611984 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55be0c81-4005-485e-b192-0ed393f34cb1" path="/var/lib/kubelet/pods/55be0c81-4005-485e-b192-0ed393f34cb1/volumes" Jan 05 21:06:20 crc kubenswrapper[4754]: I0105 21:06:20.276120 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bxm6v" Jan 05 21:06:20 crc kubenswrapper[4754]: I0105 21:06:20.350545 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bxm6v" Jan 05 21:06:20 crc kubenswrapper[4754]: I0105 21:06:20.522002 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bxm6v"] Jan 05 21:06:21 crc kubenswrapper[4754]: I0105 21:06:21.588581 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bxm6v" podUID="f5ddf70f-1531-47ff-bb96-02b5bf7639e5" containerName="registry-server" containerID="cri-o://d0a4efa86c65e1eda66fad6f13d3e73a404aa5847520ddee59ed4aef6e86ef1d" gracePeriod=2 Jan 05 21:06:22 crc kubenswrapper[4754]: I0105 21:06:22.078555 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bxm6v" Jan 05 21:06:22 crc kubenswrapper[4754]: I0105 21:06:22.132988 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79xlr\" (UniqueName: \"kubernetes.io/projected/f5ddf70f-1531-47ff-bb96-02b5bf7639e5-kube-api-access-79xlr\") pod \"f5ddf70f-1531-47ff-bb96-02b5bf7639e5\" (UID: \"f5ddf70f-1531-47ff-bb96-02b5bf7639e5\") " Jan 05 21:06:22 crc kubenswrapper[4754]: I0105 21:06:22.133394 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5ddf70f-1531-47ff-bb96-02b5bf7639e5-utilities\") pod \"f5ddf70f-1531-47ff-bb96-02b5bf7639e5\" (UID: \"f5ddf70f-1531-47ff-bb96-02b5bf7639e5\") " Jan 05 21:06:22 crc kubenswrapper[4754]: I0105 21:06:22.133528 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5ddf70f-1531-47ff-bb96-02b5bf7639e5-catalog-content\") pod \"f5ddf70f-1531-47ff-bb96-02b5bf7639e5\" (UID: \"f5ddf70f-1531-47ff-bb96-02b5bf7639e5\") " Jan 05 21:06:22 crc kubenswrapper[4754]: I0105 21:06:22.134003 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5ddf70f-1531-47ff-bb96-02b5bf7639e5-utilities" (OuterVolumeSpecName: "utilities") pod "f5ddf70f-1531-47ff-bb96-02b5bf7639e5" (UID: "f5ddf70f-1531-47ff-bb96-02b5bf7639e5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:06:22 crc kubenswrapper[4754]: I0105 21:06:22.134415 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5ddf70f-1531-47ff-bb96-02b5bf7639e5-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 21:06:22 crc kubenswrapper[4754]: I0105 21:06:22.138693 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5ddf70f-1531-47ff-bb96-02b5bf7639e5-kube-api-access-79xlr" (OuterVolumeSpecName: "kube-api-access-79xlr") pod "f5ddf70f-1531-47ff-bb96-02b5bf7639e5" (UID: "f5ddf70f-1531-47ff-bb96-02b5bf7639e5"). InnerVolumeSpecName "kube-api-access-79xlr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:06:22 crc kubenswrapper[4754]: I0105 21:06:22.179103 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5ddf70f-1531-47ff-bb96-02b5bf7639e5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f5ddf70f-1531-47ff-bb96-02b5bf7639e5" (UID: "f5ddf70f-1531-47ff-bb96-02b5bf7639e5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:06:22 crc kubenswrapper[4754]: I0105 21:06:22.236244 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5ddf70f-1531-47ff-bb96-02b5bf7639e5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 21:06:22 crc kubenswrapper[4754]: I0105 21:06:22.236281 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79xlr\" (UniqueName: \"kubernetes.io/projected/f5ddf70f-1531-47ff-bb96-02b5bf7639e5-kube-api-access-79xlr\") on node \"crc\" DevicePath \"\"" Jan 05 21:06:22 crc kubenswrapper[4754]: I0105 21:06:22.603602 4754 generic.go:334] "Generic (PLEG): container finished" podID="f5ddf70f-1531-47ff-bb96-02b5bf7639e5" containerID="d0a4efa86c65e1eda66fad6f13d3e73a404aa5847520ddee59ed4aef6e86ef1d" exitCode=0 Jan 05 21:06:22 crc kubenswrapper[4754]: I0105 21:06:22.603656 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bxm6v" event={"ID":"f5ddf70f-1531-47ff-bb96-02b5bf7639e5","Type":"ContainerDied","Data":"d0a4efa86c65e1eda66fad6f13d3e73a404aa5847520ddee59ed4aef6e86ef1d"} Jan 05 21:06:22 crc kubenswrapper[4754]: I0105 21:06:22.603693 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bxm6v" event={"ID":"f5ddf70f-1531-47ff-bb96-02b5bf7639e5","Type":"ContainerDied","Data":"4b211fab938ec4e708d80f8cc46f3361031044022530fae3e29260cc0ec52fb6"} Jan 05 21:06:22 crc kubenswrapper[4754]: I0105 21:06:22.603688 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bxm6v" Jan 05 21:06:22 crc kubenswrapper[4754]: I0105 21:06:22.603718 4754 scope.go:117] "RemoveContainer" containerID="d0a4efa86c65e1eda66fad6f13d3e73a404aa5847520ddee59ed4aef6e86ef1d" Jan 05 21:06:22 crc kubenswrapper[4754]: I0105 21:06:22.644803 4754 scope.go:117] "RemoveContainer" containerID="2f9bb6b6f6f8b224d198940de3096154a96d2fdb1621cd7e18c39b41877a15de" Jan 05 21:06:22 crc kubenswrapper[4754]: I0105 21:06:22.660516 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bxm6v"] Jan 05 21:06:22 crc kubenswrapper[4754]: I0105 21:06:22.675604 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bxm6v"] Jan 05 21:06:22 crc kubenswrapper[4754]: I0105 21:06:22.681996 4754 scope.go:117] "RemoveContainer" containerID="cc8b9aa021019f00b2a496d1664a6e27c3e731cec7f55d78bfe9286682a038ce" Jan 05 21:06:22 crc kubenswrapper[4754]: I0105 21:06:22.743584 4754 scope.go:117] "RemoveContainer" containerID="d0a4efa86c65e1eda66fad6f13d3e73a404aa5847520ddee59ed4aef6e86ef1d" Jan 05 21:06:22 crc kubenswrapper[4754]: E0105 21:06:22.744054 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0a4efa86c65e1eda66fad6f13d3e73a404aa5847520ddee59ed4aef6e86ef1d\": container with ID starting with d0a4efa86c65e1eda66fad6f13d3e73a404aa5847520ddee59ed4aef6e86ef1d not found: ID does not exist" containerID="d0a4efa86c65e1eda66fad6f13d3e73a404aa5847520ddee59ed4aef6e86ef1d" Jan 05 21:06:22 crc kubenswrapper[4754]: I0105 21:06:22.744120 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0a4efa86c65e1eda66fad6f13d3e73a404aa5847520ddee59ed4aef6e86ef1d"} err="failed to get container status \"d0a4efa86c65e1eda66fad6f13d3e73a404aa5847520ddee59ed4aef6e86ef1d\": rpc error: code = NotFound desc = could not find 
container \"d0a4efa86c65e1eda66fad6f13d3e73a404aa5847520ddee59ed4aef6e86ef1d\": container with ID starting with d0a4efa86c65e1eda66fad6f13d3e73a404aa5847520ddee59ed4aef6e86ef1d not found: ID does not exist" Jan 05 21:06:22 crc kubenswrapper[4754]: I0105 21:06:22.744154 4754 scope.go:117] "RemoveContainer" containerID="2f9bb6b6f6f8b224d198940de3096154a96d2fdb1621cd7e18c39b41877a15de" Jan 05 21:06:22 crc kubenswrapper[4754]: E0105 21:06:22.744672 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f9bb6b6f6f8b224d198940de3096154a96d2fdb1621cd7e18c39b41877a15de\": container with ID starting with 2f9bb6b6f6f8b224d198940de3096154a96d2fdb1621cd7e18c39b41877a15de not found: ID does not exist" containerID="2f9bb6b6f6f8b224d198940de3096154a96d2fdb1621cd7e18c39b41877a15de" Jan 05 21:06:22 crc kubenswrapper[4754]: I0105 21:06:22.744926 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f9bb6b6f6f8b224d198940de3096154a96d2fdb1621cd7e18c39b41877a15de"} err="failed to get container status \"2f9bb6b6f6f8b224d198940de3096154a96d2fdb1621cd7e18c39b41877a15de\": rpc error: code = NotFound desc = could not find container \"2f9bb6b6f6f8b224d198940de3096154a96d2fdb1621cd7e18c39b41877a15de\": container with ID starting with 2f9bb6b6f6f8b224d198940de3096154a96d2fdb1621cd7e18c39b41877a15de not found: ID does not exist" Jan 05 21:06:22 crc kubenswrapper[4754]: I0105 21:06:22.744994 4754 scope.go:117] "RemoveContainer" containerID="cc8b9aa021019f00b2a496d1664a6e27c3e731cec7f55d78bfe9286682a038ce" Jan 05 21:06:22 crc kubenswrapper[4754]: E0105 21:06:22.745536 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc8b9aa021019f00b2a496d1664a6e27c3e731cec7f55d78bfe9286682a038ce\": container with ID starting with cc8b9aa021019f00b2a496d1664a6e27c3e731cec7f55d78bfe9286682a038ce not found: ID does 
not exist" containerID="cc8b9aa021019f00b2a496d1664a6e27c3e731cec7f55d78bfe9286682a038ce" Jan 05 21:06:22 crc kubenswrapper[4754]: I0105 21:06:22.745708 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc8b9aa021019f00b2a496d1664a6e27c3e731cec7f55d78bfe9286682a038ce"} err="failed to get container status \"cc8b9aa021019f00b2a496d1664a6e27c3e731cec7f55d78bfe9286682a038ce\": rpc error: code = NotFound desc = could not find container \"cc8b9aa021019f00b2a496d1664a6e27c3e731cec7f55d78bfe9286682a038ce\": container with ID starting with cc8b9aa021019f00b2a496d1664a6e27c3e731cec7f55d78bfe9286682a038ce not found: ID does not exist" Jan 05 21:06:23 crc kubenswrapper[4754]: I0105 21:06:23.614724 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5ddf70f-1531-47ff-bb96-02b5bf7639e5" path="/var/lib/kubelet/pods/f5ddf70f-1531-47ff-bb96-02b5bf7639e5/volumes" Jan 05 21:06:24 crc kubenswrapper[4754]: I0105 21:06:24.588947 4754 scope.go:117] "RemoveContainer" containerID="b5e375dbb1094d493794455a5e78e34fe40df46b793ea7d56c19d264a004f860" Jan 05 21:06:24 crc kubenswrapper[4754]: E0105 21:06:24.589674 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:06:37 crc kubenswrapper[4754]: I0105 21:06:37.588974 4754 scope.go:117] "RemoveContainer" containerID="b5e375dbb1094d493794455a5e78e34fe40df46b793ea7d56c19d264a004f860" Jan 05 21:06:37 crc kubenswrapper[4754]: E0105 21:06:37.590049 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:06:48 crc kubenswrapper[4754]: I0105 21:06:48.589822 4754 scope.go:117] "RemoveContainer" containerID="b5e375dbb1094d493794455a5e78e34fe40df46b793ea7d56c19d264a004f860" Jan 05 21:06:48 crc kubenswrapper[4754]: E0105 21:06:48.591537 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:07:00 crc kubenswrapper[4754]: I0105 21:07:00.589313 4754 scope.go:117] "RemoveContainer" containerID="b5e375dbb1094d493794455a5e78e34fe40df46b793ea7d56c19d264a004f860" Jan 05 21:07:00 crc kubenswrapper[4754]: E0105 21:07:00.590093 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:07:11 crc kubenswrapper[4754]: I0105 21:07:11.588456 4754 scope.go:117] "RemoveContainer" containerID="b5e375dbb1094d493794455a5e78e34fe40df46b793ea7d56c19d264a004f860" Jan 05 21:07:11 crc kubenswrapper[4754]: E0105 21:07:11.589539 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:07:23 crc kubenswrapper[4754]: I0105 21:07:23.611790 4754 scope.go:117] "RemoveContainer" containerID="b5e375dbb1094d493794455a5e78e34fe40df46b793ea7d56c19d264a004f860" Jan 05 21:07:23 crc kubenswrapper[4754]: E0105 21:07:23.613031 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:07:36 crc kubenswrapper[4754]: I0105 21:07:36.588752 4754 scope.go:117] "RemoveContainer" containerID="b5e375dbb1094d493794455a5e78e34fe40df46b793ea7d56c19d264a004f860" Jan 05 21:07:36 crc kubenswrapper[4754]: E0105 21:07:36.589558 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:07:50 crc kubenswrapper[4754]: I0105 21:07:50.590771 4754 scope.go:117] "RemoveContainer" containerID="b5e375dbb1094d493794455a5e78e34fe40df46b793ea7d56c19d264a004f860" Jan 05 21:07:50 crc kubenswrapper[4754]: E0105 21:07:50.592025 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:08:05 crc kubenswrapper[4754]: I0105 21:08:05.588854 4754 scope.go:117] "RemoveContainer" containerID="b5e375dbb1094d493794455a5e78e34fe40df46b793ea7d56c19d264a004f860" Jan 05 21:08:05 crc kubenswrapper[4754]: E0105 21:08:05.589857 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:08:19 crc kubenswrapper[4754]: I0105 21:08:19.590403 4754 scope.go:117] "RemoveContainer" containerID="b5e375dbb1094d493794455a5e78e34fe40df46b793ea7d56c19d264a004f860" Jan 05 21:08:19 crc kubenswrapper[4754]: E0105 21:08:19.591619 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:08:33 crc kubenswrapper[4754]: I0105 21:08:33.604813 4754 scope.go:117] "RemoveContainer" containerID="b5e375dbb1094d493794455a5e78e34fe40df46b793ea7d56c19d264a004f860" Jan 05 21:08:33 crc kubenswrapper[4754]: E0105 21:08:33.605785 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:08:48 crc kubenswrapper[4754]: I0105 21:08:48.589534 4754 scope.go:117] "RemoveContainer" containerID="b5e375dbb1094d493794455a5e78e34fe40df46b793ea7d56c19d264a004f860" Jan 05 21:08:48 crc kubenswrapper[4754]: E0105 21:08:48.591157 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:09:02 crc kubenswrapper[4754]: I0105 21:09:02.588430 4754 scope.go:117] "RemoveContainer" containerID="b5e375dbb1094d493794455a5e78e34fe40df46b793ea7d56c19d264a004f860" Jan 05 21:09:02 crc kubenswrapper[4754]: E0105 21:09:02.589202 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:09:07 crc kubenswrapper[4754]: I0105 21:09:07.258238 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8cvgk"] Jan 05 21:09:07 crc kubenswrapper[4754]: E0105 21:09:07.260231 4754 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f5ddf70f-1531-47ff-bb96-02b5bf7639e5" containerName="extract-content" Jan 05 21:09:07 crc kubenswrapper[4754]: I0105 21:09:07.260359 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5ddf70f-1531-47ff-bb96-02b5bf7639e5" containerName="extract-content" Jan 05 21:09:07 crc kubenswrapper[4754]: E0105 21:09:07.260476 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55be0c81-4005-485e-b192-0ed393f34cb1" containerName="extract-content" Jan 05 21:09:07 crc kubenswrapper[4754]: I0105 21:09:07.260559 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="55be0c81-4005-485e-b192-0ed393f34cb1" containerName="extract-content" Jan 05 21:09:07 crc kubenswrapper[4754]: E0105 21:09:07.260654 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5ddf70f-1531-47ff-bb96-02b5bf7639e5" containerName="extract-utilities" Jan 05 21:09:07 crc kubenswrapper[4754]: I0105 21:09:07.260728 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5ddf70f-1531-47ff-bb96-02b5bf7639e5" containerName="extract-utilities" Jan 05 21:09:07 crc kubenswrapper[4754]: E0105 21:09:07.260870 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55be0c81-4005-485e-b192-0ed393f34cb1" containerName="extract-utilities" Jan 05 21:09:07 crc kubenswrapper[4754]: I0105 21:09:07.260950 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="55be0c81-4005-485e-b192-0ed393f34cb1" containerName="extract-utilities" Jan 05 21:09:07 crc kubenswrapper[4754]: E0105 21:09:07.261036 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5ddf70f-1531-47ff-bb96-02b5bf7639e5" containerName="registry-server" Jan 05 21:09:07 crc kubenswrapper[4754]: I0105 21:09:07.261108 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5ddf70f-1531-47ff-bb96-02b5bf7639e5" containerName="registry-server" Jan 05 21:09:07 crc kubenswrapper[4754]: E0105 21:09:07.261191 4754 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="55be0c81-4005-485e-b192-0ed393f34cb1" containerName="registry-server" Jan 05 21:09:07 crc kubenswrapper[4754]: I0105 21:09:07.261283 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="55be0c81-4005-485e-b192-0ed393f34cb1" containerName="registry-server" Jan 05 21:09:07 crc kubenswrapper[4754]: I0105 21:09:07.261675 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5ddf70f-1531-47ff-bb96-02b5bf7639e5" containerName="registry-server" Jan 05 21:09:07 crc kubenswrapper[4754]: I0105 21:09:07.261765 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="55be0c81-4005-485e-b192-0ed393f34cb1" containerName="registry-server" Jan 05 21:09:07 crc kubenswrapper[4754]: I0105 21:09:07.265043 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8cvgk" Jan 05 21:09:07 crc kubenswrapper[4754]: I0105 21:09:07.278095 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f-utilities\") pod \"redhat-operators-8cvgk\" (UID: \"ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f\") " pod="openshift-marketplace/redhat-operators-8cvgk" Jan 05 21:09:07 crc kubenswrapper[4754]: I0105 21:09:07.278413 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zhdm\" (UniqueName: \"kubernetes.io/projected/ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f-kube-api-access-2zhdm\") pod \"redhat-operators-8cvgk\" (UID: \"ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f\") " pod="openshift-marketplace/redhat-operators-8cvgk" Jan 05 21:09:07 crc kubenswrapper[4754]: I0105 21:09:07.278561 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f-catalog-content\") pod \"redhat-operators-8cvgk\" 
(UID: \"ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f\") " pod="openshift-marketplace/redhat-operators-8cvgk" Jan 05 21:09:07 crc kubenswrapper[4754]: I0105 21:09:07.291406 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8cvgk"] Jan 05 21:09:07 crc kubenswrapper[4754]: I0105 21:09:07.380899 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f-utilities\") pod \"redhat-operators-8cvgk\" (UID: \"ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f\") " pod="openshift-marketplace/redhat-operators-8cvgk" Jan 05 21:09:07 crc kubenswrapper[4754]: I0105 21:09:07.381124 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zhdm\" (UniqueName: \"kubernetes.io/projected/ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f-kube-api-access-2zhdm\") pod \"redhat-operators-8cvgk\" (UID: \"ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f\") " pod="openshift-marketplace/redhat-operators-8cvgk" Jan 05 21:09:07 crc kubenswrapper[4754]: I0105 21:09:07.381206 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f-catalog-content\") pod \"redhat-operators-8cvgk\" (UID: \"ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f\") " pod="openshift-marketplace/redhat-operators-8cvgk" Jan 05 21:09:07 crc kubenswrapper[4754]: I0105 21:09:07.381521 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f-utilities\") pod \"redhat-operators-8cvgk\" (UID: \"ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f\") " pod="openshift-marketplace/redhat-operators-8cvgk" Jan 05 21:09:07 crc kubenswrapper[4754]: I0105 21:09:07.381779 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f-catalog-content\") pod \"redhat-operators-8cvgk\" (UID: \"ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f\") " pod="openshift-marketplace/redhat-operators-8cvgk" Jan 05 21:09:07 crc kubenswrapper[4754]: I0105 21:09:07.403569 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zhdm\" (UniqueName: \"kubernetes.io/projected/ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f-kube-api-access-2zhdm\") pod \"redhat-operators-8cvgk\" (UID: \"ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f\") " pod="openshift-marketplace/redhat-operators-8cvgk" Jan 05 21:09:07 crc kubenswrapper[4754]: I0105 21:09:07.588989 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8cvgk" Jan 05 21:09:08 crc kubenswrapper[4754]: I0105 21:09:08.160057 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8cvgk"] Jan 05 21:09:08 crc kubenswrapper[4754]: I0105 21:09:08.819716 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8cvgk" event={"ID":"ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f","Type":"ContainerStarted","Data":"2a3c591b12a9fd40259d38df52d93c1d7dae77651e4db2a8ce42770f602c11b7"} Jan 05 21:09:09 crc kubenswrapper[4754]: I0105 21:09:09.835696 4754 generic.go:334] "Generic (PLEG): container finished" podID="ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f" containerID="71e0e1afc993607ddd1aa30a7e46581b67bbfd09dea9077ed4da37197bb401e0" exitCode=0 Jan 05 21:09:09 crc kubenswrapper[4754]: I0105 21:09:09.835789 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8cvgk" event={"ID":"ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f","Type":"ContainerDied","Data":"71e0e1afc993607ddd1aa30a7e46581b67bbfd09dea9077ed4da37197bb401e0"} Jan 05 21:09:11 crc kubenswrapper[4754]: I0105 21:09:11.863608 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-8cvgk" event={"ID":"ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f","Type":"ContainerStarted","Data":"fc9b7ac3517298f386fc01b1e7f87efe4e0fd51c2c59f6af79142eac087c7d36"} Jan 05 21:09:13 crc kubenswrapper[4754]: I0105 21:09:13.887335 4754 generic.go:334] "Generic (PLEG): container finished" podID="ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f" containerID="fc9b7ac3517298f386fc01b1e7f87efe4e0fd51c2c59f6af79142eac087c7d36" exitCode=0 Jan 05 21:09:13 crc kubenswrapper[4754]: I0105 21:09:13.887663 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8cvgk" event={"ID":"ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f","Type":"ContainerDied","Data":"fc9b7ac3517298f386fc01b1e7f87efe4e0fd51c2c59f6af79142eac087c7d36"} Jan 05 21:09:14 crc kubenswrapper[4754]: I0105 21:09:14.899578 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8cvgk" event={"ID":"ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f","Type":"ContainerStarted","Data":"1dab8d9fb82cebac35fa3e1bc869464fb4e2dcba21659fc3110fe7d18edefebd"} Jan 05 21:09:14 crc kubenswrapper[4754]: I0105 21:09:14.928717 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8cvgk" podStartSLOduration=3.481818048 podStartE2EDuration="7.92870036s" podCreationTimestamp="2026-01-05 21:09:07 +0000 UTC" firstStartedPulling="2026-01-05 21:09:09.838376296 +0000 UTC m=+3836.547560190" lastFinishedPulling="2026-01-05 21:09:14.285258618 +0000 UTC m=+3840.994442502" observedRunningTime="2026-01-05 21:09:14.924613981 +0000 UTC m=+3841.633797885" watchObservedRunningTime="2026-01-05 21:09:14.92870036 +0000 UTC m=+3841.637884234" Jan 05 21:09:17 crc kubenswrapper[4754]: I0105 21:09:17.589896 4754 scope.go:117] "RemoveContainer" containerID="b5e375dbb1094d493794455a5e78e34fe40df46b793ea7d56c19d264a004f860" Jan 05 21:09:17 crc kubenswrapper[4754]: E0105 21:09:17.592480 4754 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:09:17 crc kubenswrapper[4754]: I0105 21:09:17.603947 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8cvgk" Jan 05 21:09:17 crc kubenswrapper[4754]: I0105 21:09:17.604022 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8cvgk" Jan 05 21:09:18 crc kubenswrapper[4754]: I0105 21:09:18.653523 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8cvgk" podUID="ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f" containerName="registry-server" probeResult="failure" output=< Jan 05 21:09:18 crc kubenswrapper[4754]: timeout: failed to connect service ":50051" within 1s Jan 05 21:09:18 crc kubenswrapper[4754]: > Jan 05 21:09:27 crc kubenswrapper[4754]: I0105 21:09:27.669396 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8cvgk" Jan 05 21:09:27 crc kubenswrapper[4754]: I0105 21:09:27.738115 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8cvgk" Jan 05 21:09:27 crc kubenswrapper[4754]: I0105 21:09:27.917121 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8cvgk"] Jan 05 21:09:29 crc kubenswrapper[4754]: I0105 21:09:29.037138 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8cvgk" podUID="ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f" 
containerName="registry-server" containerID="cri-o://1dab8d9fb82cebac35fa3e1bc869464fb4e2dcba21659fc3110fe7d18edefebd" gracePeriod=2 Jan 05 21:09:29 crc kubenswrapper[4754]: I0105 21:09:29.665819 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8cvgk" Jan 05 21:09:29 crc kubenswrapper[4754]: I0105 21:09:29.738045 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f-catalog-content\") pod \"ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f\" (UID: \"ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f\") " Jan 05 21:09:29 crc kubenswrapper[4754]: I0105 21:09:29.738163 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zhdm\" (UniqueName: \"kubernetes.io/projected/ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f-kube-api-access-2zhdm\") pod \"ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f\" (UID: \"ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f\") " Jan 05 21:09:29 crc kubenswrapper[4754]: I0105 21:09:29.738245 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f-utilities\") pod \"ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f\" (UID: \"ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f\") " Jan 05 21:09:29 crc kubenswrapper[4754]: I0105 21:09:29.760640 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f-kube-api-access-2zhdm" (OuterVolumeSpecName: "kube-api-access-2zhdm") pod "ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f" (UID: "ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f"). InnerVolumeSpecName "kube-api-access-2zhdm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:09:29 crc kubenswrapper[4754]: I0105 21:09:29.802022 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f-utilities" (OuterVolumeSpecName: "utilities") pod "ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f" (UID: "ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:09:29 crc kubenswrapper[4754]: I0105 21:09:29.841918 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zhdm\" (UniqueName: \"kubernetes.io/projected/ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f-kube-api-access-2zhdm\") on node \"crc\" DevicePath \"\"" Jan 05 21:09:29 crc kubenswrapper[4754]: I0105 21:09:29.843547 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 21:09:29 crc kubenswrapper[4754]: I0105 21:09:29.931822 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f" (UID: "ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:09:29 crc kubenswrapper[4754]: I0105 21:09:29.946942 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 21:09:30 crc kubenswrapper[4754]: I0105 21:09:30.051161 4754 generic.go:334] "Generic (PLEG): container finished" podID="ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f" containerID="1dab8d9fb82cebac35fa3e1bc869464fb4e2dcba21659fc3110fe7d18edefebd" exitCode=0 Jan 05 21:09:30 crc kubenswrapper[4754]: I0105 21:09:30.051371 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8cvgk" event={"ID":"ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f","Type":"ContainerDied","Data":"1dab8d9fb82cebac35fa3e1bc869464fb4e2dcba21659fc3110fe7d18edefebd"} Jan 05 21:09:30 crc kubenswrapper[4754]: I0105 21:09:30.051546 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8cvgk" event={"ID":"ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f","Type":"ContainerDied","Data":"2a3c591b12a9fd40259d38df52d93c1d7dae77651e4db2a8ce42770f602c11b7"} Jan 05 21:09:30 crc kubenswrapper[4754]: I0105 21:09:30.051581 4754 scope.go:117] "RemoveContainer" containerID="1dab8d9fb82cebac35fa3e1bc869464fb4e2dcba21659fc3110fe7d18edefebd" Jan 05 21:09:30 crc kubenswrapper[4754]: I0105 21:09:30.051806 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8cvgk" Jan 05 21:09:30 crc kubenswrapper[4754]: I0105 21:09:30.090201 4754 scope.go:117] "RemoveContainer" containerID="fc9b7ac3517298f386fc01b1e7f87efe4e0fd51c2c59f6af79142eac087c7d36" Jan 05 21:09:30 crc kubenswrapper[4754]: I0105 21:09:30.102912 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8cvgk"] Jan 05 21:09:30 crc kubenswrapper[4754]: I0105 21:09:30.121422 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8cvgk"] Jan 05 21:09:30 crc kubenswrapper[4754]: I0105 21:09:30.129468 4754 scope.go:117] "RemoveContainer" containerID="71e0e1afc993607ddd1aa30a7e46581b67bbfd09dea9077ed4da37197bb401e0" Jan 05 21:09:30 crc kubenswrapper[4754]: I0105 21:09:30.198037 4754 scope.go:117] "RemoveContainer" containerID="1dab8d9fb82cebac35fa3e1bc869464fb4e2dcba21659fc3110fe7d18edefebd" Jan 05 21:09:30 crc kubenswrapper[4754]: E0105 21:09:30.198574 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1dab8d9fb82cebac35fa3e1bc869464fb4e2dcba21659fc3110fe7d18edefebd\": container with ID starting with 1dab8d9fb82cebac35fa3e1bc869464fb4e2dcba21659fc3110fe7d18edefebd not found: ID does not exist" containerID="1dab8d9fb82cebac35fa3e1bc869464fb4e2dcba21659fc3110fe7d18edefebd" Jan 05 21:09:30 crc kubenswrapper[4754]: I0105 21:09:30.198632 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dab8d9fb82cebac35fa3e1bc869464fb4e2dcba21659fc3110fe7d18edefebd"} err="failed to get container status \"1dab8d9fb82cebac35fa3e1bc869464fb4e2dcba21659fc3110fe7d18edefebd\": rpc error: code = NotFound desc = could not find container \"1dab8d9fb82cebac35fa3e1bc869464fb4e2dcba21659fc3110fe7d18edefebd\": container with ID starting with 1dab8d9fb82cebac35fa3e1bc869464fb4e2dcba21659fc3110fe7d18edefebd not found: ID does 
not exist" Jan 05 21:09:30 crc kubenswrapper[4754]: I0105 21:09:30.198666 4754 scope.go:117] "RemoveContainer" containerID="fc9b7ac3517298f386fc01b1e7f87efe4e0fd51c2c59f6af79142eac087c7d36" Jan 05 21:09:30 crc kubenswrapper[4754]: E0105 21:09:30.199039 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc9b7ac3517298f386fc01b1e7f87efe4e0fd51c2c59f6af79142eac087c7d36\": container with ID starting with fc9b7ac3517298f386fc01b1e7f87efe4e0fd51c2c59f6af79142eac087c7d36 not found: ID does not exist" containerID="fc9b7ac3517298f386fc01b1e7f87efe4e0fd51c2c59f6af79142eac087c7d36" Jan 05 21:09:30 crc kubenswrapper[4754]: I0105 21:09:30.199073 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc9b7ac3517298f386fc01b1e7f87efe4e0fd51c2c59f6af79142eac087c7d36"} err="failed to get container status \"fc9b7ac3517298f386fc01b1e7f87efe4e0fd51c2c59f6af79142eac087c7d36\": rpc error: code = NotFound desc = could not find container \"fc9b7ac3517298f386fc01b1e7f87efe4e0fd51c2c59f6af79142eac087c7d36\": container with ID starting with fc9b7ac3517298f386fc01b1e7f87efe4e0fd51c2c59f6af79142eac087c7d36 not found: ID does not exist" Jan 05 21:09:30 crc kubenswrapper[4754]: I0105 21:09:30.199095 4754 scope.go:117] "RemoveContainer" containerID="71e0e1afc993607ddd1aa30a7e46581b67bbfd09dea9077ed4da37197bb401e0" Jan 05 21:09:30 crc kubenswrapper[4754]: E0105 21:09:30.199359 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71e0e1afc993607ddd1aa30a7e46581b67bbfd09dea9077ed4da37197bb401e0\": container with ID starting with 71e0e1afc993607ddd1aa30a7e46581b67bbfd09dea9077ed4da37197bb401e0 not found: ID does not exist" containerID="71e0e1afc993607ddd1aa30a7e46581b67bbfd09dea9077ed4da37197bb401e0" Jan 05 21:09:30 crc kubenswrapper[4754]: I0105 21:09:30.199389 4754 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71e0e1afc993607ddd1aa30a7e46581b67bbfd09dea9077ed4da37197bb401e0"} err="failed to get container status \"71e0e1afc993607ddd1aa30a7e46581b67bbfd09dea9077ed4da37197bb401e0\": rpc error: code = NotFound desc = could not find container \"71e0e1afc993607ddd1aa30a7e46581b67bbfd09dea9077ed4da37197bb401e0\": container with ID starting with 71e0e1afc993607ddd1aa30a7e46581b67bbfd09dea9077ed4da37197bb401e0 not found: ID does not exist" Jan 05 21:09:30 crc kubenswrapper[4754]: I0105 21:09:30.588366 4754 scope.go:117] "RemoveContainer" containerID="b5e375dbb1094d493794455a5e78e34fe40df46b793ea7d56c19d264a004f860" Jan 05 21:09:30 crc kubenswrapper[4754]: E0105 21:09:30.589067 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:09:31 crc kubenswrapper[4754]: I0105 21:09:31.603456 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f" path="/var/lib/kubelet/pods/ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f/volumes" Jan 05 21:09:42 crc kubenswrapper[4754]: I0105 21:09:42.590675 4754 scope.go:117] "RemoveContainer" containerID="b5e375dbb1094d493794455a5e78e34fe40df46b793ea7d56c19d264a004f860" Jan 05 21:09:42 crc kubenswrapper[4754]: E0105 21:09:42.591724 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:09:54 crc kubenswrapper[4754]: I0105 21:09:54.588811 4754 scope.go:117] "RemoveContainer" containerID="b5e375dbb1094d493794455a5e78e34fe40df46b793ea7d56c19d264a004f860" Jan 05 21:09:54 crc kubenswrapper[4754]: E0105 21:09:54.590552 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:10:05 crc kubenswrapper[4754]: I0105 21:10:05.588757 4754 scope.go:117] "RemoveContainer" containerID="b5e375dbb1094d493794455a5e78e34fe40df46b793ea7d56c19d264a004f860" Jan 05 21:10:05 crc kubenswrapper[4754]: E0105 21:10:05.589751 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:10:18 crc kubenswrapper[4754]: I0105 21:10:18.588682 4754 scope.go:117] "RemoveContainer" containerID="b5e375dbb1094d493794455a5e78e34fe40df46b793ea7d56c19d264a004f860" Jan 05 21:10:19 crc kubenswrapper[4754]: I0105 21:10:19.691721 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" event={"ID":"1ce145f2-f010-4086-963c-23e68ff9e280","Type":"ContainerStarted","Data":"d51245200948ef067819206c1ea3120d9d40555b79636a768b20c6e16eda2409"} Jan 05 21:12:18 crc 
kubenswrapper[4754]: I0105 21:12:18.109954 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:12:18 crc kubenswrapper[4754]: I0105 21:12:18.110583 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:12:34 crc kubenswrapper[4754]: I0105 21:12:34.245163 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-k8mvh"] Jan 05 21:12:34 crc kubenswrapper[4754]: E0105 21:12:34.246338 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f" containerName="registry-server" Jan 05 21:12:34 crc kubenswrapper[4754]: I0105 21:12:34.246365 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f" containerName="registry-server" Jan 05 21:12:34 crc kubenswrapper[4754]: E0105 21:12:34.246434 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f" containerName="extract-utilities" Jan 05 21:12:34 crc kubenswrapper[4754]: I0105 21:12:34.246449 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f" containerName="extract-utilities" Jan 05 21:12:34 crc kubenswrapper[4754]: E0105 21:12:34.246473 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f" containerName="extract-content" Jan 05 21:12:34 crc kubenswrapper[4754]: I0105 21:12:34.246484 4754 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f" containerName="extract-content" Jan 05 21:12:34 crc kubenswrapper[4754]: I0105 21:12:34.246995 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec7d40c6-7d82-45a8-83d3-e7920ccd8d6f" containerName="registry-server" Jan 05 21:12:34 crc kubenswrapper[4754]: I0105 21:12:34.249967 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k8mvh" Jan 05 21:12:34 crc kubenswrapper[4754]: I0105 21:12:34.260909 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k8mvh"] Jan 05 21:12:34 crc kubenswrapper[4754]: I0105 21:12:34.403928 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/764e4fbc-507a-4b8a-8717-a641c24c4b25-utilities\") pod \"community-operators-k8mvh\" (UID: \"764e4fbc-507a-4b8a-8717-a641c24c4b25\") " pod="openshift-marketplace/community-operators-k8mvh" Jan 05 21:12:34 crc kubenswrapper[4754]: I0105 21:12:34.404240 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/764e4fbc-507a-4b8a-8717-a641c24c4b25-catalog-content\") pod \"community-operators-k8mvh\" (UID: \"764e4fbc-507a-4b8a-8717-a641c24c4b25\") " pod="openshift-marketplace/community-operators-k8mvh" Jan 05 21:12:34 crc kubenswrapper[4754]: I0105 21:12:34.404611 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twzdj\" (UniqueName: \"kubernetes.io/projected/764e4fbc-507a-4b8a-8717-a641c24c4b25-kube-api-access-twzdj\") pod \"community-operators-k8mvh\" (UID: \"764e4fbc-507a-4b8a-8717-a641c24c4b25\") " pod="openshift-marketplace/community-operators-k8mvh" Jan 05 21:12:34 crc kubenswrapper[4754]: I0105 21:12:34.506501 4754 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twzdj\" (UniqueName: \"kubernetes.io/projected/764e4fbc-507a-4b8a-8717-a641c24c4b25-kube-api-access-twzdj\") pod \"community-operators-k8mvh\" (UID: \"764e4fbc-507a-4b8a-8717-a641c24c4b25\") " pod="openshift-marketplace/community-operators-k8mvh" Jan 05 21:12:34 crc kubenswrapper[4754]: I0105 21:12:34.506616 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/764e4fbc-507a-4b8a-8717-a641c24c4b25-utilities\") pod \"community-operators-k8mvh\" (UID: \"764e4fbc-507a-4b8a-8717-a641c24c4b25\") " pod="openshift-marketplace/community-operators-k8mvh" Jan 05 21:12:34 crc kubenswrapper[4754]: I0105 21:12:34.506741 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/764e4fbc-507a-4b8a-8717-a641c24c4b25-catalog-content\") pod \"community-operators-k8mvh\" (UID: \"764e4fbc-507a-4b8a-8717-a641c24c4b25\") " pod="openshift-marketplace/community-operators-k8mvh" Jan 05 21:12:34 crc kubenswrapper[4754]: I0105 21:12:34.507187 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/764e4fbc-507a-4b8a-8717-a641c24c4b25-catalog-content\") pod \"community-operators-k8mvh\" (UID: \"764e4fbc-507a-4b8a-8717-a641c24c4b25\") " pod="openshift-marketplace/community-operators-k8mvh" Jan 05 21:12:34 crc kubenswrapper[4754]: I0105 21:12:34.507737 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/764e4fbc-507a-4b8a-8717-a641c24c4b25-utilities\") pod \"community-operators-k8mvh\" (UID: \"764e4fbc-507a-4b8a-8717-a641c24c4b25\") " pod="openshift-marketplace/community-operators-k8mvh" Jan 05 21:12:34 crc kubenswrapper[4754]: I0105 21:12:34.527644 4754 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-twzdj\" (UniqueName: \"kubernetes.io/projected/764e4fbc-507a-4b8a-8717-a641c24c4b25-kube-api-access-twzdj\") pod \"community-operators-k8mvh\" (UID: \"764e4fbc-507a-4b8a-8717-a641c24c4b25\") " pod="openshift-marketplace/community-operators-k8mvh" Jan 05 21:12:34 crc kubenswrapper[4754]: I0105 21:12:34.581563 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k8mvh" Jan 05 21:12:35 crc kubenswrapper[4754]: I0105 21:12:35.159379 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k8mvh"] Jan 05 21:12:35 crc kubenswrapper[4754]: W0105 21:12:35.163810 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod764e4fbc_507a_4b8a_8717_a641c24c4b25.slice/crio-ec5ede13e76f786566d3e0467d09e89cefecf6227c6f4c4a4ad2ff40b062d8f4 WatchSource:0}: Error finding container ec5ede13e76f786566d3e0467d09e89cefecf6227c6f4c4a4ad2ff40b062d8f4: Status 404 returned error can't find the container with id ec5ede13e76f786566d3e0467d09e89cefecf6227c6f4c4a4ad2ff40b062d8f4 Jan 05 21:12:35 crc kubenswrapper[4754]: I0105 21:12:35.504359 4754 generic.go:334] "Generic (PLEG): container finished" podID="764e4fbc-507a-4b8a-8717-a641c24c4b25" containerID="3fc9c0a1ed2ac47acc9846a7d45364727c872264d3eaccb8f60e64ded32b65dc" exitCode=0 Jan 05 21:12:35 crc kubenswrapper[4754]: I0105 21:12:35.504415 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k8mvh" event={"ID":"764e4fbc-507a-4b8a-8717-a641c24c4b25","Type":"ContainerDied","Data":"3fc9c0a1ed2ac47acc9846a7d45364727c872264d3eaccb8f60e64ded32b65dc"} Jan 05 21:12:35 crc kubenswrapper[4754]: I0105 21:12:35.504446 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k8mvh" 
event={"ID":"764e4fbc-507a-4b8a-8717-a641c24c4b25","Type":"ContainerStarted","Data":"ec5ede13e76f786566d3e0467d09e89cefecf6227c6f4c4a4ad2ff40b062d8f4"} Jan 05 21:12:35 crc kubenswrapper[4754]: I0105 21:12:35.506809 4754 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 05 21:12:36 crc kubenswrapper[4754]: I0105 21:12:36.520793 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k8mvh" event={"ID":"764e4fbc-507a-4b8a-8717-a641c24c4b25","Type":"ContainerStarted","Data":"49e6f3a0d16571f84b42ff166c58e46d1921fa18734a7ce3727376c5ab261073"} Jan 05 21:12:37 crc kubenswrapper[4754]: I0105 21:12:37.534743 4754 generic.go:334] "Generic (PLEG): container finished" podID="764e4fbc-507a-4b8a-8717-a641c24c4b25" containerID="49e6f3a0d16571f84b42ff166c58e46d1921fa18734a7ce3727376c5ab261073" exitCode=0 Jan 05 21:12:37 crc kubenswrapper[4754]: I0105 21:12:37.534824 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k8mvh" event={"ID":"764e4fbc-507a-4b8a-8717-a641c24c4b25","Type":"ContainerDied","Data":"49e6f3a0d16571f84b42ff166c58e46d1921fa18734a7ce3727376c5ab261073"} Jan 05 21:12:39 crc kubenswrapper[4754]: I0105 21:12:39.558998 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k8mvh" event={"ID":"764e4fbc-507a-4b8a-8717-a641c24c4b25","Type":"ContainerStarted","Data":"0631e98b0aadf0c946a4a2b181c1a206225a0a9a05898bbaff863712e64bfc22"} Jan 05 21:12:39 crc kubenswrapper[4754]: I0105 21:12:39.589617 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-k8mvh" podStartSLOduration=2.879902055 podStartE2EDuration="5.589598337s" podCreationTimestamp="2026-01-05 21:12:34 +0000 UTC" firstStartedPulling="2026-01-05 21:12:35.506539923 +0000 UTC m=+4042.215723807" lastFinishedPulling="2026-01-05 21:12:38.216236205 +0000 UTC 
m=+4044.925420089" observedRunningTime="2026-01-05 21:12:39.577543297 +0000 UTC m=+4046.286727191" watchObservedRunningTime="2026-01-05 21:12:39.589598337 +0000 UTC m=+4046.298782211" Jan 05 21:12:44 crc kubenswrapper[4754]: I0105 21:12:44.582630 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-k8mvh" Jan 05 21:12:44 crc kubenswrapper[4754]: I0105 21:12:44.583493 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-k8mvh" Jan 05 21:12:44 crc kubenswrapper[4754]: I0105 21:12:44.681995 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-k8mvh" Jan 05 21:12:44 crc kubenswrapper[4754]: I0105 21:12:44.756616 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-k8mvh" Jan 05 21:12:44 crc kubenswrapper[4754]: I0105 21:12:44.928173 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k8mvh"] Jan 05 21:12:46 crc kubenswrapper[4754]: I0105 21:12:46.660557 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-k8mvh" podUID="764e4fbc-507a-4b8a-8717-a641c24c4b25" containerName="registry-server" containerID="cri-o://0631e98b0aadf0c946a4a2b181c1a206225a0a9a05898bbaff863712e64bfc22" gracePeriod=2 Jan 05 21:12:47 crc kubenswrapper[4754]: I0105 21:12:47.672596 4754 generic.go:334] "Generic (PLEG): container finished" podID="764e4fbc-507a-4b8a-8717-a641c24c4b25" containerID="0631e98b0aadf0c946a4a2b181c1a206225a0a9a05898bbaff863712e64bfc22" exitCode=0 Jan 05 21:12:47 crc kubenswrapper[4754]: I0105 21:12:47.672779 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k8mvh" 
event={"ID":"764e4fbc-507a-4b8a-8717-a641c24c4b25","Type":"ContainerDied","Data":"0631e98b0aadf0c946a4a2b181c1a206225a0a9a05898bbaff863712e64bfc22"} Jan 05 21:12:47 crc kubenswrapper[4754]: I0105 21:12:47.672953 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k8mvh" event={"ID":"764e4fbc-507a-4b8a-8717-a641c24c4b25","Type":"ContainerDied","Data":"ec5ede13e76f786566d3e0467d09e89cefecf6227c6f4c4a4ad2ff40b062d8f4"} Jan 05 21:12:47 crc kubenswrapper[4754]: I0105 21:12:47.672973 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec5ede13e76f786566d3e0467d09e89cefecf6227c6f4c4a4ad2ff40b062d8f4" Jan 05 21:12:47 crc kubenswrapper[4754]: I0105 21:12:47.755304 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k8mvh" Jan 05 21:12:47 crc kubenswrapper[4754]: I0105 21:12:47.784331 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/764e4fbc-507a-4b8a-8717-a641c24c4b25-utilities\") pod \"764e4fbc-507a-4b8a-8717-a641c24c4b25\" (UID: \"764e4fbc-507a-4b8a-8717-a641c24c4b25\") " Jan 05 21:12:47 crc kubenswrapper[4754]: I0105 21:12:47.784451 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twzdj\" (UniqueName: \"kubernetes.io/projected/764e4fbc-507a-4b8a-8717-a641c24c4b25-kube-api-access-twzdj\") pod \"764e4fbc-507a-4b8a-8717-a641c24c4b25\" (UID: \"764e4fbc-507a-4b8a-8717-a641c24c4b25\") " Jan 05 21:12:47 crc kubenswrapper[4754]: I0105 21:12:47.784593 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/764e4fbc-507a-4b8a-8717-a641c24c4b25-catalog-content\") pod \"764e4fbc-507a-4b8a-8717-a641c24c4b25\" (UID: \"764e4fbc-507a-4b8a-8717-a641c24c4b25\") " Jan 05 21:12:47 crc kubenswrapper[4754]: 
I0105 21:12:47.785138 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/764e4fbc-507a-4b8a-8717-a641c24c4b25-utilities" (OuterVolumeSpecName: "utilities") pod "764e4fbc-507a-4b8a-8717-a641c24c4b25" (UID: "764e4fbc-507a-4b8a-8717-a641c24c4b25"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:12:47 crc kubenswrapper[4754]: I0105 21:12:47.785661 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/764e4fbc-507a-4b8a-8717-a641c24c4b25-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 21:12:47 crc kubenswrapper[4754]: I0105 21:12:47.790160 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/764e4fbc-507a-4b8a-8717-a641c24c4b25-kube-api-access-twzdj" (OuterVolumeSpecName: "kube-api-access-twzdj") pod "764e4fbc-507a-4b8a-8717-a641c24c4b25" (UID: "764e4fbc-507a-4b8a-8717-a641c24c4b25"). InnerVolumeSpecName "kube-api-access-twzdj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:12:47 crc kubenswrapper[4754]: I0105 21:12:47.854866 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/764e4fbc-507a-4b8a-8717-a641c24c4b25-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "764e4fbc-507a-4b8a-8717-a641c24c4b25" (UID: "764e4fbc-507a-4b8a-8717-a641c24c4b25"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:12:47 crc kubenswrapper[4754]: I0105 21:12:47.887828 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/764e4fbc-507a-4b8a-8717-a641c24c4b25-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 21:12:47 crc kubenswrapper[4754]: I0105 21:12:47.887858 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twzdj\" (UniqueName: \"kubernetes.io/projected/764e4fbc-507a-4b8a-8717-a641c24c4b25-kube-api-access-twzdj\") on node \"crc\" DevicePath \"\"" Jan 05 21:12:48 crc kubenswrapper[4754]: I0105 21:12:48.108718 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:12:48 crc kubenswrapper[4754]: I0105 21:12:48.108770 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:12:48 crc kubenswrapper[4754]: I0105 21:12:48.689878 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k8mvh" Jan 05 21:12:48 crc kubenswrapper[4754]: I0105 21:12:48.746061 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k8mvh"] Jan 05 21:12:48 crc kubenswrapper[4754]: I0105 21:12:48.757683 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-k8mvh"] Jan 05 21:12:49 crc kubenswrapper[4754]: I0105 21:12:49.602402 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="764e4fbc-507a-4b8a-8717-a641c24c4b25" path="/var/lib/kubelet/pods/764e4fbc-507a-4b8a-8717-a641c24c4b25/volumes" Jan 05 21:13:18 crc kubenswrapper[4754]: I0105 21:13:18.109245 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:13:18 crc kubenswrapper[4754]: I0105 21:13:18.110068 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:13:18 crc kubenswrapper[4754]: I0105 21:13:18.110158 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" Jan 05 21:13:18 crc kubenswrapper[4754]: I0105 21:13:18.111324 4754 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d51245200948ef067819206c1ea3120d9d40555b79636a768b20c6e16eda2409"} pod="openshift-machine-config-operator/machine-config-daemon-pkzls" containerMessage="Container 
machine-config-daemon failed liveness probe, will be restarted" Jan 05 21:13:18 crc kubenswrapper[4754]: I0105 21:13:18.111458 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" containerID="cri-o://d51245200948ef067819206c1ea3120d9d40555b79636a768b20c6e16eda2409" gracePeriod=600 Jan 05 21:13:19 crc kubenswrapper[4754]: I0105 21:13:19.071727 4754 generic.go:334] "Generic (PLEG): container finished" podID="1ce145f2-f010-4086-963c-23e68ff9e280" containerID="d51245200948ef067819206c1ea3120d9d40555b79636a768b20c6e16eda2409" exitCode=0 Jan 05 21:13:19 crc kubenswrapper[4754]: I0105 21:13:19.071800 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" event={"ID":"1ce145f2-f010-4086-963c-23e68ff9e280","Type":"ContainerDied","Data":"d51245200948ef067819206c1ea3120d9d40555b79636a768b20c6e16eda2409"} Jan 05 21:13:19 crc kubenswrapper[4754]: I0105 21:13:19.072324 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" event={"ID":"1ce145f2-f010-4086-963c-23e68ff9e280","Type":"ContainerStarted","Data":"a61c14866b8fdfc5f6e8df44a114ab350642f6465c5b6778e2e88cc75b78379c"} Jan 05 21:13:19 crc kubenswrapper[4754]: I0105 21:13:19.072351 4754 scope.go:117] "RemoveContainer" containerID="b5e375dbb1094d493794455a5e78e34fe40df46b793ea7d56c19d264a004f860" Jan 05 21:13:59 crc kubenswrapper[4754]: I0105 21:13:59.752511 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="9af784f4-79c9-4422-bc62-a2c49c9bb7cc" containerName="galera" probeResult="failure" output="command timed out" Jan 05 21:13:59 crc kubenswrapper[4754]: I0105 21:13:59.754861 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" 
podUID="9af784f4-79c9-4422-bc62-a2c49c9bb7cc" containerName="galera" probeResult="failure" output="command timed out" Jan 05 21:15:00 crc kubenswrapper[4754]: I0105 21:15:00.222767 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460795-hhdr2"] Jan 05 21:15:00 crc kubenswrapper[4754]: E0105 21:15:00.224587 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="764e4fbc-507a-4b8a-8717-a641c24c4b25" containerName="extract-utilities" Jan 05 21:15:00 crc kubenswrapper[4754]: I0105 21:15:00.224623 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="764e4fbc-507a-4b8a-8717-a641c24c4b25" containerName="extract-utilities" Jan 05 21:15:00 crc kubenswrapper[4754]: E0105 21:15:00.224663 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="764e4fbc-507a-4b8a-8717-a641c24c4b25" containerName="extract-content" Jan 05 21:15:00 crc kubenswrapper[4754]: I0105 21:15:00.224684 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="764e4fbc-507a-4b8a-8717-a641c24c4b25" containerName="extract-content" Jan 05 21:15:00 crc kubenswrapper[4754]: E0105 21:15:00.224784 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="764e4fbc-507a-4b8a-8717-a641c24c4b25" containerName="registry-server" Jan 05 21:15:00 crc kubenswrapper[4754]: I0105 21:15:00.224806 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="764e4fbc-507a-4b8a-8717-a641c24c4b25" containerName="registry-server" Jan 05 21:15:00 crc kubenswrapper[4754]: I0105 21:15:00.225394 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="764e4fbc-507a-4b8a-8717-a641c24c4b25" containerName="registry-server" Jan 05 21:15:00 crc kubenswrapper[4754]: I0105 21:15:00.227380 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460795-hhdr2" Jan 05 21:15:00 crc kubenswrapper[4754]: I0105 21:15:00.230001 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 05 21:15:00 crc kubenswrapper[4754]: I0105 21:15:00.232841 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 05 21:15:00 crc kubenswrapper[4754]: I0105 21:15:00.247373 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460795-hhdr2"] Jan 05 21:15:00 crc kubenswrapper[4754]: I0105 21:15:00.351957 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ec25e4d-020b-4093-a073-e1b79a4a8434-config-volume\") pod \"collect-profiles-29460795-hhdr2\" (UID: \"7ec25e4d-020b-4093-a073-e1b79a4a8434\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460795-hhdr2" Jan 05 21:15:00 crc kubenswrapper[4754]: I0105 21:15:00.352465 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmbmf\" (UniqueName: \"kubernetes.io/projected/7ec25e4d-020b-4093-a073-e1b79a4a8434-kube-api-access-dmbmf\") pod \"collect-profiles-29460795-hhdr2\" (UID: \"7ec25e4d-020b-4093-a073-e1b79a4a8434\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460795-hhdr2" Jan 05 21:15:00 crc kubenswrapper[4754]: I0105 21:15:00.352522 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7ec25e4d-020b-4093-a073-e1b79a4a8434-secret-volume\") pod \"collect-profiles-29460795-hhdr2\" (UID: \"7ec25e4d-020b-4093-a073-e1b79a4a8434\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29460795-hhdr2" Jan 05 21:15:00 crc kubenswrapper[4754]: I0105 21:15:00.454699 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ec25e4d-020b-4093-a073-e1b79a4a8434-config-volume\") pod \"collect-profiles-29460795-hhdr2\" (UID: \"7ec25e4d-020b-4093-a073-e1b79a4a8434\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460795-hhdr2" Jan 05 21:15:00 crc kubenswrapper[4754]: I0105 21:15:00.454794 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmbmf\" (UniqueName: \"kubernetes.io/projected/7ec25e4d-020b-4093-a073-e1b79a4a8434-kube-api-access-dmbmf\") pod \"collect-profiles-29460795-hhdr2\" (UID: \"7ec25e4d-020b-4093-a073-e1b79a4a8434\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460795-hhdr2" Jan 05 21:15:00 crc kubenswrapper[4754]: I0105 21:15:00.454828 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7ec25e4d-020b-4093-a073-e1b79a4a8434-secret-volume\") pod \"collect-profiles-29460795-hhdr2\" (UID: \"7ec25e4d-020b-4093-a073-e1b79a4a8434\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460795-hhdr2" Jan 05 21:15:00 crc kubenswrapper[4754]: I0105 21:15:00.455716 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ec25e4d-020b-4093-a073-e1b79a4a8434-config-volume\") pod \"collect-profiles-29460795-hhdr2\" (UID: \"7ec25e4d-020b-4093-a073-e1b79a4a8434\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460795-hhdr2" Jan 05 21:15:00 crc kubenswrapper[4754]: I0105 21:15:00.461938 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/7ec25e4d-020b-4093-a073-e1b79a4a8434-secret-volume\") pod \"collect-profiles-29460795-hhdr2\" (UID: \"7ec25e4d-020b-4093-a073-e1b79a4a8434\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460795-hhdr2" Jan 05 21:15:00 crc kubenswrapper[4754]: I0105 21:15:00.475454 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmbmf\" (UniqueName: \"kubernetes.io/projected/7ec25e4d-020b-4093-a073-e1b79a4a8434-kube-api-access-dmbmf\") pod \"collect-profiles-29460795-hhdr2\" (UID: \"7ec25e4d-020b-4093-a073-e1b79a4a8434\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460795-hhdr2" Jan 05 21:15:00 crc kubenswrapper[4754]: I0105 21:15:00.565715 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460795-hhdr2" Jan 05 21:15:01 crc kubenswrapper[4754]: I0105 21:15:01.057156 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460795-hhdr2"] Jan 05 21:15:01 crc kubenswrapper[4754]: W0105 21:15:01.062899 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ec25e4d_020b_4093_a073_e1b79a4a8434.slice/crio-8570046645b7527168f98588731d27777534fda19c9de23017e79e3fe5a1d3dd WatchSource:0}: Error finding container 8570046645b7527168f98588731d27777534fda19c9de23017e79e3fe5a1d3dd: Status 404 returned error can't find the container with id 8570046645b7527168f98588731d27777534fda19c9de23017e79e3fe5a1d3dd Jan 05 21:15:01 crc kubenswrapper[4754]: I0105 21:15:01.287958 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460795-hhdr2" event={"ID":"7ec25e4d-020b-4093-a073-e1b79a4a8434","Type":"ContainerStarted","Data":"675a4d19bf04b7c968d78e2751b4261a8fb1750f74f75f43d6b79e483e7f7ab6"} Jan 05 21:15:01 crc 
kubenswrapper[4754]: I0105 21:15:01.288313 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460795-hhdr2" event={"ID":"7ec25e4d-020b-4093-a073-e1b79a4a8434","Type":"ContainerStarted","Data":"8570046645b7527168f98588731d27777534fda19c9de23017e79e3fe5a1d3dd"} Jan 05 21:15:01 crc kubenswrapper[4754]: I0105 21:15:01.304536 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29460795-hhdr2" podStartSLOduration=1.304510501 podStartE2EDuration="1.304510501s" podCreationTimestamp="2026-01-05 21:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:15:01.30181681 +0000 UTC m=+4188.011000714" watchObservedRunningTime="2026-01-05 21:15:01.304510501 +0000 UTC m=+4188.013694395" Jan 05 21:15:02 crc kubenswrapper[4754]: I0105 21:15:02.305104 4754 generic.go:334] "Generic (PLEG): container finished" podID="7ec25e4d-020b-4093-a073-e1b79a4a8434" containerID="675a4d19bf04b7c968d78e2751b4261a8fb1750f74f75f43d6b79e483e7f7ab6" exitCode=0 Jan 05 21:15:02 crc kubenswrapper[4754]: I0105 21:15:02.305213 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460795-hhdr2" event={"ID":"7ec25e4d-020b-4093-a073-e1b79a4a8434","Type":"ContainerDied","Data":"675a4d19bf04b7c968d78e2751b4261a8fb1750f74f75f43d6b79e483e7f7ab6"} Jan 05 21:15:03 crc kubenswrapper[4754]: I0105 21:15:03.797086 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460795-hhdr2" Jan 05 21:15:03 crc kubenswrapper[4754]: I0105 21:15:03.847782 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7ec25e4d-020b-4093-a073-e1b79a4a8434-secret-volume\") pod \"7ec25e4d-020b-4093-a073-e1b79a4a8434\" (UID: \"7ec25e4d-020b-4093-a073-e1b79a4a8434\") " Jan 05 21:15:03 crc kubenswrapper[4754]: I0105 21:15:03.847911 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ec25e4d-020b-4093-a073-e1b79a4a8434-config-volume\") pod \"7ec25e4d-020b-4093-a073-e1b79a4a8434\" (UID: \"7ec25e4d-020b-4093-a073-e1b79a4a8434\") " Jan 05 21:15:03 crc kubenswrapper[4754]: I0105 21:15:03.847937 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmbmf\" (UniqueName: \"kubernetes.io/projected/7ec25e4d-020b-4093-a073-e1b79a4a8434-kube-api-access-dmbmf\") pod \"7ec25e4d-020b-4093-a073-e1b79a4a8434\" (UID: \"7ec25e4d-020b-4093-a073-e1b79a4a8434\") " Jan 05 21:15:03 crc kubenswrapper[4754]: I0105 21:15:03.848521 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ec25e4d-020b-4093-a073-e1b79a4a8434-config-volume" (OuterVolumeSpecName: "config-volume") pod "7ec25e4d-020b-4093-a073-e1b79a4a8434" (UID: "7ec25e4d-020b-4093-a073-e1b79a4a8434"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:15:03 crc kubenswrapper[4754]: I0105 21:15:03.856456 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ec25e4d-020b-4093-a073-e1b79a4a8434-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7ec25e4d-020b-4093-a073-e1b79a4a8434" (UID: "7ec25e4d-020b-4093-a073-e1b79a4a8434"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:15:03 crc kubenswrapper[4754]: I0105 21:15:03.856466 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ec25e4d-020b-4093-a073-e1b79a4a8434-kube-api-access-dmbmf" (OuterVolumeSpecName: "kube-api-access-dmbmf") pod "7ec25e4d-020b-4093-a073-e1b79a4a8434" (UID: "7ec25e4d-020b-4093-a073-e1b79a4a8434"). InnerVolumeSpecName "kube-api-access-dmbmf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:15:03 crc kubenswrapper[4754]: I0105 21:15:03.950826 4754 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ec25e4d-020b-4093-a073-e1b79a4a8434-config-volume\") on node \"crc\" DevicePath \"\"" Jan 05 21:15:03 crc kubenswrapper[4754]: I0105 21:15:03.950857 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmbmf\" (UniqueName: \"kubernetes.io/projected/7ec25e4d-020b-4093-a073-e1b79a4a8434-kube-api-access-dmbmf\") on node \"crc\" DevicePath \"\"" Jan 05 21:15:03 crc kubenswrapper[4754]: I0105 21:15:03.950867 4754 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7ec25e4d-020b-4093-a073-e1b79a4a8434-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 05 21:15:04 crc kubenswrapper[4754]: I0105 21:15:04.334594 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460795-hhdr2" event={"ID":"7ec25e4d-020b-4093-a073-e1b79a4a8434","Type":"ContainerDied","Data":"8570046645b7527168f98588731d27777534fda19c9de23017e79e3fe5a1d3dd"} Jan 05 21:15:04 crc kubenswrapper[4754]: I0105 21:15:04.334855 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8570046645b7527168f98588731d27777534fda19c9de23017e79e3fe5a1d3dd" Jan 05 21:15:04 crc kubenswrapper[4754]: I0105 21:15:04.334905 4754 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460795-hhdr2" Jan 05 21:15:04 crc kubenswrapper[4754]: I0105 21:15:04.402048 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460750-bxcpg"] Jan 05 21:15:04 crc kubenswrapper[4754]: I0105 21:15:04.413956 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460750-bxcpg"] Jan 05 21:15:05 crc kubenswrapper[4754]: I0105 21:15:05.604531 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="623e75ec-7533-4b8c-8869-bb4f48ff07a9" path="/var/lib/kubelet/pods/623e75ec-7533-4b8c-8869-bb4f48ff07a9/volumes" Jan 05 21:15:11 crc kubenswrapper[4754]: I0105 21:15:11.028389 4754 scope.go:117] "RemoveContainer" containerID="2fa8e0152a442e236ef3b1008e34222f49f57ad76f62aa8a4c44de8bba1ae353" Jan 05 21:15:18 crc kubenswrapper[4754]: I0105 21:15:18.110087 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:15:18 crc kubenswrapper[4754]: I0105 21:15:18.110685 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:15:48 crc kubenswrapper[4754]: I0105 21:15:48.109964 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Jan 05 21:15:48 crc kubenswrapper[4754]: I0105 21:15:48.110524 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:16:18 crc kubenswrapper[4754]: I0105 21:16:18.109508 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:16:18 crc kubenswrapper[4754]: I0105 21:16:18.110052 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:16:18 crc kubenswrapper[4754]: I0105 21:16:18.110108 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" Jan 05 21:16:18 crc kubenswrapper[4754]: I0105 21:16:18.111515 4754 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a61c14866b8fdfc5f6e8df44a114ab350642f6465c5b6778e2e88cc75b78379c"} pod="openshift-machine-config-operator/machine-config-daemon-pkzls" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 05 21:16:18 crc kubenswrapper[4754]: I0105 21:16:18.111624 4754 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" containerID="cri-o://a61c14866b8fdfc5f6e8df44a114ab350642f6465c5b6778e2e88cc75b78379c" gracePeriod=600 Jan 05 21:16:18 crc kubenswrapper[4754]: E0105 21:16:18.238973 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:16:19 crc kubenswrapper[4754]: I0105 21:16:19.252857 4754 generic.go:334] "Generic (PLEG): container finished" podID="1ce145f2-f010-4086-963c-23e68ff9e280" containerID="a61c14866b8fdfc5f6e8df44a114ab350642f6465c5b6778e2e88cc75b78379c" exitCode=0 Jan 05 21:16:19 crc kubenswrapper[4754]: I0105 21:16:19.253183 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" event={"ID":"1ce145f2-f010-4086-963c-23e68ff9e280","Type":"ContainerDied","Data":"a61c14866b8fdfc5f6e8df44a114ab350642f6465c5b6778e2e88cc75b78379c"} Jan 05 21:16:19 crc kubenswrapper[4754]: I0105 21:16:19.253221 4754 scope.go:117] "RemoveContainer" containerID="d51245200948ef067819206c1ea3120d9d40555b79636a768b20c6e16eda2409" Jan 05 21:16:19 crc kubenswrapper[4754]: I0105 21:16:19.254170 4754 scope.go:117] "RemoveContainer" containerID="a61c14866b8fdfc5f6e8df44a114ab350642f6465c5b6778e2e88cc75b78379c" Jan 05 21:16:19 crc kubenswrapper[4754]: E0105 21:16:19.254684 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:16:33 crc kubenswrapper[4754]: I0105 21:16:33.609246 4754 scope.go:117] "RemoveContainer" containerID="a61c14866b8fdfc5f6e8df44a114ab350642f6465c5b6778e2e88cc75b78379c" Jan 05 21:16:33 crc kubenswrapper[4754]: E0105 21:16:33.610027 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:16:48 crc kubenswrapper[4754]: I0105 21:16:48.589660 4754 scope.go:117] "RemoveContainer" containerID="a61c14866b8fdfc5f6e8df44a114ab350642f6465c5b6778e2e88cc75b78379c" Jan 05 21:16:48 crc kubenswrapper[4754]: E0105 21:16:48.590542 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:16:59 crc kubenswrapper[4754]: I0105 21:16:59.589275 4754 scope.go:117] "RemoveContainer" containerID="a61c14866b8fdfc5f6e8df44a114ab350642f6465c5b6778e2e88cc75b78379c" Jan 05 21:16:59 crc kubenswrapper[4754]: E0105 21:16:59.590121 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:17:10 crc kubenswrapper[4754]: I0105 21:17:10.588495 4754 scope.go:117] "RemoveContainer" containerID="a61c14866b8fdfc5f6e8df44a114ab350642f6465c5b6778e2e88cc75b78379c" Jan 05 21:17:10 crc kubenswrapper[4754]: E0105 21:17:10.589363 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:17:19 crc kubenswrapper[4754]: I0105 21:17:19.534664 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-c6w4v"] Jan 05 21:17:19 crc kubenswrapper[4754]: E0105 21:17:19.535577 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ec25e4d-020b-4093-a073-e1b79a4a8434" containerName="collect-profiles" Jan 05 21:17:19 crc kubenswrapper[4754]: I0105 21:17:19.535590 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ec25e4d-020b-4093-a073-e1b79a4a8434" containerName="collect-profiles" Jan 05 21:17:19 crc kubenswrapper[4754]: I0105 21:17:19.535829 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ec25e4d-020b-4093-a073-e1b79a4a8434" containerName="collect-profiles" Jan 05 21:17:19 crc kubenswrapper[4754]: I0105 21:17:19.537755 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c6w4v" Jan 05 21:17:19 crc kubenswrapper[4754]: I0105 21:17:19.561977 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c6w4v"] Jan 05 21:17:19 crc kubenswrapper[4754]: I0105 21:17:19.704195 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea8a40af-9f74-42d2-8d02-de6c5d776409-utilities\") pod \"redhat-marketplace-c6w4v\" (UID: \"ea8a40af-9f74-42d2-8d02-de6c5d776409\") " pod="openshift-marketplace/redhat-marketplace-c6w4v" Jan 05 21:17:19 crc kubenswrapper[4754]: I0105 21:17:19.704583 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7gcq\" (UniqueName: \"kubernetes.io/projected/ea8a40af-9f74-42d2-8d02-de6c5d776409-kube-api-access-h7gcq\") pod \"redhat-marketplace-c6w4v\" (UID: \"ea8a40af-9f74-42d2-8d02-de6c5d776409\") " pod="openshift-marketplace/redhat-marketplace-c6w4v" Jan 05 21:17:19 crc kubenswrapper[4754]: I0105 21:17:19.704626 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea8a40af-9f74-42d2-8d02-de6c5d776409-catalog-content\") pod \"redhat-marketplace-c6w4v\" (UID: \"ea8a40af-9f74-42d2-8d02-de6c5d776409\") " pod="openshift-marketplace/redhat-marketplace-c6w4v" Jan 05 21:17:19 crc kubenswrapper[4754]: I0105 21:17:19.806678 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea8a40af-9f74-42d2-8d02-de6c5d776409-utilities\") pod \"redhat-marketplace-c6w4v\" (UID: \"ea8a40af-9f74-42d2-8d02-de6c5d776409\") " pod="openshift-marketplace/redhat-marketplace-c6w4v" Jan 05 21:17:19 crc kubenswrapper[4754]: I0105 21:17:19.806980 4754 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-h7gcq\" (UniqueName: \"kubernetes.io/projected/ea8a40af-9f74-42d2-8d02-de6c5d776409-kube-api-access-h7gcq\") pod \"redhat-marketplace-c6w4v\" (UID: \"ea8a40af-9f74-42d2-8d02-de6c5d776409\") " pod="openshift-marketplace/redhat-marketplace-c6w4v" Jan 05 21:17:19 crc kubenswrapper[4754]: I0105 21:17:19.807118 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea8a40af-9f74-42d2-8d02-de6c5d776409-catalog-content\") pod \"redhat-marketplace-c6w4v\" (UID: \"ea8a40af-9f74-42d2-8d02-de6c5d776409\") " pod="openshift-marketplace/redhat-marketplace-c6w4v" Jan 05 21:17:19 crc kubenswrapper[4754]: I0105 21:17:19.807208 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea8a40af-9f74-42d2-8d02-de6c5d776409-utilities\") pod \"redhat-marketplace-c6w4v\" (UID: \"ea8a40af-9f74-42d2-8d02-de6c5d776409\") " pod="openshift-marketplace/redhat-marketplace-c6w4v" Jan 05 21:17:19 crc kubenswrapper[4754]: I0105 21:17:19.807464 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea8a40af-9f74-42d2-8d02-de6c5d776409-catalog-content\") pod \"redhat-marketplace-c6w4v\" (UID: \"ea8a40af-9f74-42d2-8d02-de6c5d776409\") " pod="openshift-marketplace/redhat-marketplace-c6w4v" Jan 05 21:17:19 crc kubenswrapper[4754]: I0105 21:17:19.854417 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7gcq\" (UniqueName: \"kubernetes.io/projected/ea8a40af-9f74-42d2-8d02-de6c5d776409-kube-api-access-h7gcq\") pod \"redhat-marketplace-c6w4v\" (UID: \"ea8a40af-9f74-42d2-8d02-de6c5d776409\") " pod="openshift-marketplace/redhat-marketplace-c6w4v" Jan 05 21:17:19 crc kubenswrapper[4754]: I0105 21:17:19.860126 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c6w4v" Jan 05 21:17:20 crc kubenswrapper[4754]: I0105 21:17:20.369987 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c6w4v"] Jan 05 21:17:21 crc kubenswrapper[4754]: I0105 21:17:21.048700 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c6w4v" event={"ID":"ea8a40af-9f74-42d2-8d02-de6c5d776409","Type":"ContainerDied","Data":"a2806fdeb18d896bd8bdeeb0c8177c6f35f075723f6e8a29f8888dc1828fd41e"} Jan 05 21:17:21 crc kubenswrapper[4754]: I0105 21:17:21.048511 4754 generic.go:334] "Generic (PLEG): container finished" podID="ea8a40af-9f74-42d2-8d02-de6c5d776409" containerID="a2806fdeb18d896bd8bdeeb0c8177c6f35f075723f6e8a29f8888dc1828fd41e" exitCode=0 Jan 05 21:17:21 crc kubenswrapper[4754]: I0105 21:17:21.049652 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c6w4v" event={"ID":"ea8a40af-9f74-42d2-8d02-de6c5d776409","Type":"ContainerStarted","Data":"f256981f420a271162921326a9fbdabc469c08bc57d027fbf5f33ec69429eed5"} Jan 05 21:17:23 crc kubenswrapper[4754]: I0105 21:17:23.079360 4754 generic.go:334] "Generic (PLEG): container finished" podID="ea8a40af-9f74-42d2-8d02-de6c5d776409" containerID="614a1aa9403a96690f165d65515e190917603df48ee9ed5f7c86a48ac3afc14c" exitCode=0 Jan 05 21:17:23 crc kubenswrapper[4754]: I0105 21:17:23.079447 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c6w4v" event={"ID":"ea8a40af-9f74-42d2-8d02-de6c5d776409","Type":"ContainerDied","Data":"614a1aa9403a96690f165d65515e190917603df48ee9ed5f7c86a48ac3afc14c"} Jan 05 21:17:23 crc kubenswrapper[4754]: I0105 21:17:23.600558 4754 scope.go:117] "RemoveContainer" containerID="a61c14866b8fdfc5f6e8df44a114ab350642f6465c5b6778e2e88cc75b78379c" Jan 05 21:17:23 crc kubenswrapper[4754]: E0105 21:17:23.600940 4754 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:17:24 crc kubenswrapper[4754]: I0105 21:17:24.091870 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c6w4v" event={"ID":"ea8a40af-9f74-42d2-8d02-de6c5d776409","Type":"ContainerStarted","Data":"e40b7a2949edfa716259eae49f947fdc65cf4a9eb3b2cea83b898766e87831d9"} Jan 05 21:17:24 crc kubenswrapper[4754]: I0105 21:17:24.116015 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-c6w4v" podStartSLOduration=2.525167556 podStartE2EDuration="5.115994913s" podCreationTimestamp="2026-01-05 21:17:19 +0000 UTC" firstStartedPulling="2026-01-05 21:17:21.052123882 +0000 UTC m=+4327.761307806" lastFinishedPulling="2026-01-05 21:17:23.642951289 +0000 UTC m=+4330.352135163" observedRunningTime="2026-01-05 21:17:24.110119868 +0000 UTC m=+4330.819303742" watchObservedRunningTime="2026-01-05 21:17:24.115994913 +0000 UTC m=+4330.825178787" Jan 05 21:17:29 crc kubenswrapper[4754]: I0105 21:17:29.860740 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-c6w4v" Jan 05 21:17:29 crc kubenswrapper[4754]: I0105 21:17:29.861307 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-c6w4v" Jan 05 21:17:29 crc kubenswrapper[4754]: I0105 21:17:29.928414 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-c6w4v" Jan 05 21:17:30 crc kubenswrapper[4754]: I0105 21:17:30.230935 4754 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-c6w4v" Jan 05 21:17:30 crc kubenswrapper[4754]: I0105 21:17:30.299582 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c6w4v"] Jan 05 21:17:32 crc kubenswrapper[4754]: I0105 21:17:32.193010 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-c6w4v" podUID="ea8a40af-9f74-42d2-8d02-de6c5d776409" containerName="registry-server" containerID="cri-o://e40b7a2949edfa716259eae49f947fdc65cf4a9eb3b2cea83b898766e87831d9" gracePeriod=2 Jan 05 21:17:32 crc kubenswrapper[4754]: I0105 21:17:32.834986 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c6w4v" Jan 05 21:17:32 crc kubenswrapper[4754]: I0105 21:17:32.948327 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea8a40af-9f74-42d2-8d02-de6c5d776409-catalog-content\") pod \"ea8a40af-9f74-42d2-8d02-de6c5d776409\" (UID: \"ea8a40af-9f74-42d2-8d02-de6c5d776409\") " Jan 05 21:17:32 crc kubenswrapper[4754]: I0105 21:17:32.948647 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7gcq\" (UniqueName: \"kubernetes.io/projected/ea8a40af-9f74-42d2-8d02-de6c5d776409-kube-api-access-h7gcq\") pod \"ea8a40af-9f74-42d2-8d02-de6c5d776409\" (UID: \"ea8a40af-9f74-42d2-8d02-de6c5d776409\") " Jan 05 21:17:32 crc kubenswrapper[4754]: I0105 21:17:32.948759 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea8a40af-9f74-42d2-8d02-de6c5d776409-utilities\") pod \"ea8a40af-9f74-42d2-8d02-de6c5d776409\" (UID: \"ea8a40af-9f74-42d2-8d02-de6c5d776409\") " Jan 05 21:17:32 crc kubenswrapper[4754]: I0105 21:17:32.951191 4754 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea8a40af-9f74-42d2-8d02-de6c5d776409-utilities" (OuterVolumeSpecName: "utilities") pod "ea8a40af-9f74-42d2-8d02-de6c5d776409" (UID: "ea8a40af-9f74-42d2-8d02-de6c5d776409"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:17:32 crc kubenswrapper[4754]: I0105 21:17:32.961022 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea8a40af-9f74-42d2-8d02-de6c5d776409-kube-api-access-h7gcq" (OuterVolumeSpecName: "kube-api-access-h7gcq") pod "ea8a40af-9f74-42d2-8d02-de6c5d776409" (UID: "ea8a40af-9f74-42d2-8d02-de6c5d776409"). InnerVolumeSpecName "kube-api-access-h7gcq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:17:32 crc kubenswrapper[4754]: I0105 21:17:32.973855 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea8a40af-9f74-42d2-8d02-de6c5d776409-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ea8a40af-9f74-42d2-8d02-de6c5d776409" (UID: "ea8a40af-9f74-42d2-8d02-de6c5d776409"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:17:33 crc kubenswrapper[4754]: I0105 21:17:33.052191 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea8a40af-9f74-42d2-8d02-de6c5d776409-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 21:17:33 crc kubenswrapper[4754]: I0105 21:17:33.052234 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7gcq\" (UniqueName: \"kubernetes.io/projected/ea8a40af-9f74-42d2-8d02-de6c5d776409-kube-api-access-h7gcq\") on node \"crc\" DevicePath \"\"" Jan 05 21:17:33 crc kubenswrapper[4754]: I0105 21:17:33.052248 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea8a40af-9f74-42d2-8d02-de6c5d776409-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 21:17:33 crc kubenswrapper[4754]: I0105 21:17:33.209152 4754 generic.go:334] "Generic (PLEG): container finished" podID="ea8a40af-9f74-42d2-8d02-de6c5d776409" containerID="e40b7a2949edfa716259eae49f947fdc65cf4a9eb3b2cea83b898766e87831d9" exitCode=0 Jan 05 21:17:33 crc kubenswrapper[4754]: I0105 21:17:33.209213 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c6w4v" Jan 05 21:17:33 crc kubenswrapper[4754]: I0105 21:17:33.209248 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c6w4v" event={"ID":"ea8a40af-9f74-42d2-8d02-de6c5d776409","Type":"ContainerDied","Data":"e40b7a2949edfa716259eae49f947fdc65cf4a9eb3b2cea83b898766e87831d9"} Jan 05 21:17:33 crc kubenswrapper[4754]: I0105 21:17:33.210860 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c6w4v" event={"ID":"ea8a40af-9f74-42d2-8d02-de6c5d776409","Type":"ContainerDied","Data":"f256981f420a271162921326a9fbdabc469c08bc57d027fbf5f33ec69429eed5"} Jan 05 21:17:33 crc kubenswrapper[4754]: I0105 21:17:33.210885 4754 scope.go:117] "RemoveContainer" containerID="e40b7a2949edfa716259eae49f947fdc65cf4a9eb3b2cea83b898766e87831d9" Jan 05 21:17:33 crc kubenswrapper[4754]: I0105 21:17:33.248598 4754 scope.go:117] "RemoveContainer" containerID="614a1aa9403a96690f165d65515e190917603df48ee9ed5f7c86a48ac3afc14c" Jan 05 21:17:33 crc kubenswrapper[4754]: I0105 21:17:33.288419 4754 scope.go:117] "RemoveContainer" containerID="a2806fdeb18d896bd8bdeeb0c8177c6f35f075723f6e8a29f8888dc1828fd41e" Jan 05 21:17:33 crc kubenswrapper[4754]: I0105 21:17:33.288618 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c6w4v"] Jan 05 21:17:33 crc kubenswrapper[4754]: I0105 21:17:33.296609 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-c6w4v"] Jan 05 21:17:33 crc kubenswrapper[4754]: I0105 21:17:33.359243 4754 scope.go:117] "RemoveContainer" containerID="e40b7a2949edfa716259eae49f947fdc65cf4a9eb3b2cea83b898766e87831d9" Jan 05 21:17:33 crc kubenswrapper[4754]: E0105 21:17:33.359698 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e40b7a2949edfa716259eae49f947fdc65cf4a9eb3b2cea83b898766e87831d9\": container with ID starting with e40b7a2949edfa716259eae49f947fdc65cf4a9eb3b2cea83b898766e87831d9 not found: ID does not exist" containerID="e40b7a2949edfa716259eae49f947fdc65cf4a9eb3b2cea83b898766e87831d9" Jan 05 21:17:33 crc kubenswrapper[4754]: I0105 21:17:33.359763 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e40b7a2949edfa716259eae49f947fdc65cf4a9eb3b2cea83b898766e87831d9"} err="failed to get container status \"e40b7a2949edfa716259eae49f947fdc65cf4a9eb3b2cea83b898766e87831d9\": rpc error: code = NotFound desc = could not find container \"e40b7a2949edfa716259eae49f947fdc65cf4a9eb3b2cea83b898766e87831d9\": container with ID starting with e40b7a2949edfa716259eae49f947fdc65cf4a9eb3b2cea83b898766e87831d9 not found: ID does not exist" Jan 05 21:17:33 crc kubenswrapper[4754]: I0105 21:17:33.359802 4754 scope.go:117] "RemoveContainer" containerID="614a1aa9403a96690f165d65515e190917603df48ee9ed5f7c86a48ac3afc14c" Jan 05 21:17:33 crc kubenswrapper[4754]: E0105 21:17:33.360176 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"614a1aa9403a96690f165d65515e190917603df48ee9ed5f7c86a48ac3afc14c\": container with ID starting with 614a1aa9403a96690f165d65515e190917603df48ee9ed5f7c86a48ac3afc14c not found: ID does not exist" containerID="614a1aa9403a96690f165d65515e190917603df48ee9ed5f7c86a48ac3afc14c" Jan 05 21:17:33 crc kubenswrapper[4754]: I0105 21:17:33.360219 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"614a1aa9403a96690f165d65515e190917603df48ee9ed5f7c86a48ac3afc14c"} err="failed to get container status \"614a1aa9403a96690f165d65515e190917603df48ee9ed5f7c86a48ac3afc14c\": rpc error: code = NotFound desc = could not find container \"614a1aa9403a96690f165d65515e190917603df48ee9ed5f7c86a48ac3afc14c\": container with ID 
starting with 614a1aa9403a96690f165d65515e190917603df48ee9ed5f7c86a48ac3afc14c not found: ID does not exist" Jan 05 21:17:33 crc kubenswrapper[4754]: I0105 21:17:33.360275 4754 scope.go:117] "RemoveContainer" containerID="a2806fdeb18d896bd8bdeeb0c8177c6f35f075723f6e8a29f8888dc1828fd41e" Jan 05 21:17:33 crc kubenswrapper[4754]: E0105 21:17:33.360584 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2806fdeb18d896bd8bdeeb0c8177c6f35f075723f6e8a29f8888dc1828fd41e\": container with ID starting with a2806fdeb18d896bd8bdeeb0c8177c6f35f075723f6e8a29f8888dc1828fd41e not found: ID does not exist" containerID="a2806fdeb18d896bd8bdeeb0c8177c6f35f075723f6e8a29f8888dc1828fd41e" Jan 05 21:17:33 crc kubenswrapper[4754]: I0105 21:17:33.360618 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2806fdeb18d896bd8bdeeb0c8177c6f35f075723f6e8a29f8888dc1828fd41e"} err="failed to get container status \"a2806fdeb18d896bd8bdeeb0c8177c6f35f075723f6e8a29f8888dc1828fd41e\": rpc error: code = NotFound desc = could not find container \"a2806fdeb18d896bd8bdeeb0c8177c6f35f075723f6e8a29f8888dc1828fd41e\": container with ID starting with a2806fdeb18d896bd8bdeeb0c8177c6f35f075723f6e8a29f8888dc1828fd41e not found: ID does not exist" Jan 05 21:17:33 crc kubenswrapper[4754]: I0105 21:17:33.607188 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea8a40af-9f74-42d2-8d02-de6c5d776409" path="/var/lib/kubelet/pods/ea8a40af-9f74-42d2-8d02-de6c5d776409/volumes" Jan 05 21:17:37 crc kubenswrapper[4754]: I0105 21:17:37.589191 4754 scope.go:117] "RemoveContainer" containerID="a61c14866b8fdfc5f6e8df44a114ab350642f6465c5b6778e2e88cc75b78379c" Jan 05 21:17:37 crc kubenswrapper[4754]: E0105 21:17:37.590669 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:17:48 crc kubenswrapper[4754]: I0105 21:17:48.588979 4754 scope.go:117] "RemoveContainer" containerID="a61c14866b8fdfc5f6e8df44a114ab350642f6465c5b6778e2e88cc75b78379c" Jan 05 21:17:48 crc kubenswrapper[4754]: E0105 21:17:48.589858 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:18:01 crc kubenswrapper[4754]: I0105 21:18:01.589097 4754 scope.go:117] "RemoveContainer" containerID="a61c14866b8fdfc5f6e8df44a114ab350642f6465c5b6778e2e88cc75b78379c" Jan 05 21:18:01 crc kubenswrapper[4754]: E0105 21:18:01.590186 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:18:16 crc kubenswrapper[4754]: I0105 21:18:16.588463 4754 scope.go:117] "RemoveContainer" containerID="a61c14866b8fdfc5f6e8df44a114ab350642f6465c5b6778e2e88cc75b78379c" Jan 05 21:18:16 crc kubenswrapper[4754]: E0105 21:18:16.589201 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:18:30 crc kubenswrapper[4754]: I0105 21:18:30.588552 4754 scope.go:117] "RemoveContainer" containerID="a61c14866b8fdfc5f6e8df44a114ab350642f6465c5b6778e2e88cc75b78379c" Jan 05 21:18:30 crc kubenswrapper[4754]: E0105 21:18:30.589256 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:18:45 crc kubenswrapper[4754]: I0105 21:18:45.588745 4754 scope.go:117] "RemoveContainer" containerID="a61c14866b8fdfc5f6e8df44a114ab350642f6465c5b6778e2e88cc75b78379c" Jan 05 21:18:45 crc kubenswrapper[4754]: E0105 21:18:45.591065 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:18:58 crc kubenswrapper[4754]: I0105 21:18:58.589789 4754 scope.go:117] "RemoveContainer" containerID="a61c14866b8fdfc5f6e8df44a114ab350642f6465c5b6778e2e88cc75b78379c" Jan 05 21:18:58 crc kubenswrapper[4754]: E0105 21:18:58.590505 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:19:08 crc kubenswrapper[4754]: I0105 21:19:08.826562 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lzmnp"] Jan 05 21:19:08 crc kubenswrapper[4754]: E0105 21:19:08.828675 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea8a40af-9f74-42d2-8d02-de6c5d776409" containerName="extract-content" Jan 05 21:19:08 crc kubenswrapper[4754]: I0105 21:19:08.828765 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea8a40af-9f74-42d2-8d02-de6c5d776409" containerName="extract-content" Jan 05 21:19:08 crc kubenswrapper[4754]: E0105 21:19:08.828849 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea8a40af-9f74-42d2-8d02-de6c5d776409" containerName="extract-utilities" Jan 05 21:19:08 crc kubenswrapper[4754]: I0105 21:19:08.828906 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea8a40af-9f74-42d2-8d02-de6c5d776409" containerName="extract-utilities" Jan 05 21:19:08 crc kubenswrapper[4754]: E0105 21:19:08.828988 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea8a40af-9f74-42d2-8d02-de6c5d776409" containerName="registry-server" Jan 05 21:19:08 crc kubenswrapper[4754]: I0105 21:19:08.829043 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea8a40af-9f74-42d2-8d02-de6c5d776409" containerName="registry-server" Jan 05 21:19:08 crc kubenswrapper[4754]: I0105 21:19:08.829405 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea8a40af-9f74-42d2-8d02-de6c5d776409" containerName="registry-server" Jan 05 21:19:08 crc kubenswrapper[4754]: I0105 21:19:08.831430 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lzmnp" Jan 05 21:19:08 crc kubenswrapper[4754]: I0105 21:19:08.839077 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lzmnp"] Jan 05 21:19:08 crc kubenswrapper[4754]: I0105 21:19:08.894233 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r47x9\" (UniqueName: \"kubernetes.io/projected/6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b-kube-api-access-r47x9\") pod \"redhat-operators-lzmnp\" (UID: \"6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b\") " pod="openshift-marketplace/redhat-operators-lzmnp" Jan 05 21:19:08 crc kubenswrapper[4754]: I0105 21:19:08.894316 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b-catalog-content\") pod \"redhat-operators-lzmnp\" (UID: \"6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b\") " pod="openshift-marketplace/redhat-operators-lzmnp" Jan 05 21:19:08 crc kubenswrapper[4754]: I0105 21:19:08.894639 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b-utilities\") pod \"redhat-operators-lzmnp\" (UID: \"6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b\") " pod="openshift-marketplace/redhat-operators-lzmnp" Jan 05 21:19:08 crc kubenswrapper[4754]: I0105 21:19:08.997129 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r47x9\" (UniqueName: \"kubernetes.io/projected/6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b-kube-api-access-r47x9\") pod \"redhat-operators-lzmnp\" (UID: \"6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b\") " pod="openshift-marketplace/redhat-operators-lzmnp" Jan 05 21:19:08 crc kubenswrapper[4754]: I0105 21:19:08.997499 4754 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b-catalog-content\") pod \"redhat-operators-lzmnp\" (UID: \"6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b\") " pod="openshift-marketplace/redhat-operators-lzmnp" Jan 05 21:19:08 crc kubenswrapper[4754]: I0105 21:19:08.997696 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b-utilities\") pod \"redhat-operators-lzmnp\" (UID: \"6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b\") " pod="openshift-marketplace/redhat-operators-lzmnp" Jan 05 21:19:08 crc kubenswrapper[4754]: I0105 21:19:08.998010 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b-catalog-content\") pod \"redhat-operators-lzmnp\" (UID: \"6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b\") " pod="openshift-marketplace/redhat-operators-lzmnp" Jan 05 21:19:08 crc kubenswrapper[4754]: I0105 21:19:08.998037 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b-utilities\") pod \"redhat-operators-lzmnp\" (UID: \"6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b\") " pod="openshift-marketplace/redhat-operators-lzmnp" Jan 05 21:19:09 crc kubenswrapper[4754]: I0105 21:19:09.456198 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r47x9\" (UniqueName: \"kubernetes.io/projected/6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b-kube-api-access-r47x9\") pod \"redhat-operators-lzmnp\" (UID: \"6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b\") " pod="openshift-marketplace/redhat-operators-lzmnp" Jan 05 21:19:09 crc kubenswrapper[4754]: I0105 21:19:09.459719 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lzmnp" Jan 05 21:19:09 crc kubenswrapper[4754]: I0105 21:19:09.993197 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lzmnp"] Jan 05 21:19:10 crc kubenswrapper[4754]: I0105 21:19:10.720415 4754 generic.go:334] "Generic (PLEG): container finished" podID="6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b" containerID="e67b6f05d91c39ad363870c4b7f302d4174a13e016d47673d8391ec89fd73d68" exitCode=0 Jan 05 21:19:10 crc kubenswrapper[4754]: I0105 21:19:10.720514 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lzmnp" event={"ID":"6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b","Type":"ContainerDied","Data":"e67b6f05d91c39ad363870c4b7f302d4174a13e016d47673d8391ec89fd73d68"} Jan 05 21:19:10 crc kubenswrapper[4754]: I0105 21:19:10.720767 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lzmnp" event={"ID":"6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b","Type":"ContainerStarted","Data":"dd6e268f17d5f9df62bcad62d6904db1bf113812e41f668ee34a37e093881d94"} Jan 05 21:19:10 crc kubenswrapper[4754]: I0105 21:19:10.722551 4754 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 05 21:19:11 crc kubenswrapper[4754]: I0105 21:19:11.256135 4754 scope.go:117] "RemoveContainer" containerID="3fc9c0a1ed2ac47acc9846a7d45364727c872264d3eaccb8f60e64ded32b65dc" Jan 05 21:19:11 crc kubenswrapper[4754]: I0105 21:19:11.778516 4754 scope.go:117] "RemoveContainer" containerID="0631e98b0aadf0c946a4a2b181c1a206225a0a9a05898bbaff863712e64bfc22" Jan 05 21:19:11 crc kubenswrapper[4754]: I0105 21:19:11.843488 4754 scope.go:117] "RemoveContainer" containerID="49e6f3a0d16571f84b42ff166c58e46d1921fa18734a7ce3727376c5ab261073" Jan 05 21:19:12 crc kubenswrapper[4754]: I0105 21:19:12.744665 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-lzmnp" event={"ID":"6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b","Type":"ContainerStarted","Data":"2c459c97dc3c1d0450fb5f45f43787e8d2f37a09c8dc2df0dc506bc36322fc13"} Jan 05 21:19:13 crc kubenswrapper[4754]: I0105 21:19:13.610932 4754 scope.go:117] "RemoveContainer" containerID="a61c14866b8fdfc5f6e8df44a114ab350642f6465c5b6778e2e88cc75b78379c" Jan 05 21:19:13 crc kubenswrapper[4754]: E0105 21:19:13.611360 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:19:15 crc kubenswrapper[4754]: I0105 21:19:15.797720 4754 generic.go:334] "Generic (PLEG): container finished" podID="6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b" containerID="2c459c97dc3c1d0450fb5f45f43787e8d2f37a09c8dc2df0dc506bc36322fc13" exitCode=0 Jan 05 21:19:15 crc kubenswrapper[4754]: I0105 21:19:15.797795 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lzmnp" event={"ID":"6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b","Type":"ContainerDied","Data":"2c459c97dc3c1d0450fb5f45f43787e8d2f37a09c8dc2df0dc506bc36322fc13"} Jan 05 21:19:16 crc kubenswrapper[4754]: I0105 21:19:16.826332 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lzmnp" event={"ID":"6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b","Type":"ContainerStarted","Data":"bf5cf1645bc7a2c4b248f611339fa0290ef1b7d67bf0f96879935d87c2d452c1"} Jan 05 21:19:16 crc kubenswrapper[4754]: I0105 21:19:16.849531 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lzmnp" podStartSLOduration=3.364317459 
podStartE2EDuration="8.84951472s" podCreationTimestamp="2026-01-05 21:19:08 +0000 UTC" firstStartedPulling="2026-01-05 21:19:10.722278367 +0000 UTC m=+4437.431462241" lastFinishedPulling="2026-01-05 21:19:16.207475628 +0000 UTC m=+4442.916659502" observedRunningTime="2026-01-05 21:19:16.843165983 +0000 UTC m=+4443.552349857" watchObservedRunningTime="2026-01-05 21:19:16.84951472 +0000 UTC m=+4443.558698594" Jan 05 21:19:19 crc kubenswrapper[4754]: I0105 21:19:19.460738 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lzmnp" Jan 05 21:19:19 crc kubenswrapper[4754]: I0105 21:19:19.461218 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lzmnp" Jan 05 21:19:20 crc kubenswrapper[4754]: I0105 21:19:20.520505 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lzmnp" podUID="6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b" containerName="registry-server" probeResult="failure" output=< Jan 05 21:19:20 crc kubenswrapper[4754]: timeout: failed to connect service ":50051" within 1s Jan 05 21:19:20 crc kubenswrapper[4754]: > Jan 05 21:19:24 crc kubenswrapper[4754]: I0105 21:19:24.589536 4754 scope.go:117] "RemoveContainer" containerID="a61c14866b8fdfc5f6e8df44a114ab350642f6465c5b6778e2e88cc75b78379c" Jan 05 21:19:24 crc kubenswrapper[4754]: E0105 21:19:24.591477 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:19:29 crc kubenswrapper[4754]: I0105 21:19:29.530214 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/redhat-operators-lzmnp" Jan 05 21:19:29 crc kubenswrapper[4754]: I0105 21:19:29.602525 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lzmnp" Jan 05 21:19:29 crc kubenswrapper[4754]: I0105 21:19:29.788097 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lzmnp"] Jan 05 21:19:31 crc kubenswrapper[4754]: I0105 21:19:30.999446 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lzmnp" podUID="6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b" containerName="registry-server" containerID="cri-o://bf5cf1645bc7a2c4b248f611339fa0290ef1b7d67bf0f96879935d87c2d452c1" gracePeriod=2 Jan 05 21:19:31 crc kubenswrapper[4754]: I0105 21:19:31.679961 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lzmnp" Jan 05 21:19:31 crc kubenswrapper[4754]: I0105 21:19:31.718378 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r47x9\" (UniqueName: \"kubernetes.io/projected/6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b-kube-api-access-r47x9\") pod \"6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b\" (UID: \"6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b\") " Jan 05 21:19:31 crc kubenswrapper[4754]: I0105 21:19:31.718636 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b-catalog-content\") pod \"6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b\" (UID: \"6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b\") " Jan 05 21:19:31 crc kubenswrapper[4754]: I0105 21:19:31.718822 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b-utilities\") pod 
\"6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b\" (UID: \"6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b\") " Jan 05 21:19:31 crc kubenswrapper[4754]: I0105 21:19:31.721463 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b-utilities" (OuterVolumeSpecName: "utilities") pod "6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b" (UID: "6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:19:31 crc kubenswrapper[4754]: I0105 21:19:31.730318 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b-kube-api-access-r47x9" (OuterVolumeSpecName: "kube-api-access-r47x9") pod "6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b" (UID: "6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b"). InnerVolumeSpecName "kube-api-access-r47x9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:19:31 crc kubenswrapper[4754]: I0105 21:19:31.822728 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 21:19:31 crc kubenswrapper[4754]: I0105 21:19:31.822796 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r47x9\" (UniqueName: \"kubernetes.io/projected/6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b-kube-api-access-r47x9\") on node \"crc\" DevicePath \"\"" Jan 05 21:19:31 crc kubenswrapper[4754]: I0105 21:19:31.853285 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b" (UID: "6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:19:31 crc kubenswrapper[4754]: I0105 21:19:31.926348 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 21:19:32 crc kubenswrapper[4754]: I0105 21:19:32.013877 4754 generic.go:334] "Generic (PLEG): container finished" podID="6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b" containerID="bf5cf1645bc7a2c4b248f611339fa0290ef1b7d67bf0f96879935d87c2d452c1" exitCode=0 Jan 05 21:19:32 crc kubenswrapper[4754]: I0105 21:19:32.013918 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lzmnp" event={"ID":"6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b","Type":"ContainerDied","Data":"bf5cf1645bc7a2c4b248f611339fa0290ef1b7d67bf0f96879935d87c2d452c1"} Jan 05 21:19:32 crc kubenswrapper[4754]: I0105 21:19:32.013955 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lzmnp" event={"ID":"6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b","Type":"ContainerDied","Data":"dd6e268f17d5f9df62bcad62d6904db1bf113812e41f668ee34a37e093881d94"} Jan 05 21:19:32 crc kubenswrapper[4754]: I0105 21:19:32.013973 4754 scope.go:117] "RemoveContainer" containerID="bf5cf1645bc7a2c4b248f611339fa0290ef1b7d67bf0f96879935d87c2d452c1" Jan 05 21:19:32 crc kubenswrapper[4754]: I0105 21:19:32.013983 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lzmnp" Jan 05 21:19:32 crc kubenswrapper[4754]: I0105 21:19:32.059124 4754 scope.go:117] "RemoveContainer" containerID="2c459c97dc3c1d0450fb5f45f43787e8d2f37a09c8dc2df0dc506bc36322fc13" Jan 05 21:19:32 crc kubenswrapper[4754]: I0105 21:19:32.069495 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lzmnp"] Jan 05 21:19:32 crc kubenswrapper[4754]: I0105 21:19:32.085859 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lzmnp"] Jan 05 21:19:32 crc kubenswrapper[4754]: I0105 21:19:32.789191 4754 scope.go:117] "RemoveContainer" containerID="e67b6f05d91c39ad363870c4b7f302d4174a13e016d47673d8391ec89fd73d68" Jan 05 21:19:32 crc kubenswrapper[4754]: I0105 21:19:32.854722 4754 scope.go:117] "RemoveContainer" containerID="bf5cf1645bc7a2c4b248f611339fa0290ef1b7d67bf0f96879935d87c2d452c1" Jan 05 21:19:32 crc kubenswrapper[4754]: E0105 21:19:32.855263 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf5cf1645bc7a2c4b248f611339fa0290ef1b7d67bf0f96879935d87c2d452c1\": container with ID starting with bf5cf1645bc7a2c4b248f611339fa0290ef1b7d67bf0f96879935d87c2d452c1 not found: ID does not exist" containerID="bf5cf1645bc7a2c4b248f611339fa0290ef1b7d67bf0f96879935d87c2d452c1" Jan 05 21:19:32 crc kubenswrapper[4754]: I0105 21:19:32.855326 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf5cf1645bc7a2c4b248f611339fa0290ef1b7d67bf0f96879935d87c2d452c1"} err="failed to get container status \"bf5cf1645bc7a2c4b248f611339fa0290ef1b7d67bf0f96879935d87c2d452c1\": rpc error: code = NotFound desc = could not find container \"bf5cf1645bc7a2c4b248f611339fa0290ef1b7d67bf0f96879935d87c2d452c1\": container with ID starting with bf5cf1645bc7a2c4b248f611339fa0290ef1b7d67bf0f96879935d87c2d452c1 not found: ID does 
not exist" Jan 05 21:19:32 crc kubenswrapper[4754]: I0105 21:19:32.855404 4754 scope.go:117] "RemoveContainer" containerID="2c459c97dc3c1d0450fb5f45f43787e8d2f37a09c8dc2df0dc506bc36322fc13" Jan 05 21:19:32 crc kubenswrapper[4754]: E0105 21:19:32.855942 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c459c97dc3c1d0450fb5f45f43787e8d2f37a09c8dc2df0dc506bc36322fc13\": container with ID starting with 2c459c97dc3c1d0450fb5f45f43787e8d2f37a09c8dc2df0dc506bc36322fc13 not found: ID does not exist" containerID="2c459c97dc3c1d0450fb5f45f43787e8d2f37a09c8dc2df0dc506bc36322fc13" Jan 05 21:19:32 crc kubenswrapper[4754]: I0105 21:19:32.855970 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c459c97dc3c1d0450fb5f45f43787e8d2f37a09c8dc2df0dc506bc36322fc13"} err="failed to get container status \"2c459c97dc3c1d0450fb5f45f43787e8d2f37a09c8dc2df0dc506bc36322fc13\": rpc error: code = NotFound desc = could not find container \"2c459c97dc3c1d0450fb5f45f43787e8d2f37a09c8dc2df0dc506bc36322fc13\": container with ID starting with 2c459c97dc3c1d0450fb5f45f43787e8d2f37a09c8dc2df0dc506bc36322fc13 not found: ID does not exist" Jan 05 21:19:32 crc kubenswrapper[4754]: I0105 21:19:32.855990 4754 scope.go:117] "RemoveContainer" containerID="e67b6f05d91c39ad363870c4b7f302d4174a13e016d47673d8391ec89fd73d68" Jan 05 21:19:32 crc kubenswrapper[4754]: E0105 21:19:32.856710 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e67b6f05d91c39ad363870c4b7f302d4174a13e016d47673d8391ec89fd73d68\": container with ID starting with e67b6f05d91c39ad363870c4b7f302d4174a13e016d47673d8391ec89fd73d68 not found: ID does not exist" containerID="e67b6f05d91c39ad363870c4b7f302d4174a13e016d47673d8391ec89fd73d68" Jan 05 21:19:32 crc kubenswrapper[4754]: I0105 21:19:32.856752 4754 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e67b6f05d91c39ad363870c4b7f302d4174a13e016d47673d8391ec89fd73d68"} err="failed to get container status \"e67b6f05d91c39ad363870c4b7f302d4174a13e016d47673d8391ec89fd73d68\": rpc error: code = NotFound desc = could not find container \"e67b6f05d91c39ad363870c4b7f302d4174a13e016d47673d8391ec89fd73d68\": container with ID starting with e67b6f05d91c39ad363870c4b7f302d4174a13e016d47673d8391ec89fd73d68 not found: ID does not exist" Jan 05 21:19:33 crc kubenswrapper[4754]: I0105 21:19:33.602854 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b" path="/var/lib/kubelet/pods/6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b/volumes" Jan 05 21:19:39 crc kubenswrapper[4754]: I0105 21:19:39.589247 4754 scope.go:117] "RemoveContainer" containerID="a61c14866b8fdfc5f6e8df44a114ab350642f6465c5b6778e2e88cc75b78379c" Jan 05 21:19:39 crc kubenswrapper[4754]: E0105 21:19:39.589996 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:19:54 crc kubenswrapper[4754]: I0105 21:19:54.589200 4754 scope.go:117] "RemoveContainer" containerID="a61c14866b8fdfc5f6e8df44a114ab350642f6465c5b6778e2e88cc75b78379c" Jan 05 21:19:54 crc kubenswrapper[4754]: E0105 21:19:54.590176 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:20:08 crc kubenswrapper[4754]: I0105 21:20:08.589544 4754 scope.go:117] "RemoveContainer" containerID="a61c14866b8fdfc5f6e8df44a114ab350642f6465c5b6778e2e88cc75b78379c" Jan 05 21:20:08 crc kubenswrapper[4754]: E0105 21:20:08.590403 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:20:18 crc kubenswrapper[4754]: I0105 21:20:18.623590 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6th82"] Jan 05 21:20:18 crc kubenswrapper[4754]: E0105 21:20:18.624800 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b" containerName="extract-utilities" Jan 05 21:20:18 crc kubenswrapper[4754]: I0105 21:20:18.624818 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b" containerName="extract-utilities" Jan 05 21:20:18 crc kubenswrapper[4754]: E0105 21:20:18.624841 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b" containerName="registry-server" Jan 05 21:20:18 crc kubenswrapper[4754]: I0105 21:20:18.624851 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b" containerName="registry-server" Jan 05 21:20:18 crc kubenswrapper[4754]: E0105 21:20:18.624895 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b" containerName="extract-content" Jan 05 21:20:18 crc kubenswrapper[4754]: 
I0105 21:20:18.624905 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b" containerName="extract-content" Jan 05 21:20:18 crc kubenswrapper[4754]: I0105 21:20:18.625186 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b0ab09f-afc8-4b1e-a2a2-96c4f2322f6b" containerName="registry-server" Jan 05 21:20:18 crc kubenswrapper[4754]: I0105 21:20:18.629018 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6th82" Jan 05 21:20:18 crc kubenswrapper[4754]: I0105 21:20:18.665944 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6th82"] Jan 05 21:20:18 crc kubenswrapper[4754]: I0105 21:20:18.738849 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80047deb-be50-448e-97fb-01e892539cce-utilities\") pod \"certified-operators-6th82\" (UID: \"80047deb-be50-448e-97fb-01e892539cce\") " pod="openshift-marketplace/certified-operators-6th82" Jan 05 21:20:18 crc kubenswrapper[4754]: I0105 21:20:18.738961 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80047deb-be50-448e-97fb-01e892539cce-catalog-content\") pod \"certified-operators-6th82\" (UID: \"80047deb-be50-448e-97fb-01e892539cce\") " pod="openshift-marketplace/certified-operators-6th82" Jan 05 21:20:18 crc kubenswrapper[4754]: I0105 21:20:18.739176 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9r99\" (UniqueName: \"kubernetes.io/projected/80047deb-be50-448e-97fb-01e892539cce-kube-api-access-b9r99\") pod \"certified-operators-6th82\" (UID: \"80047deb-be50-448e-97fb-01e892539cce\") " pod="openshift-marketplace/certified-operators-6th82" Jan 05 21:20:18 crc 
kubenswrapper[4754]: I0105 21:20:18.842540 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9r99\" (UniqueName: \"kubernetes.io/projected/80047deb-be50-448e-97fb-01e892539cce-kube-api-access-b9r99\") pod \"certified-operators-6th82\" (UID: \"80047deb-be50-448e-97fb-01e892539cce\") " pod="openshift-marketplace/certified-operators-6th82" Jan 05 21:20:18 crc kubenswrapper[4754]: I0105 21:20:18.842720 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80047deb-be50-448e-97fb-01e892539cce-utilities\") pod \"certified-operators-6th82\" (UID: \"80047deb-be50-448e-97fb-01e892539cce\") " pod="openshift-marketplace/certified-operators-6th82" Jan 05 21:20:18 crc kubenswrapper[4754]: I0105 21:20:18.842765 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80047deb-be50-448e-97fb-01e892539cce-catalog-content\") pod \"certified-operators-6th82\" (UID: \"80047deb-be50-448e-97fb-01e892539cce\") " pod="openshift-marketplace/certified-operators-6th82" Jan 05 21:20:18 crc kubenswrapper[4754]: I0105 21:20:18.843415 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80047deb-be50-448e-97fb-01e892539cce-catalog-content\") pod \"certified-operators-6th82\" (UID: \"80047deb-be50-448e-97fb-01e892539cce\") " pod="openshift-marketplace/certified-operators-6th82" Jan 05 21:20:18 crc kubenswrapper[4754]: I0105 21:20:18.844168 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80047deb-be50-448e-97fb-01e892539cce-utilities\") pod \"certified-operators-6th82\" (UID: \"80047deb-be50-448e-97fb-01e892539cce\") " pod="openshift-marketplace/certified-operators-6th82" Jan 05 21:20:19 crc kubenswrapper[4754]: I0105 21:20:19.361000 
4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9r99\" (UniqueName: \"kubernetes.io/projected/80047deb-be50-448e-97fb-01e892539cce-kube-api-access-b9r99\") pod \"certified-operators-6th82\" (UID: \"80047deb-be50-448e-97fb-01e892539cce\") " pod="openshift-marketplace/certified-operators-6th82" Jan 05 21:20:19 crc kubenswrapper[4754]: I0105 21:20:19.575857 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6th82" Jan 05 21:20:19 crc kubenswrapper[4754]: I0105 21:20:19.589581 4754 scope.go:117] "RemoveContainer" containerID="a61c14866b8fdfc5f6e8df44a114ab350642f6465c5b6778e2e88cc75b78379c" Jan 05 21:20:19 crc kubenswrapper[4754]: E0105 21:20:19.590037 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:20:20 crc kubenswrapper[4754]: W0105 21:20:20.096122 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80047deb_be50_448e_97fb_01e892539cce.slice/crio-12177b5c9ba974b6f3ec9c2f689f448d4bf91d2ad9af9e890c7dbe74b4e9e433 WatchSource:0}: Error finding container 12177b5c9ba974b6f3ec9c2f689f448d4bf91d2ad9af9e890c7dbe74b4e9e433: Status 404 returned error can't find the container with id 12177b5c9ba974b6f3ec9c2f689f448d4bf91d2ad9af9e890c7dbe74b4e9e433 Jan 05 21:20:20 crc kubenswrapper[4754]: I0105 21:20:20.098690 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6th82"] Jan 05 21:20:20 crc kubenswrapper[4754]: I0105 21:20:20.625789 4754 generic.go:334] 
"Generic (PLEG): container finished" podID="80047deb-be50-448e-97fb-01e892539cce" containerID="600f4c34ffb585292108967c828ed4eface17864672cc679f6f93926fa2ae964" exitCode=0 Jan 05 21:20:20 crc kubenswrapper[4754]: I0105 21:20:20.625899 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6th82" event={"ID":"80047deb-be50-448e-97fb-01e892539cce","Type":"ContainerDied","Data":"600f4c34ffb585292108967c828ed4eface17864672cc679f6f93926fa2ae964"} Jan 05 21:20:20 crc kubenswrapper[4754]: I0105 21:20:20.626118 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6th82" event={"ID":"80047deb-be50-448e-97fb-01e892539cce","Type":"ContainerStarted","Data":"12177b5c9ba974b6f3ec9c2f689f448d4bf91d2ad9af9e890c7dbe74b4e9e433"} Jan 05 21:20:22 crc kubenswrapper[4754]: I0105 21:20:22.652258 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6th82" event={"ID":"80047deb-be50-448e-97fb-01e892539cce","Type":"ContainerStarted","Data":"426d254acb9a82cf716f2fba25ddf7da095ce9eccff46dc08c7a9c5de91e18b0"} Jan 05 21:20:23 crc kubenswrapper[4754]: I0105 21:20:23.669876 4754 generic.go:334] "Generic (PLEG): container finished" podID="80047deb-be50-448e-97fb-01e892539cce" containerID="426d254acb9a82cf716f2fba25ddf7da095ce9eccff46dc08c7a9c5de91e18b0" exitCode=0 Jan 05 21:20:23 crc kubenswrapper[4754]: I0105 21:20:23.669978 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6th82" event={"ID":"80047deb-be50-448e-97fb-01e892539cce","Type":"ContainerDied","Data":"426d254acb9a82cf716f2fba25ddf7da095ce9eccff46dc08c7a9c5de91e18b0"} Jan 05 21:20:23 crc kubenswrapper[4754]: I0105 21:20:23.670271 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6th82" 
event={"ID":"80047deb-be50-448e-97fb-01e892539cce","Type":"ContainerStarted","Data":"bfb770d1bc5919912c12a31aa62c375e72f0306e40470156fde7e58fbb2b0e37"} Jan 05 21:20:23 crc kubenswrapper[4754]: I0105 21:20:23.696637 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6th82" podStartSLOduration=3.243920688 podStartE2EDuration="5.6966172s" podCreationTimestamp="2026-01-05 21:20:18 +0000 UTC" firstStartedPulling="2026-01-05 21:20:20.628520596 +0000 UTC m=+4507.337704470" lastFinishedPulling="2026-01-05 21:20:23.081217108 +0000 UTC m=+4509.790400982" observedRunningTime="2026-01-05 21:20:23.693687443 +0000 UTC m=+4510.402871347" watchObservedRunningTime="2026-01-05 21:20:23.6966172 +0000 UTC m=+4510.405801084" Jan 05 21:20:29 crc kubenswrapper[4754]: I0105 21:20:29.576834 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6th82" Jan 05 21:20:29 crc kubenswrapper[4754]: I0105 21:20:29.577287 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6th82" Jan 05 21:20:29 crc kubenswrapper[4754]: I0105 21:20:29.631902 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6th82" Jan 05 21:20:29 crc kubenswrapper[4754]: I0105 21:20:29.806555 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6th82" Jan 05 21:20:29 crc kubenswrapper[4754]: I0105 21:20:29.868795 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6th82"] Jan 05 21:20:31 crc kubenswrapper[4754]: I0105 21:20:31.592736 4754 scope.go:117] "RemoveContainer" containerID="a61c14866b8fdfc5f6e8df44a114ab350642f6465c5b6778e2e88cc75b78379c" Jan 05 21:20:31 crc kubenswrapper[4754]: E0105 21:20:31.593249 4754 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:20:31 crc kubenswrapper[4754]: I0105 21:20:31.776954 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6th82" podUID="80047deb-be50-448e-97fb-01e892539cce" containerName="registry-server" containerID="cri-o://bfb770d1bc5919912c12a31aa62c375e72f0306e40470156fde7e58fbb2b0e37" gracePeriod=2 Jan 05 21:20:32 crc kubenswrapper[4754]: I0105 21:20:32.337137 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6th82" Jan 05 21:20:32 crc kubenswrapper[4754]: I0105 21:20:32.432059 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80047deb-be50-448e-97fb-01e892539cce-utilities\") pod \"80047deb-be50-448e-97fb-01e892539cce\" (UID: \"80047deb-be50-448e-97fb-01e892539cce\") " Jan 05 21:20:32 crc kubenswrapper[4754]: I0105 21:20:32.432312 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9r99\" (UniqueName: \"kubernetes.io/projected/80047deb-be50-448e-97fb-01e892539cce-kube-api-access-b9r99\") pod \"80047deb-be50-448e-97fb-01e892539cce\" (UID: \"80047deb-be50-448e-97fb-01e892539cce\") " Jan 05 21:20:32 crc kubenswrapper[4754]: I0105 21:20:32.432505 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80047deb-be50-448e-97fb-01e892539cce-catalog-content\") pod \"80047deb-be50-448e-97fb-01e892539cce\" (UID: 
\"80047deb-be50-448e-97fb-01e892539cce\") " Jan 05 21:20:32 crc kubenswrapper[4754]: I0105 21:20:32.437949 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80047deb-be50-448e-97fb-01e892539cce-utilities" (OuterVolumeSpecName: "utilities") pod "80047deb-be50-448e-97fb-01e892539cce" (UID: "80047deb-be50-448e-97fb-01e892539cce"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:20:32 crc kubenswrapper[4754]: I0105 21:20:32.445988 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80047deb-be50-448e-97fb-01e892539cce-kube-api-access-b9r99" (OuterVolumeSpecName: "kube-api-access-b9r99") pod "80047deb-be50-448e-97fb-01e892539cce" (UID: "80047deb-be50-448e-97fb-01e892539cce"). InnerVolumeSpecName "kube-api-access-b9r99". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:20:32 crc kubenswrapper[4754]: I0105 21:20:32.535277 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80047deb-be50-448e-97fb-01e892539cce-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 21:20:32 crc kubenswrapper[4754]: I0105 21:20:32.535345 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9r99\" (UniqueName: \"kubernetes.io/projected/80047deb-be50-448e-97fb-01e892539cce-kube-api-access-b9r99\") on node \"crc\" DevicePath \"\"" Jan 05 21:20:32 crc kubenswrapper[4754]: I0105 21:20:32.724105 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80047deb-be50-448e-97fb-01e892539cce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "80047deb-be50-448e-97fb-01e892539cce" (UID: "80047deb-be50-448e-97fb-01e892539cce"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:20:32 crc kubenswrapper[4754]: I0105 21:20:32.740990 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80047deb-be50-448e-97fb-01e892539cce-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 21:20:32 crc kubenswrapper[4754]: I0105 21:20:32.790104 4754 generic.go:334] "Generic (PLEG): container finished" podID="80047deb-be50-448e-97fb-01e892539cce" containerID="bfb770d1bc5919912c12a31aa62c375e72f0306e40470156fde7e58fbb2b0e37" exitCode=0 Jan 05 21:20:32 crc kubenswrapper[4754]: I0105 21:20:32.790158 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6th82" Jan 05 21:20:32 crc kubenswrapper[4754]: I0105 21:20:32.790174 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6th82" event={"ID":"80047deb-be50-448e-97fb-01e892539cce","Type":"ContainerDied","Data":"bfb770d1bc5919912c12a31aa62c375e72f0306e40470156fde7e58fbb2b0e37"} Jan 05 21:20:32 crc kubenswrapper[4754]: I0105 21:20:32.790257 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6th82" event={"ID":"80047deb-be50-448e-97fb-01e892539cce","Type":"ContainerDied","Data":"12177b5c9ba974b6f3ec9c2f689f448d4bf91d2ad9af9e890c7dbe74b4e9e433"} Jan 05 21:20:32 crc kubenswrapper[4754]: I0105 21:20:32.790322 4754 scope.go:117] "RemoveContainer" containerID="bfb770d1bc5919912c12a31aa62c375e72f0306e40470156fde7e58fbb2b0e37" Jan 05 21:20:32 crc kubenswrapper[4754]: I0105 21:20:32.847440 4754 scope.go:117] "RemoveContainer" containerID="426d254acb9a82cf716f2fba25ddf7da095ce9eccff46dc08c7a9c5de91e18b0" Jan 05 21:20:32 crc kubenswrapper[4754]: I0105 21:20:32.849361 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6th82"] Jan 05 21:20:32 crc kubenswrapper[4754]: 
I0105 21:20:32.880105 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6th82"] Jan 05 21:20:32 crc kubenswrapper[4754]: I0105 21:20:32.949856 4754 scope.go:117] "RemoveContainer" containerID="600f4c34ffb585292108967c828ed4eface17864672cc679f6f93926fa2ae964" Jan 05 21:20:32 crc kubenswrapper[4754]: I0105 21:20:32.974748 4754 scope.go:117] "RemoveContainer" containerID="bfb770d1bc5919912c12a31aa62c375e72f0306e40470156fde7e58fbb2b0e37" Jan 05 21:20:32 crc kubenswrapper[4754]: E0105 21:20:32.975127 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfb770d1bc5919912c12a31aa62c375e72f0306e40470156fde7e58fbb2b0e37\": container with ID starting with bfb770d1bc5919912c12a31aa62c375e72f0306e40470156fde7e58fbb2b0e37 not found: ID does not exist" containerID="bfb770d1bc5919912c12a31aa62c375e72f0306e40470156fde7e58fbb2b0e37" Jan 05 21:20:32 crc kubenswrapper[4754]: I0105 21:20:32.975184 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfb770d1bc5919912c12a31aa62c375e72f0306e40470156fde7e58fbb2b0e37"} err="failed to get container status \"bfb770d1bc5919912c12a31aa62c375e72f0306e40470156fde7e58fbb2b0e37\": rpc error: code = NotFound desc = could not find container \"bfb770d1bc5919912c12a31aa62c375e72f0306e40470156fde7e58fbb2b0e37\": container with ID starting with bfb770d1bc5919912c12a31aa62c375e72f0306e40470156fde7e58fbb2b0e37 not found: ID does not exist" Jan 05 21:20:32 crc kubenswrapper[4754]: I0105 21:20:32.975216 4754 scope.go:117] "RemoveContainer" containerID="426d254acb9a82cf716f2fba25ddf7da095ce9eccff46dc08c7a9c5de91e18b0" Jan 05 21:20:32 crc kubenswrapper[4754]: E0105 21:20:32.975537 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"426d254acb9a82cf716f2fba25ddf7da095ce9eccff46dc08c7a9c5de91e18b0\": container 
with ID starting with 426d254acb9a82cf716f2fba25ddf7da095ce9eccff46dc08c7a9c5de91e18b0 not found: ID does not exist" containerID="426d254acb9a82cf716f2fba25ddf7da095ce9eccff46dc08c7a9c5de91e18b0" Jan 05 21:20:32 crc kubenswrapper[4754]: I0105 21:20:32.975565 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"426d254acb9a82cf716f2fba25ddf7da095ce9eccff46dc08c7a9c5de91e18b0"} err="failed to get container status \"426d254acb9a82cf716f2fba25ddf7da095ce9eccff46dc08c7a9c5de91e18b0\": rpc error: code = NotFound desc = could not find container \"426d254acb9a82cf716f2fba25ddf7da095ce9eccff46dc08c7a9c5de91e18b0\": container with ID starting with 426d254acb9a82cf716f2fba25ddf7da095ce9eccff46dc08c7a9c5de91e18b0 not found: ID does not exist" Jan 05 21:20:32 crc kubenswrapper[4754]: I0105 21:20:32.975584 4754 scope.go:117] "RemoveContainer" containerID="600f4c34ffb585292108967c828ed4eface17864672cc679f6f93926fa2ae964" Jan 05 21:20:32 crc kubenswrapper[4754]: E0105 21:20:32.975871 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"600f4c34ffb585292108967c828ed4eface17864672cc679f6f93926fa2ae964\": container with ID starting with 600f4c34ffb585292108967c828ed4eface17864672cc679f6f93926fa2ae964 not found: ID does not exist" containerID="600f4c34ffb585292108967c828ed4eface17864672cc679f6f93926fa2ae964" Jan 05 21:20:32 crc kubenswrapper[4754]: I0105 21:20:32.975920 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"600f4c34ffb585292108967c828ed4eface17864672cc679f6f93926fa2ae964"} err="failed to get container status \"600f4c34ffb585292108967c828ed4eface17864672cc679f6f93926fa2ae964\": rpc error: code = NotFound desc = could not find container \"600f4c34ffb585292108967c828ed4eface17864672cc679f6f93926fa2ae964\": container with ID starting with 600f4c34ffb585292108967c828ed4eface17864672cc679f6f93926fa2ae964 not 
found: ID does not exist" Jan 05 21:20:33 crc kubenswrapper[4754]: I0105 21:20:33.601705 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80047deb-be50-448e-97fb-01e892539cce" path="/var/lib/kubelet/pods/80047deb-be50-448e-97fb-01e892539cce/volumes" Jan 05 21:20:45 crc kubenswrapper[4754]: I0105 21:20:45.590261 4754 scope.go:117] "RemoveContainer" containerID="a61c14866b8fdfc5f6e8df44a114ab350642f6465c5b6778e2e88cc75b78379c" Jan 05 21:20:45 crc kubenswrapper[4754]: E0105 21:20:45.591342 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:20:56 crc kubenswrapper[4754]: I0105 21:20:56.588490 4754 scope.go:117] "RemoveContainer" containerID="a61c14866b8fdfc5f6e8df44a114ab350642f6465c5b6778e2e88cc75b78379c" Jan 05 21:20:56 crc kubenswrapper[4754]: E0105 21:20:56.589378 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:21:08 crc kubenswrapper[4754]: I0105 21:21:08.589071 4754 scope.go:117] "RemoveContainer" containerID="a61c14866b8fdfc5f6e8df44a114ab350642f6465c5b6778e2e88cc75b78379c" Jan 05 21:21:08 crc kubenswrapper[4754]: E0105 21:21:08.590036 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:21:23 crc kubenswrapper[4754]: I0105 21:21:23.598804 4754 scope.go:117] "RemoveContainer" containerID="a61c14866b8fdfc5f6e8df44a114ab350642f6465c5b6778e2e88cc75b78379c" Jan 05 21:21:25 crc kubenswrapper[4754]: I0105 21:21:25.414923 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" event={"ID":"1ce145f2-f010-4086-963c-23e68ff9e280","Type":"ContainerStarted","Data":"5923693f5d3d768dc591e83cb5d782493d58b5e1da8c9d7bbe86eb21c454d8f3"} Jan 05 21:23:48 crc kubenswrapper[4754]: I0105 21:23:48.109465 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:23:48 crc kubenswrapper[4754]: I0105 21:23:48.110117 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:24:18 crc kubenswrapper[4754]: I0105 21:24:18.109950 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:24:18 crc kubenswrapper[4754]: I0105 21:24:18.110489 4754 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:24:48 crc kubenswrapper[4754]: I0105 21:24:48.109187 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:24:48 crc kubenswrapper[4754]: I0105 21:24:48.110147 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:24:48 crc kubenswrapper[4754]: I0105 21:24:48.110217 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" Jan 05 21:24:48 crc kubenswrapper[4754]: I0105 21:24:48.111814 4754 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5923693f5d3d768dc591e83cb5d782493d58b5e1da8c9d7bbe86eb21c454d8f3"} pod="openshift-machine-config-operator/machine-config-daemon-pkzls" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 05 21:24:48 crc kubenswrapper[4754]: I0105 21:24:48.111898 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" 
containerName="machine-config-daemon" containerID="cri-o://5923693f5d3d768dc591e83cb5d782493d58b5e1da8c9d7bbe86eb21c454d8f3" gracePeriod=600 Jan 05 21:24:48 crc kubenswrapper[4754]: I0105 21:24:48.575506 4754 generic.go:334] "Generic (PLEG): container finished" podID="1ce145f2-f010-4086-963c-23e68ff9e280" containerID="5923693f5d3d768dc591e83cb5d782493d58b5e1da8c9d7bbe86eb21c454d8f3" exitCode=0 Jan 05 21:24:48 crc kubenswrapper[4754]: I0105 21:24:48.575563 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" event={"ID":"1ce145f2-f010-4086-963c-23e68ff9e280","Type":"ContainerDied","Data":"5923693f5d3d768dc591e83cb5d782493d58b5e1da8c9d7bbe86eb21c454d8f3"} Jan 05 21:24:48 crc kubenswrapper[4754]: I0105 21:24:48.575591 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" event={"ID":"1ce145f2-f010-4086-963c-23e68ff9e280","Type":"ContainerStarted","Data":"83af48b757fa31e7d8bbec5ac045d2ac5b9a51634d67308251b16fb56a7e25b4"} Jan 05 21:24:48 crc kubenswrapper[4754]: I0105 21:24:48.575621 4754 scope.go:117] "RemoveContainer" containerID="a61c14866b8fdfc5f6e8df44a114ab350642f6465c5b6778e2e88cc75b78379c" Jan 05 21:25:14 crc kubenswrapper[4754]: I0105 21:25:14.613976 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7ljls"] Jan 05 21:25:14 crc kubenswrapper[4754]: E0105 21:25:14.615819 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80047deb-be50-448e-97fb-01e892539cce" containerName="registry-server" Jan 05 21:25:14 crc kubenswrapper[4754]: I0105 21:25:14.615896 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="80047deb-be50-448e-97fb-01e892539cce" containerName="registry-server" Jan 05 21:25:14 crc kubenswrapper[4754]: E0105 21:25:14.615971 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80047deb-be50-448e-97fb-01e892539cce" 
containerName="extract-content" Jan 05 21:25:14 crc kubenswrapper[4754]: I0105 21:25:14.616036 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="80047deb-be50-448e-97fb-01e892539cce" containerName="extract-content" Jan 05 21:25:14 crc kubenswrapper[4754]: E0105 21:25:14.616132 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80047deb-be50-448e-97fb-01e892539cce" containerName="extract-utilities" Jan 05 21:25:14 crc kubenswrapper[4754]: I0105 21:25:14.616192 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="80047deb-be50-448e-97fb-01e892539cce" containerName="extract-utilities" Jan 05 21:25:14 crc kubenswrapper[4754]: I0105 21:25:14.616535 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="80047deb-be50-448e-97fb-01e892539cce" containerName="registry-server" Jan 05 21:25:14 crc kubenswrapper[4754]: I0105 21:25:14.619697 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7ljls" Jan 05 21:25:14 crc kubenswrapper[4754]: I0105 21:25:14.641324 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7ljls"] Jan 05 21:25:14 crc kubenswrapper[4754]: I0105 21:25:14.696698 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/445c23dc-2032-4cc5-a69e-fe5e1b5190c3-utilities\") pod \"community-operators-7ljls\" (UID: \"445c23dc-2032-4cc5-a69e-fe5e1b5190c3\") " pod="openshift-marketplace/community-operators-7ljls" Jan 05 21:25:14 crc kubenswrapper[4754]: I0105 21:25:14.696835 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbjpl\" (UniqueName: \"kubernetes.io/projected/445c23dc-2032-4cc5-a69e-fe5e1b5190c3-kube-api-access-cbjpl\") pod \"community-operators-7ljls\" (UID: \"445c23dc-2032-4cc5-a69e-fe5e1b5190c3\") " 
pod="openshift-marketplace/community-operators-7ljls" Jan 05 21:25:14 crc kubenswrapper[4754]: I0105 21:25:14.696883 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/445c23dc-2032-4cc5-a69e-fe5e1b5190c3-catalog-content\") pod \"community-operators-7ljls\" (UID: \"445c23dc-2032-4cc5-a69e-fe5e1b5190c3\") " pod="openshift-marketplace/community-operators-7ljls" Jan 05 21:25:14 crc kubenswrapper[4754]: I0105 21:25:14.800959 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/445c23dc-2032-4cc5-a69e-fe5e1b5190c3-utilities\") pod \"community-operators-7ljls\" (UID: \"445c23dc-2032-4cc5-a69e-fe5e1b5190c3\") " pod="openshift-marketplace/community-operators-7ljls" Jan 05 21:25:14 crc kubenswrapper[4754]: I0105 21:25:14.801080 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbjpl\" (UniqueName: \"kubernetes.io/projected/445c23dc-2032-4cc5-a69e-fe5e1b5190c3-kube-api-access-cbjpl\") pod \"community-operators-7ljls\" (UID: \"445c23dc-2032-4cc5-a69e-fe5e1b5190c3\") " pod="openshift-marketplace/community-operators-7ljls" Jan 05 21:25:14 crc kubenswrapper[4754]: I0105 21:25:14.801123 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/445c23dc-2032-4cc5-a69e-fe5e1b5190c3-catalog-content\") pod \"community-operators-7ljls\" (UID: \"445c23dc-2032-4cc5-a69e-fe5e1b5190c3\") " pod="openshift-marketplace/community-operators-7ljls" Jan 05 21:25:14 crc kubenswrapper[4754]: I0105 21:25:14.801599 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/445c23dc-2032-4cc5-a69e-fe5e1b5190c3-utilities\") pod \"community-operators-7ljls\" (UID: \"445c23dc-2032-4cc5-a69e-fe5e1b5190c3\") " 
pod="openshift-marketplace/community-operators-7ljls" Jan 05 21:25:14 crc kubenswrapper[4754]: I0105 21:25:14.801739 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/445c23dc-2032-4cc5-a69e-fe5e1b5190c3-catalog-content\") pod \"community-operators-7ljls\" (UID: \"445c23dc-2032-4cc5-a69e-fe5e1b5190c3\") " pod="openshift-marketplace/community-operators-7ljls" Jan 05 21:25:14 crc kubenswrapper[4754]: I0105 21:25:14.952239 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbjpl\" (UniqueName: \"kubernetes.io/projected/445c23dc-2032-4cc5-a69e-fe5e1b5190c3-kube-api-access-cbjpl\") pod \"community-operators-7ljls\" (UID: \"445c23dc-2032-4cc5-a69e-fe5e1b5190c3\") " pod="openshift-marketplace/community-operators-7ljls" Jan 05 21:25:15 crc kubenswrapper[4754]: I0105 21:25:15.246040 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7ljls" Jan 05 21:25:15 crc kubenswrapper[4754]: I0105 21:25:15.756100 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7ljls"] Jan 05 21:25:15 crc kubenswrapper[4754]: I0105 21:25:15.906451 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7ljls" event={"ID":"445c23dc-2032-4cc5-a69e-fe5e1b5190c3","Type":"ContainerStarted","Data":"17f9be2813054762d4ec86079cc34a15c1eddeaf4d5d49b5f4d5c280e359ba34"} Jan 05 21:25:16 crc kubenswrapper[4754]: I0105 21:25:16.918917 4754 generic.go:334] "Generic (PLEG): container finished" podID="445c23dc-2032-4cc5-a69e-fe5e1b5190c3" containerID="da7d044e0b13feb08a8f356266e9ea409fa293984d67cd6716235f5d701717e2" exitCode=0 Jan 05 21:25:16 crc kubenswrapper[4754]: I0105 21:25:16.918981 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7ljls" 
event={"ID":"445c23dc-2032-4cc5-a69e-fe5e1b5190c3","Type":"ContainerDied","Data":"da7d044e0b13feb08a8f356266e9ea409fa293984d67cd6716235f5d701717e2"} Jan 05 21:25:16 crc kubenswrapper[4754]: I0105 21:25:16.922314 4754 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 05 21:25:18 crc kubenswrapper[4754]: I0105 21:25:18.943475 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7ljls" event={"ID":"445c23dc-2032-4cc5-a69e-fe5e1b5190c3","Type":"ContainerStarted","Data":"bfce3f3472ba12b70023c661a25afdbac2512a537b337ccf85d030d33c13687b"} Jan 05 21:25:21 crc kubenswrapper[4754]: I0105 21:25:21.979931 4754 generic.go:334] "Generic (PLEG): container finished" podID="445c23dc-2032-4cc5-a69e-fe5e1b5190c3" containerID="bfce3f3472ba12b70023c661a25afdbac2512a537b337ccf85d030d33c13687b" exitCode=0 Jan 05 21:25:21 crc kubenswrapper[4754]: I0105 21:25:21.980050 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7ljls" event={"ID":"445c23dc-2032-4cc5-a69e-fe5e1b5190c3","Type":"ContainerDied","Data":"bfce3f3472ba12b70023c661a25afdbac2512a537b337ccf85d030d33c13687b"} Jan 05 21:25:24 crc kubenswrapper[4754]: I0105 21:25:24.006919 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7ljls" event={"ID":"445c23dc-2032-4cc5-a69e-fe5e1b5190c3","Type":"ContainerStarted","Data":"be3345b49df0b97010547ea79bc564ecfa46dbe83f1befda129f23d938a41376"} Jan 05 21:25:25 crc kubenswrapper[4754]: I0105 21:25:25.048937 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7ljls" podStartSLOduration=4.906443866 podStartE2EDuration="11.048917701s" podCreationTimestamp="2026-01-05 21:25:14 +0000 UTC" firstStartedPulling="2026-01-05 21:25:16.922033601 +0000 UTC m=+4803.631217485" lastFinishedPulling="2026-01-05 21:25:23.064507456 +0000 UTC 
m=+4809.773691320" observedRunningTime="2026-01-05 21:25:25.044443404 +0000 UTC m=+4811.753627278" watchObservedRunningTime="2026-01-05 21:25:25.048917701 +0000 UTC m=+4811.758101575" Jan 05 21:25:25 crc kubenswrapper[4754]: I0105 21:25:25.247179 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7ljls" Jan 05 21:25:25 crc kubenswrapper[4754]: I0105 21:25:25.247235 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7ljls" Jan 05 21:25:26 crc kubenswrapper[4754]: I0105 21:25:26.593122 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-7ljls" podUID="445c23dc-2032-4cc5-a69e-fe5e1b5190c3" containerName="registry-server" probeResult="failure" output=< Jan 05 21:25:26 crc kubenswrapper[4754]: timeout: failed to connect service ":50051" within 1s Jan 05 21:25:26 crc kubenswrapper[4754]: > Jan 05 21:25:35 crc kubenswrapper[4754]: I0105 21:25:35.307693 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7ljls" Jan 05 21:25:35 crc kubenswrapper[4754]: I0105 21:25:35.376213 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7ljls" Jan 05 21:25:36 crc kubenswrapper[4754]: I0105 21:25:36.493217 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7ljls"] Jan 05 21:25:37 crc kubenswrapper[4754]: I0105 21:25:37.148831 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7ljls" podUID="445c23dc-2032-4cc5-a69e-fe5e1b5190c3" containerName="registry-server" containerID="cri-o://be3345b49df0b97010547ea79bc564ecfa46dbe83f1befda129f23d938a41376" gracePeriod=2 Jan 05 21:25:37 crc kubenswrapper[4754]: I0105 21:25:37.691987 4754 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7ljls" Jan 05 21:25:37 crc kubenswrapper[4754]: I0105 21:25:37.833812 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbjpl\" (UniqueName: \"kubernetes.io/projected/445c23dc-2032-4cc5-a69e-fe5e1b5190c3-kube-api-access-cbjpl\") pod \"445c23dc-2032-4cc5-a69e-fe5e1b5190c3\" (UID: \"445c23dc-2032-4cc5-a69e-fe5e1b5190c3\") " Jan 05 21:25:37 crc kubenswrapper[4754]: I0105 21:25:37.834011 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/445c23dc-2032-4cc5-a69e-fe5e1b5190c3-catalog-content\") pod \"445c23dc-2032-4cc5-a69e-fe5e1b5190c3\" (UID: \"445c23dc-2032-4cc5-a69e-fe5e1b5190c3\") " Jan 05 21:25:37 crc kubenswrapper[4754]: I0105 21:25:37.834098 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/445c23dc-2032-4cc5-a69e-fe5e1b5190c3-utilities\") pod \"445c23dc-2032-4cc5-a69e-fe5e1b5190c3\" (UID: \"445c23dc-2032-4cc5-a69e-fe5e1b5190c3\") " Jan 05 21:25:37 crc kubenswrapper[4754]: I0105 21:25:37.835204 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/445c23dc-2032-4cc5-a69e-fe5e1b5190c3-utilities" (OuterVolumeSpecName: "utilities") pod "445c23dc-2032-4cc5-a69e-fe5e1b5190c3" (UID: "445c23dc-2032-4cc5-a69e-fe5e1b5190c3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:25:37 crc kubenswrapper[4754]: I0105 21:25:37.848519 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/445c23dc-2032-4cc5-a69e-fe5e1b5190c3-kube-api-access-cbjpl" (OuterVolumeSpecName: "kube-api-access-cbjpl") pod "445c23dc-2032-4cc5-a69e-fe5e1b5190c3" (UID: "445c23dc-2032-4cc5-a69e-fe5e1b5190c3"). 
InnerVolumeSpecName "kube-api-access-cbjpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:25:37 crc kubenswrapper[4754]: I0105 21:25:37.917928 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/445c23dc-2032-4cc5-a69e-fe5e1b5190c3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "445c23dc-2032-4cc5-a69e-fe5e1b5190c3" (UID: "445c23dc-2032-4cc5-a69e-fe5e1b5190c3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:25:37 crc kubenswrapper[4754]: I0105 21:25:37.936530 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/445c23dc-2032-4cc5-a69e-fe5e1b5190c3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 21:25:37 crc kubenswrapper[4754]: I0105 21:25:37.936562 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/445c23dc-2032-4cc5-a69e-fe5e1b5190c3-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 21:25:37 crc kubenswrapper[4754]: I0105 21:25:37.936572 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbjpl\" (UniqueName: \"kubernetes.io/projected/445c23dc-2032-4cc5-a69e-fe5e1b5190c3-kube-api-access-cbjpl\") on node \"crc\" DevicePath \"\"" Jan 05 21:25:38 crc kubenswrapper[4754]: I0105 21:25:38.162590 4754 generic.go:334] "Generic (PLEG): container finished" podID="445c23dc-2032-4cc5-a69e-fe5e1b5190c3" containerID="be3345b49df0b97010547ea79bc564ecfa46dbe83f1befda129f23d938a41376" exitCode=0 Jan 05 21:25:38 crc kubenswrapper[4754]: I0105 21:25:38.162644 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7ljls" event={"ID":"445c23dc-2032-4cc5-a69e-fe5e1b5190c3","Type":"ContainerDied","Data":"be3345b49df0b97010547ea79bc564ecfa46dbe83f1befda129f23d938a41376"} Jan 05 21:25:38 crc kubenswrapper[4754]: I0105 21:25:38.162662 
4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7ljls" Jan 05 21:25:38 crc kubenswrapper[4754]: I0105 21:25:38.162682 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7ljls" event={"ID":"445c23dc-2032-4cc5-a69e-fe5e1b5190c3","Type":"ContainerDied","Data":"17f9be2813054762d4ec86079cc34a15c1eddeaf4d5d49b5f4d5c280e359ba34"} Jan 05 21:25:38 crc kubenswrapper[4754]: I0105 21:25:38.162703 4754 scope.go:117] "RemoveContainer" containerID="be3345b49df0b97010547ea79bc564ecfa46dbe83f1befda129f23d938a41376" Jan 05 21:25:38 crc kubenswrapper[4754]: I0105 21:25:38.204048 4754 scope.go:117] "RemoveContainer" containerID="bfce3f3472ba12b70023c661a25afdbac2512a537b337ccf85d030d33c13687b" Jan 05 21:25:38 crc kubenswrapper[4754]: I0105 21:25:38.221696 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7ljls"] Jan 05 21:25:38 crc kubenswrapper[4754]: I0105 21:25:38.235578 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7ljls"] Jan 05 21:25:38 crc kubenswrapper[4754]: I0105 21:25:38.236108 4754 scope.go:117] "RemoveContainer" containerID="da7d044e0b13feb08a8f356266e9ea409fa293984d67cd6716235f5d701717e2" Jan 05 21:25:38 crc kubenswrapper[4754]: I0105 21:25:38.297772 4754 scope.go:117] "RemoveContainer" containerID="be3345b49df0b97010547ea79bc564ecfa46dbe83f1befda129f23d938a41376" Jan 05 21:25:38 crc kubenswrapper[4754]: E0105 21:25:38.298190 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be3345b49df0b97010547ea79bc564ecfa46dbe83f1befda129f23d938a41376\": container with ID starting with be3345b49df0b97010547ea79bc564ecfa46dbe83f1befda129f23d938a41376 not found: ID does not exist" containerID="be3345b49df0b97010547ea79bc564ecfa46dbe83f1befda129f23d938a41376" Jan 05 21:25:38 
crc kubenswrapper[4754]: I0105 21:25:38.298244 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be3345b49df0b97010547ea79bc564ecfa46dbe83f1befda129f23d938a41376"} err="failed to get container status \"be3345b49df0b97010547ea79bc564ecfa46dbe83f1befda129f23d938a41376\": rpc error: code = NotFound desc = could not find container \"be3345b49df0b97010547ea79bc564ecfa46dbe83f1befda129f23d938a41376\": container with ID starting with be3345b49df0b97010547ea79bc564ecfa46dbe83f1befda129f23d938a41376 not found: ID does not exist" Jan 05 21:25:38 crc kubenswrapper[4754]: I0105 21:25:38.298271 4754 scope.go:117] "RemoveContainer" containerID="bfce3f3472ba12b70023c661a25afdbac2512a537b337ccf85d030d33c13687b" Jan 05 21:25:38 crc kubenswrapper[4754]: E0105 21:25:38.298732 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfce3f3472ba12b70023c661a25afdbac2512a537b337ccf85d030d33c13687b\": container with ID starting with bfce3f3472ba12b70023c661a25afdbac2512a537b337ccf85d030d33c13687b not found: ID does not exist" containerID="bfce3f3472ba12b70023c661a25afdbac2512a537b337ccf85d030d33c13687b" Jan 05 21:25:38 crc kubenswrapper[4754]: I0105 21:25:38.298764 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfce3f3472ba12b70023c661a25afdbac2512a537b337ccf85d030d33c13687b"} err="failed to get container status \"bfce3f3472ba12b70023c661a25afdbac2512a537b337ccf85d030d33c13687b\": rpc error: code = NotFound desc = could not find container \"bfce3f3472ba12b70023c661a25afdbac2512a537b337ccf85d030d33c13687b\": container with ID starting with bfce3f3472ba12b70023c661a25afdbac2512a537b337ccf85d030d33c13687b not found: ID does not exist" Jan 05 21:25:38 crc kubenswrapper[4754]: I0105 21:25:38.298807 4754 scope.go:117] "RemoveContainer" containerID="da7d044e0b13feb08a8f356266e9ea409fa293984d67cd6716235f5d701717e2" Jan 05 
21:25:38 crc kubenswrapper[4754]: E0105 21:25:38.299187 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da7d044e0b13feb08a8f356266e9ea409fa293984d67cd6716235f5d701717e2\": container with ID starting with da7d044e0b13feb08a8f356266e9ea409fa293984d67cd6716235f5d701717e2 not found: ID does not exist" containerID="da7d044e0b13feb08a8f356266e9ea409fa293984d67cd6716235f5d701717e2" Jan 05 21:25:38 crc kubenswrapper[4754]: I0105 21:25:38.299220 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da7d044e0b13feb08a8f356266e9ea409fa293984d67cd6716235f5d701717e2"} err="failed to get container status \"da7d044e0b13feb08a8f356266e9ea409fa293984d67cd6716235f5d701717e2\": rpc error: code = NotFound desc = could not find container \"da7d044e0b13feb08a8f356266e9ea409fa293984d67cd6716235f5d701717e2\": container with ID starting with da7d044e0b13feb08a8f356266e9ea409fa293984d67cd6716235f5d701717e2 not found: ID does not exist" Jan 05 21:25:39 crc kubenswrapper[4754]: I0105 21:25:39.602345 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="445c23dc-2032-4cc5-a69e-fe5e1b5190c3" path="/var/lib/kubelet/pods/445c23dc-2032-4cc5-a69e-fe5e1b5190c3/volumes" Jan 05 21:26:48 crc kubenswrapper[4754]: I0105 21:26:48.109273 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:26:48 crc kubenswrapper[4754]: I0105 21:26:48.110718 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:27:18 crc kubenswrapper[4754]: I0105 21:27:18.109847 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:27:18 crc kubenswrapper[4754]: I0105 21:27:18.110405 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:27:48 crc kubenswrapper[4754]: I0105 21:27:48.108870 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:27:48 crc kubenswrapper[4754]: I0105 21:27:48.109474 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:27:48 crc kubenswrapper[4754]: I0105 21:27:48.109525 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" Jan 05 21:27:48 crc kubenswrapper[4754]: I0105 21:27:48.110486 4754 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"83af48b757fa31e7d8bbec5ac045d2ac5b9a51634d67308251b16fb56a7e25b4"} pod="openshift-machine-config-operator/machine-config-daemon-pkzls" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 05 21:27:48 crc kubenswrapper[4754]: I0105 21:27:48.110544 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" containerID="cri-o://83af48b757fa31e7d8bbec5ac045d2ac5b9a51634d67308251b16fb56a7e25b4" gracePeriod=600 Jan 05 21:27:48 crc kubenswrapper[4754]: E0105 21:27:48.239180 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:27:48 crc kubenswrapper[4754]: E0105 21:27:48.283130 4754 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ce145f2_f010_4086_963c_23e68ff9e280.slice/crio-83af48b757fa31e7d8bbec5ac045d2ac5b9a51634d67308251b16fb56a7e25b4.scope\": RecentStats: unable to find data in memory cache]" Jan 05 21:27:48 crc kubenswrapper[4754]: E0105 21:27:48.283178 4754 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ce145f2_f010_4086_963c_23e68ff9e280.slice/crio-83af48b757fa31e7d8bbec5ac045d2ac5b9a51634d67308251b16fb56a7e25b4.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ce145f2_f010_4086_963c_23e68ff9e280.slice/crio-conmon-83af48b757fa31e7d8bbec5ac045d2ac5b9a51634d67308251b16fb56a7e25b4.scope\": RecentStats: unable to find data in memory cache]" Jan 05 21:27:48 crc kubenswrapper[4754]: I0105 21:27:48.967228 4754 generic.go:334] "Generic (PLEG): container finished" podID="1ce145f2-f010-4086-963c-23e68ff9e280" containerID="83af48b757fa31e7d8bbec5ac045d2ac5b9a51634d67308251b16fb56a7e25b4" exitCode=0 Jan 05 21:27:48 crc kubenswrapper[4754]: I0105 21:27:48.967333 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" event={"ID":"1ce145f2-f010-4086-963c-23e68ff9e280","Type":"ContainerDied","Data":"83af48b757fa31e7d8bbec5ac045d2ac5b9a51634d67308251b16fb56a7e25b4"} Jan 05 21:27:48 crc kubenswrapper[4754]: I0105 21:27:48.967579 4754 scope.go:117] "RemoveContainer" containerID="5923693f5d3d768dc591e83cb5d782493d58b5e1da8c9d7bbe86eb21c454d8f3" Jan 05 21:27:48 crc kubenswrapper[4754]: I0105 21:27:48.969319 4754 scope.go:117] "RemoveContainer" containerID="83af48b757fa31e7d8bbec5ac045d2ac5b9a51634d67308251b16fb56a7e25b4" Jan 05 21:27:48 crc kubenswrapper[4754]: E0105 21:27:48.970093 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:28:03 crc kubenswrapper[4754]: I0105 21:28:03.612314 4754 scope.go:117] "RemoveContainer" containerID="83af48b757fa31e7d8bbec5ac045d2ac5b9a51634d67308251b16fb56a7e25b4" Jan 05 21:28:03 crc kubenswrapper[4754]: E0105 21:28:03.614073 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:28:17 crc kubenswrapper[4754]: I0105 21:28:17.589147 4754 scope.go:117] "RemoveContainer" containerID="83af48b757fa31e7d8bbec5ac045d2ac5b9a51634d67308251b16fb56a7e25b4" Jan 05 21:28:17 crc kubenswrapper[4754]: E0105 21:28:17.590656 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:28:19 crc kubenswrapper[4754]: I0105 21:28:19.029882 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 05 21:28:19 crc kubenswrapper[4754]: E0105 21:28:19.030860 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="445c23dc-2032-4cc5-a69e-fe5e1b5190c3" containerName="extract-content" Jan 05 21:28:19 crc kubenswrapper[4754]: I0105 21:28:19.030882 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="445c23dc-2032-4cc5-a69e-fe5e1b5190c3" containerName="extract-content" Jan 05 21:28:19 crc kubenswrapper[4754]: E0105 21:28:19.030918 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="445c23dc-2032-4cc5-a69e-fe5e1b5190c3" containerName="registry-server" Jan 05 21:28:19 crc kubenswrapper[4754]: I0105 21:28:19.030927 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="445c23dc-2032-4cc5-a69e-fe5e1b5190c3" containerName="registry-server" Jan 05 21:28:19 
crc kubenswrapper[4754]: E0105 21:28:19.030954 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="445c23dc-2032-4cc5-a69e-fe5e1b5190c3" containerName="extract-utilities" Jan 05 21:28:19 crc kubenswrapper[4754]: I0105 21:28:19.030962 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="445c23dc-2032-4cc5-a69e-fe5e1b5190c3" containerName="extract-utilities" Jan 05 21:28:19 crc kubenswrapper[4754]: I0105 21:28:19.031247 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="445c23dc-2032-4cc5-a69e-fe5e1b5190c3" containerName="registry-server" Jan 05 21:28:19 crc kubenswrapper[4754]: I0105 21:28:19.032306 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 05 21:28:19 crc kubenswrapper[4754]: I0105 21:28:19.037022 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 05 21:28:19 crc kubenswrapper[4754]: I0105 21:28:19.037064 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 05 21:28:19 crc kubenswrapper[4754]: I0105 21:28:19.037245 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-fj5nz" Jan 05 21:28:19 crc kubenswrapper[4754]: I0105 21:28:19.038597 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 05 21:28:19 crc kubenswrapper[4754]: I0105 21:28:19.042851 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 05 21:28:19 crc kubenswrapper[4754]: I0105 21:28:19.074361 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lmq8\" (UniqueName: \"kubernetes.io/projected/bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee-kube-api-access-8lmq8\") pod \"tempest-tests-tempest\" (UID: 
\"bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee\") " pod="openstack/tempest-tests-tempest" Jan 05 21:28:19 crc kubenswrapper[4754]: I0105 21:28:19.074482 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee\") " pod="openstack/tempest-tests-tempest" Jan 05 21:28:19 crc kubenswrapper[4754]: I0105 21:28:19.074520 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee\") " pod="openstack/tempest-tests-tempest" Jan 05 21:28:19 crc kubenswrapper[4754]: I0105 21:28:19.074550 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee\") " pod="openstack/tempest-tests-tempest" Jan 05 21:28:19 crc kubenswrapper[4754]: I0105 21:28:19.074885 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee-config-data\") pod \"tempest-tests-tempest\" (UID: \"bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee\") " pod="openstack/tempest-tests-tempest" Jan 05 21:28:19 crc kubenswrapper[4754]: I0105 21:28:19.075004 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee-ssh-key\") 
pod \"tempest-tests-tempest\" (UID: \"bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee\") " pod="openstack/tempest-tests-tempest" Jan 05 21:28:19 crc kubenswrapper[4754]: I0105 21:28:19.075072 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee\") " pod="openstack/tempest-tests-tempest" Jan 05 21:28:19 crc kubenswrapper[4754]: I0105 21:28:19.075119 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee\") " pod="openstack/tempest-tests-tempest" Jan 05 21:28:19 crc kubenswrapper[4754]: I0105 21:28:19.075161 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee\") " pod="openstack/tempest-tests-tempest" Jan 05 21:28:19 crc kubenswrapper[4754]: I0105 21:28:19.176779 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee-config-data\") pod \"tempest-tests-tempest\" (UID: \"bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee\") " pod="openstack/tempest-tests-tempest" Jan 05 21:28:19 crc kubenswrapper[4754]: I0105 21:28:19.176890 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee\") " pod="openstack/tempest-tests-tempest" Jan 05 
21:28:19 crc kubenswrapper[4754]: I0105 21:28:19.176991 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee\") " pod="openstack/tempest-tests-tempest" Jan 05 21:28:19 crc kubenswrapper[4754]: I0105 21:28:19.177075 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee\") " pod="openstack/tempest-tests-tempest" Jan 05 21:28:19 crc kubenswrapper[4754]: I0105 21:28:19.177147 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee\") " pod="openstack/tempest-tests-tempest" Jan 05 21:28:19 crc kubenswrapper[4754]: I0105 21:28:19.177380 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lmq8\" (UniqueName: \"kubernetes.io/projected/bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee-kube-api-access-8lmq8\") pod \"tempest-tests-tempest\" (UID: \"bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee\") " pod="openstack/tempest-tests-tempest" Jan 05 21:28:19 crc kubenswrapper[4754]: I0105 21:28:19.177483 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee\") " pod="openstack/tempest-tests-tempest" Jan 05 21:28:19 crc kubenswrapper[4754]: I0105 21:28:19.177553 4754 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee\") " pod="openstack/tempest-tests-tempest" Jan 05 21:28:19 crc kubenswrapper[4754]: I0105 21:28:19.177610 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee\") " pod="openstack/tempest-tests-tempest" Jan 05 21:28:19 crc kubenswrapper[4754]: I0105 21:28:19.178549 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee-config-data\") pod \"tempest-tests-tempest\" (UID: \"bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee\") " pod="openstack/tempest-tests-tempest" Jan 05 21:28:19 crc kubenswrapper[4754]: I0105 21:28:19.178600 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee\") " pod="openstack/tempest-tests-tempest" Jan 05 21:28:19 crc kubenswrapper[4754]: I0105 21:28:19.179267 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee\") " pod="openstack/tempest-tests-tempest" Jan 05 21:28:19 crc kubenswrapper[4754]: I0105 21:28:19.180375 4754 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee\") " pod="openstack/tempest-tests-tempest" Jan 05 21:28:19 crc kubenswrapper[4754]: I0105 21:28:19.180729 4754 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/tempest-tests-tempest" Jan 05 21:28:19 crc kubenswrapper[4754]: I0105 21:28:19.185809 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee\") " pod="openstack/tempest-tests-tempest" Jan 05 21:28:19 crc kubenswrapper[4754]: I0105 21:28:19.185916 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee\") " pod="openstack/tempest-tests-tempest" Jan 05 21:28:19 crc kubenswrapper[4754]: I0105 21:28:19.200857 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee\") " pod="openstack/tempest-tests-tempest" Jan 05 21:28:19 crc kubenswrapper[4754]: I0105 21:28:19.201532 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lmq8\" (UniqueName: \"kubernetes.io/projected/bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee-kube-api-access-8lmq8\") pod 
\"tempest-tests-tempest\" (UID: \"bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee\") " pod="openstack/tempest-tests-tempest" Jan 05 21:28:19 crc kubenswrapper[4754]: I0105 21:28:19.228639 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee\") " pod="openstack/tempest-tests-tempest" Jan 05 21:28:19 crc kubenswrapper[4754]: I0105 21:28:19.367407 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 05 21:28:19 crc kubenswrapper[4754]: I0105 21:28:19.866666 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 05 21:28:20 crc kubenswrapper[4754]: I0105 21:28:20.350621 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee","Type":"ContainerStarted","Data":"806b04bd235d28110ecfb905a95d6d82b7fa9edc859ac5ce22d0c353bbf03187"} Jan 05 21:28:31 crc kubenswrapper[4754]: I0105 21:28:31.589530 4754 scope.go:117] "RemoveContainer" containerID="83af48b757fa31e7d8bbec5ac045d2ac5b9a51634d67308251b16fb56a7e25b4" Jan 05 21:28:31 crc kubenswrapper[4754]: E0105 21:28:31.590261 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:28:43 crc kubenswrapper[4754]: I0105 21:28:43.095897 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8z5k8"] Jan 05 21:28:43 crc kubenswrapper[4754]: I0105 
21:28:43.099474 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8z5k8" Jan 05 21:28:43 crc kubenswrapper[4754]: I0105 21:28:43.109750 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8z5k8"] Jan 05 21:28:43 crc kubenswrapper[4754]: I0105 21:28:43.299807 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvmg4\" (UniqueName: \"kubernetes.io/projected/79386ec2-88e3-441a-a70b-8061463a9269-kube-api-access-lvmg4\") pod \"redhat-marketplace-8z5k8\" (UID: \"79386ec2-88e3-441a-a70b-8061463a9269\") " pod="openshift-marketplace/redhat-marketplace-8z5k8" Jan 05 21:28:43 crc kubenswrapper[4754]: I0105 21:28:43.300480 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79386ec2-88e3-441a-a70b-8061463a9269-utilities\") pod \"redhat-marketplace-8z5k8\" (UID: \"79386ec2-88e3-441a-a70b-8061463a9269\") " pod="openshift-marketplace/redhat-marketplace-8z5k8" Jan 05 21:28:43 crc kubenswrapper[4754]: I0105 21:28:43.300759 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79386ec2-88e3-441a-a70b-8061463a9269-catalog-content\") pod \"redhat-marketplace-8z5k8\" (UID: \"79386ec2-88e3-441a-a70b-8061463a9269\") " pod="openshift-marketplace/redhat-marketplace-8z5k8" Jan 05 21:28:43 crc kubenswrapper[4754]: I0105 21:28:43.403025 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvmg4\" (UniqueName: \"kubernetes.io/projected/79386ec2-88e3-441a-a70b-8061463a9269-kube-api-access-lvmg4\") pod \"redhat-marketplace-8z5k8\" (UID: \"79386ec2-88e3-441a-a70b-8061463a9269\") " pod="openshift-marketplace/redhat-marketplace-8z5k8" Jan 05 21:28:43 crc 
kubenswrapper[4754]: I0105 21:28:43.403413 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79386ec2-88e3-441a-a70b-8061463a9269-utilities\") pod \"redhat-marketplace-8z5k8\" (UID: \"79386ec2-88e3-441a-a70b-8061463a9269\") " pod="openshift-marketplace/redhat-marketplace-8z5k8" Jan 05 21:28:43 crc kubenswrapper[4754]: I0105 21:28:43.403627 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79386ec2-88e3-441a-a70b-8061463a9269-catalog-content\") pod \"redhat-marketplace-8z5k8\" (UID: \"79386ec2-88e3-441a-a70b-8061463a9269\") " pod="openshift-marketplace/redhat-marketplace-8z5k8" Jan 05 21:28:43 crc kubenswrapper[4754]: I0105 21:28:43.403822 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79386ec2-88e3-441a-a70b-8061463a9269-utilities\") pod \"redhat-marketplace-8z5k8\" (UID: \"79386ec2-88e3-441a-a70b-8061463a9269\") " pod="openshift-marketplace/redhat-marketplace-8z5k8" Jan 05 21:28:43 crc kubenswrapper[4754]: I0105 21:28:43.404143 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79386ec2-88e3-441a-a70b-8061463a9269-catalog-content\") pod \"redhat-marketplace-8z5k8\" (UID: \"79386ec2-88e3-441a-a70b-8061463a9269\") " pod="openshift-marketplace/redhat-marketplace-8z5k8" Jan 05 21:28:43 crc kubenswrapper[4754]: I0105 21:28:43.435245 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvmg4\" (UniqueName: \"kubernetes.io/projected/79386ec2-88e3-441a-a70b-8061463a9269-kube-api-access-lvmg4\") pod \"redhat-marketplace-8z5k8\" (UID: \"79386ec2-88e3-441a-a70b-8061463a9269\") " pod="openshift-marketplace/redhat-marketplace-8z5k8" Jan 05 21:28:43 crc kubenswrapper[4754]: I0105 21:28:43.734175 4754 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8z5k8" Jan 05 21:28:45 crc kubenswrapper[4754]: I0105 21:28:45.589857 4754 scope.go:117] "RemoveContainer" containerID="83af48b757fa31e7d8bbec5ac045d2ac5b9a51634d67308251b16fb56a7e25b4" Jan 05 21:28:45 crc kubenswrapper[4754]: E0105 21:28:45.590499 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:28:46 crc kubenswrapper[4754]: I0105 21:28:46.922373 4754 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.002410495s: [/var/lib/containers/storage/overlay/6948afdce33b562e67007bcc6a4f0d14dcb29512fedf7fc51391c98ab00497c7/diff /var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_71bb4a3aecc4ba5b26c4b7318770ce13/kube-apiserver/0.log]; will not log again for this container unless duration exceeds 2s Jan 05 21:28:46 crc kubenswrapper[4754]: I0105 21:28:46.931951 4754 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.012473351s: [/var/lib/containers/storage/overlay/de2aa040316b243926c7e8313b880bf8f658dbb7407653bf0a6cf7d625a6f0ac/diff /var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log]; will not log again for this container unless duration exceeds 2s Jan 05 21:28:53 crc kubenswrapper[4754]: I0105 21:28:53.211149 4754 trace.go:236] Trace[2143933416]: "Calculate volume metrics of catalog-content for pod openshift-marketplace/community-operators-n9kmf" (05-Jan-2026 21:28:52.109) (total time: 1101ms): Jan 05 21:28:53 crc kubenswrapper[4754]: 
Trace[2143933416]: [1.101028438s] [1.101028438s] END Jan 05 21:28:55 crc kubenswrapper[4754]: E0105 21:28:55.862329 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 05 21:28:55 crc kubenswrapper[4754]: E0105 21:28:55.863558 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bun
dle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8lmq8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 05 21:28:55 crc kubenswrapper[4754]: E0105 21:28:55.864797 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" 
podUID="bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee" Jan 05 21:28:56 crc kubenswrapper[4754]: I0105 21:28:56.385017 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8z5k8"] Jan 05 21:28:56 crc kubenswrapper[4754]: I0105 21:28:56.772860 4754 generic.go:334] "Generic (PLEG): container finished" podID="79386ec2-88e3-441a-a70b-8061463a9269" containerID="929ceed498aabc49ad2c936928c3aab8f9e3c072753044451af977a78a5ccd17" exitCode=0 Jan 05 21:28:56 crc kubenswrapper[4754]: I0105 21:28:56.773115 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8z5k8" event={"ID":"79386ec2-88e3-441a-a70b-8061463a9269","Type":"ContainerDied","Data":"929ceed498aabc49ad2c936928c3aab8f9e3c072753044451af977a78a5ccd17"} Jan 05 21:28:56 crc kubenswrapper[4754]: I0105 21:28:56.773215 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8z5k8" event={"ID":"79386ec2-88e3-441a-a70b-8061463a9269","Type":"ContainerStarted","Data":"882979c4cb62bdb5c838a379543762c5d7be74cd4e5ec78799c449ffef982ce7"} Jan 05 21:28:56 crc kubenswrapper[4754]: E0105 21:28:56.775672 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee" Jan 05 21:28:57 crc kubenswrapper[4754]: I0105 21:28:57.588904 4754 scope.go:117] "RemoveContainer" containerID="83af48b757fa31e7d8bbec5ac045d2ac5b9a51634d67308251b16fb56a7e25b4" Jan 05 21:28:57 crc kubenswrapper[4754]: E0105 21:28:57.589674 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:28:57 crc kubenswrapper[4754]: I0105 21:28:57.784825 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8z5k8" event={"ID":"79386ec2-88e3-441a-a70b-8061463a9269","Type":"ContainerStarted","Data":"ae184e0276107ae2c07fa8cf105dae69a2a238b378c9d2134adb8983b580c8ea"} Jan 05 21:28:58 crc kubenswrapper[4754]: I0105 21:28:58.795771 4754 generic.go:334] "Generic (PLEG): container finished" podID="79386ec2-88e3-441a-a70b-8061463a9269" containerID="ae184e0276107ae2c07fa8cf105dae69a2a238b378c9d2134adb8983b580c8ea" exitCode=0 Jan 05 21:28:58 crc kubenswrapper[4754]: I0105 21:28:58.795861 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8z5k8" event={"ID":"79386ec2-88e3-441a-a70b-8061463a9269","Type":"ContainerDied","Data":"ae184e0276107ae2c07fa8cf105dae69a2a238b378c9d2134adb8983b580c8ea"} Jan 05 21:28:59 crc kubenswrapper[4754]: I0105 21:28:59.806660 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8z5k8" event={"ID":"79386ec2-88e3-441a-a70b-8061463a9269","Type":"ContainerStarted","Data":"2a6d8d0922af9be2c80af5ed69ac231c066f930a813597361c5a847087e3ca65"} Jan 05 21:28:59 crc kubenswrapper[4754]: I0105 21:28:59.833788 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8z5k8" podStartSLOduration=14.180873236 podStartE2EDuration="16.833764657s" podCreationTimestamp="2026-01-05 21:28:43 +0000 UTC" firstStartedPulling="2026-01-05 21:28:56.774138308 +0000 UTC m=+5023.483322182" lastFinishedPulling="2026-01-05 21:28:59.427029729 +0000 UTC m=+5026.136213603" observedRunningTime="2026-01-05 21:28:59.822759297 +0000 UTC m=+5026.531943171" 
watchObservedRunningTime="2026-01-05 21:28:59.833764657 +0000 UTC m=+5026.542948531" Jan 05 21:29:03 crc kubenswrapper[4754]: I0105 21:29:03.735212 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8z5k8" Jan 05 21:29:03 crc kubenswrapper[4754]: I0105 21:29:03.736384 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8z5k8" Jan 05 21:29:04 crc kubenswrapper[4754]: I0105 21:29:04.095593 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8z5k8" Jan 05 21:29:04 crc kubenswrapper[4754]: I0105 21:29:04.157034 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8z5k8" Jan 05 21:29:04 crc kubenswrapper[4754]: I0105 21:29:04.333698 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8z5k8"] Jan 05 21:29:05 crc kubenswrapper[4754]: I0105 21:29:05.867993 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8z5k8" podUID="79386ec2-88e3-441a-a70b-8061463a9269" containerName="registry-server" containerID="cri-o://2a6d8d0922af9be2c80af5ed69ac231c066f930a813597361c5a847087e3ca65" gracePeriod=2 Jan 05 21:29:06 crc kubenswrapper[4754]: I0105 21:29:06.880468 4754 generic.go:334] "Generic (PLEG): container finished" podID="79386ec2-88e3-441a-a70b-8061463a9269" containerID="2a6d8d0922af9be2c80af5ed69ac231c066f930a813597361c5a847087e3ca65" exitCode=0 Jan 05 21:29:06 crc kubenswrapper[4754]: I0105 21:29:06.880538 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8z5k8" event={"ID":"79386ec2-88e3-441a-a70b-8061463a9269","Type":"ContainerDied","Data":"2a6d8d0922af9be2c80af5ed69ac231c066f930a813597361c5a847087e3ca65"} Jan 05 21:29:07 crc 
kubenswrapper[4754]: I0105 21:29:07.389992 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8z5k8" Jan 05 21:29:07 crc kubenswrapper[4754]: I0105 21:29:07.522222 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79386ec2-88e3-441a-a70b-8061463a9269-utilities\") pod \"79386ec2-88e3-441a-a70b-8061463a9269\" (UID: \"79386ec2-88e3-441a-a70b-8061463a9269\") " Jan 05 21:29:07 crc kubenswrapper[4754]: I0105 21:29:07.522392 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79386ec2-88e3-441a-a70b-8061463a9269-catalog-content\") pod \"79386ec2-88e3-441a-a70b-8061463a9269\" (UID: \"79386ec2-88e3-441a-a70b-8061463a9269\") " Jan 05 21:29:07 crc kubenswrapper[4754]: I0105 21:29:07.522422 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvmg4\" (UniqueName: \"kubernetes.io/projected/79386ec2-88e3-441a-a70b-8061463a9269-kube-api-access-lvmg4\") pod \"79386ec2-88e3-441a-a70b-8061463a9269\" (UID: \"79386ec2-88e3-441a-a70b-8061463a9269\") " Jan 05 21:29:07 crc kubenswrapper[4754]: I0105 21:29:07.523112 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79386ec2-88e3-441a-a70b-8061463a9269-utilities" (OuterVolumeSpecName: "utilities") pod "79386ec2-88e3-441a-a70b-8061463a9269" (UID: "79386ec2-88e3-441a-a70b-8061463a9269"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:29:07 crc kubenswrapper[4754]: I0105 21:29:07.528312 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79386ec2-88e3-441a-a70b-8061463a9269-kube-api-access-lvmg4" (OuterVolumeSpecName: "kube-api-access-lvmg4") pod "79386ec2-88e3-441a-a70b-8061463a9269" (UID: "79386ec2-88e3-441a-a70b-8061463a9269"). InnerVolumeSpecName "kube-api-access-lvmg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:29:07 crc kubenswrapper[4754]: I0105 21:29:07.571062 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79386ec2-88e3-441a-a70b-8061463a9269-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "79386ec2-88e3-441a-a70b-8061463a9269" (UID: "79386ec2-88e3-441a-a70b-8061463a9269"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:29:07 crc kubenswrapper[4754]: I0105 21:29:07.626944 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79386ec2-88e3-441a-a70b-8061463a9269-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 21:29:07 crc kubenswrapper[4754]: I0105 21:29:07.626999 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79386ec2-88e3-441a-a70b-8061463a9269-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 21:29:07 crc kubenswrapper[4754]: I0105 21:29:07.627025 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvmg4\" (UniqueName: \"kubernetes.io/projected/79386ec2-88e3-441a-a70b-8061463a9269-kube-api-access-lvmg4\") on node \"crc\" DevicePath \"\"" Jan 05 21:29:07 crc kubenswrapper[4754]: I0105 21:29:07.898151 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8z5k8" 
event={"ID":"79386ec2-88e3-441a-a70b-8061463a9269","Type":"ContainerDied","Data":"882979c4cb62bdb5c838a379543762c5d7be74cd4e5ec78799c449ffef982ce7"} Jan 05 21:29:07 crc kubenswrapper[4754]: I0105 21:29:07.898280 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8z5k8" Jan 05 21:29:07 crc kubenswrapper[4754]: I0105 21:29:07.898477 4754 scope.go:117] "RemoveContainer" containerID="2a6d8d0922af9be2c80af5ed69ac231c066f930a813597361c5a847087e3ca65" Jan 05 21:29:07 crc kubenswrapper[4754]: I0105 21:29:07.929201 4754 scope.go:117] "RemoveContainer" containerID="ae184e0276107ae2c07fa8cf105dae69a2a238b378c9d2134adb8983b580c8ea" Jan 05 21:29:07 crc kubenswrapper[4754]: I0105 21:29:07.938804 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8z5k8"] Jan 05 21:29:07 crc kubenswrapper[4754]: I0105 21:29:07.955471 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8z5k8"] Jan 05 21:29:07 crc kubenswrapper[4754]: I0105 21:29:07.975873 4754 scope.go:117] "RemoveContainer" containerID="929ceed498aabc49ad2c936928c3aab8f9e3c072753044451af977a78a5ccd17" Jan 05 21:29:09 crc kubenswrapper[4754]: I0105 21:29:09.590740 4754 scope.go:117] "RemoveContainer" containerID="83af48b757fa31e7d8bbec5ac045d2ac5b9a51634d67308251b16fb56a7e25b4" Jan 05 21:29:09 crc kubenswrapper[4754]: E0105 21:29:09.593229 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:29:09 crc kubenswrapper[4754]: I0105 21:29:09.654843 4754 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="79386ec2-88e3-441a-a70b-8061463a9269" path="/var/lib/kubelet/pods/79386ec2-88e3-441a-a70b-8061463a9269/volumes" Jan 05 21:29:13 crc kubenswrapper[4754]: I0105 21:29:13.970197 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee","Type":"ContainerStarted","Data":"7e2e5bc60b493750348ec1645e185ddcdae265a2a4ec9bef78123d591aba0048"} Jan 05 21:29:13 crc kubenswrapper[4754]: I0105 21:29:13.990999 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.855751956 podStartE2EDuration="56.990980897s" podCreationTimestamp="2026-01-05 21:28:17 +0000 UTC" firstStartedPulling="2026-01-05 21:28:19.87988209 +0000 UTC m=+4986.589065964" lastFinishedPulling="2026-01-05 21:29:12.015111021 +0000 UTC m=+5038.724294905" observedRunningTime="2026-01-05 21:29:13.985245536 +0000 UTC m=+5040.694429420" watchObservedRunningTime="2026-01-05 21:29:13.990980897 +0000 UTC m=+5040.700164771" Jan 05 21:29:24 crc kubenswrapper[4754]: I0105 21:29:24.589944 4754 scope.go:117] "RemoveContainer" containerID="83af48b757fa31e7d8bbec5ac045d2ac5b9a51634d67308251b16fb56a7e25b4" Jan 05 21:29:24 crc kubenswrapper[4754]: E0105 21:29:24.591077 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:29:35 crc kubenswrapper[4754]: I0105 21:29:35.591520 4754 scope.go:117] "RemoveContainer" containerID="83af48b757fa31e7d8bbec5ac045d2ac5b9a51634d67308251b16fb56a7e25b4" Jan 05 21:29:35 crc kubenswrapper[4754]: E0105 21:29:35.592908 
4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:29:50 crc kubenswrapper[4754]: I0105 21:29:50.588419 4754 scope.go:117] "RemoveContainer" containerID="83af48b757fa31e7d8bbec5ac045d2ac5b9a51634d67308251b16fb56a7e25b4" Jan 05 21:29:50 crc kubenswrapper[4754]: E0105 21:29:50.589149 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:30:00 crc kubenswrapper[4754]: I0105 21:30:00.292174 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460810-7gbx2"] Jan 05 21:30:00 crc kubenswrapper[4754]: E0105 21:30:00.293245 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79386ec2-88e3-441a-a70b-8061463a9269" containerName="registry-server" Jan 05 21:30:00 crc kubenswrapper[4754]: I0105 21:30:00.293535 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="79386ec2-88e3-441a-a70b-8061463a9269" containerName="registry-server" Jan 05 21:30:00 crc kubenswrapper[4754]: E0105 21:30:00.293686 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79386ec2-88e3-441a-a70b-8061463a9269" containerName="extract-utilities" Jan 05 21:30:00 crc kubenswrapper[4754]: I0105 21:30:00.293695 4754 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="79386ec2-88e3-441a-a70b-8061463a9269" containerName="extract-utilities" Jan 05 21:30:00 crc kubenswrapper[4754]: E0105 21:30:00.293707 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79386ec2-88e3-441a-a70b-8061463a9269" containerName="extract-content" Jan 05 21:30:00 crc kubenswrapper[4754]: I0105 21:30:00.293712 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="79386ec2-88e3-441a-a70b-8061463a9269" containerName="extract-content" Jan 05 21:30:00 crc kubenswrapper[4754]: I0105 21:30:00.293935 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="79386ec2-88e3-441a-a70b-8061463a9269" containerName="registry-server" Jan 05 21:30:00 crc kubenswrapper[4754]: I0105 21:30:00.295524 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460810-7gbx2" Jan 05 21:30:00 crc kubenswrapper[4754]: I0105 21:30:00.367204 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 05 21:30:00 crc kubenswrapper[4754]: I0105 21:30:00.367352 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 05 21:30:00 crc kubenswrapper[4754]: I0105 21:30:00.431667 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/880fd218-c64c-4169-918c-90e064c90c4f-secret-volume\") pod \"collect-profiles-29460810-7gbx2\" (UID: \"880fd218-c64c-4169-918c-90e064c90c4f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460810-7gbx2" Jan 05 21:30:00 crc kubenswrapper[4754]: I0105 21:30:00.431742 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fp79\" (UniqueName: 
\"kubernetes.io/projected/880fd218-c64c-4169-918c-90e064c90c4f-kube-api-access-8fp79\") pod \"collect-profiles-29460810-7gbx2\" (UID: \"880fd218-c64c-4169-918c-90e064c90c4f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460810-7gbx2" Jan 05 21:30:00 crc kubenswrapper[4754]: I0105 21:30:00.431826 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/880fd218-c64c-4169-918c-90e064c90c4f-config-volume\") pod \"collect-profiles-29460810-7gbx2\" (UID: \"880fd218-c64c-4169-918c-90e064c90c4f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460810-7gbx2" Jan 05 21:30:00 crc kubenswrapper[4754]: I0105 21:30:00.451725 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460810-7gbx2"] Jan 05 21:30:00 crc kubenswrapper[4754]: I0105 21:30:00.533812 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/880fd218-c64c-4169-918c-90e064c90c4f-secret-volume\") pod \"collect-profiles-29460810-7gbx2\" (UID: \"880fd218-c64c-4169-918c-90e064c90c4f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460810-7gbx2" Jan 05 21:30:00 crc kubenswrapper[4754]: I0105 21:30:00.533891 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fp79\" (UniqueName: \"kubernetes.io/projected/880fd218-c64c-4169-918c-90e064c90c4f-kube-api-access-8fp79\") pod \"collect-profiles-29460810-7gbx2\" (UID: \"880fd218-c64c-4169-918c-90e064c90c4f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460810-7gbx2" Jan 05 21:30:00 crc kubenswrapper[4754]: I0105 21:30:00.533988 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/880fd218-c64c-4169-918c-90e064c90c4f-config-volume\") pod 
\"collect-profiles-29460810-7gbx2\" (UID: \"880fd218-c64c-4169-918c-90e064c90c4f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460810-7gbx2" Jan 05 21:30:00 crc kubenswrapper[4754]: I0105 21:30:00.537480 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/880fd218-c64c-4169-918c-90e064c90c4f-config-volume\") pod \"collect-profiles-29460810-7gbx2\" (UID: \"880fd218-c64c-4169-918c-90e064c90c4f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460810-7gbx2" Jan 05 21:30:00 crc kubenswrapper[4754]: I0105 21:30:00.547441 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/880fd218-c64c-4169-918c-90e064c90c4f-secret-volume\") pod \"collect-profiles-29460810-7gbx2\" (UID: \"880fd218-c64c-4169-918c-90e064c90c4f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460810-7gbx2" Jan 05 21:30:00 crc kubenswrapper[4754]: I0105 21:30:00.565500 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fp79\" (UniqueName: \"kubernetes.io/projected/880fd218-c64c-4169-918c-90e064c90c4f-kube-api-access-8fp79\") pod \"collect-profiles-29460810-7gbx2\" (UID: \"880fd218-c64c-4169-918c-90e064c90c4f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460810-7gbx2" Jan 05 21:30:00 crc kubenswrapper[4754]: I0105 21:30:00.620875 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460810-7gbx2" Jan 05 21:30:01 crc kubenswrapper[4754]: I0105 21:30:01.482604 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460810-7gbx2"] Jan 05 21:30:01 crc kubenswrapper[4754]: I0105 21:30:01.523476 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460810-7gbx2" event={"ID":"880fd218-c64c-4169-918c-90e064c90c4f","Type":"ContainerStarted","Data":"ab8e68b9c720ee0ab4bb5172e117bb45f32e920ba13ede9c8d6629a3ed1886c2"} Jan 05 21:30:01 crc kubenswrapper[4754]: I0105 21:30:01.590267 4754 scope.go:117] "RemoveContainer" containerID="83af48b757fa31e7d8bbec5ac045d2ac5b9a51634d67308251b16fb56a7e25b4" Jan 05 21:30:01 crc kubenswrapper[4754]: E0105 21:30:01.590538 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:30:02 crc kubenswrapper[4754]: I0105 21:30:02.535588 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460810-7gbx2" event={"ID":"880fd218-c64c-4169-918c-90e064c90c4f","Type":"ContainerDied","Data":"f8f5d24f355d8d94c1acb6805d99a75408e75e8e12f49ca176bcc51384539117"} Jan 05 21:30:02 crc kubenswrapper[4754]: I0105 21:30:02.536159 4754 generic.go:334] "Generic (PLEG): container finished" podID="880fd218-c64c-4169-918c-90e064c90c4f" containerID="f8f5d24f355d8d94c1acb6805d99a75408e75e8e12f49ca176bcc51384539117" exitCode=0 Jan 05 21:30:04 crc kubenswrapper[4754]: I0105 21:30:04.116858 4754 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460810-7gbx2" Jan 05 21:30:04 crc kubenswrapper[4754]: I0105 21:30:04.192104 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/880fd218-c64c-4169-918c-90e064c90c4f-config-volume\") pod \"880fd218-c64c-4169-918c-90e064c90c4f\" (UID: \"880fd218-c64c-4169-918c-90e064c90c4f\") " Jan 05 21:30:04 crc kubenswrapper[4754]: I0105 21:30:04.192259 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/880fd218-c64c-4169-918c-90e064c90c4f-secret-volume\") pod \"880fd218-c64c-4169-918c-90e064c90c4f\" (UID: \"880fd218-c64c-4169-918c-90e064c90c4f\") " Jan 05 21:30:04 crc kubenswrapper[4754]: I0105 21:30:04.192326 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fp79\" (UniqueName: \"kubernetes.io/projected/880fd218-c64c-4169-918c-90e064c90c4f-kube-api-access-8fp79\") pod \"880fd218-c64c-4169-918c-90e064c90c4f\" (UID: \"880fd218-c64c-4169-918c-90e064c90c4f\") " Jan 05 21:30:04 crc kubenswrapper[4754]: I0105 21:30:04.193394 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/880fd218-c64c-4169-918c-90e064c90c4f-config-volume" (OuterVolumeSpecName: "config-volume") pod "880fd218-c64c-4169-918c-90e064c90c4f" (UID: "880fd218-c64c-4169-918c-90e064c90c4f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:30:04 crc kubenswrapper[4754]: I0105 21:30:04.201769 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/880fd218-c64c-4169-918c-90e064c90c4f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "880fd218-c64c-4169-918c-90e064c90c4f" (UID: "880fd218-c64c-4169-918c-90e064c90c4f"). 
InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:30:04 crc kubenswrapper[4754]: I0105 21:30:04.206569 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/880fd218-c64c-4169-918c-90e064c90c4f-kube-api-access-8fp79" (OuterVolumeSpecName: "kube-api-access-8fp79") pod "880fd218-c64c-4169-918c-90e064c90c4f" (UID: "880fd218-c64c-4169-918c-90e064c90c4f"). InnerVolumeSpecName "kube-api-access-8fp79". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:30:04 crc kubenswrapper[4754]: I0105 21:30:04.295648 4754 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/880fd218-c64c-4169-918c-90e064c90c4f-config-volume\") on node \"crc\" DevicePath \"\"" Jan 05 21:30:04 crc kubenswrapper[4754]: I0105 21:30:04.295677 4754 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/880fd218-c64c-4169-918c-90e064c90c4f-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 05 21:30:04 crc kubenswrapper[4754]: I0105 21:30:04.295689 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fp79\" (UniqueName: \"kubernetes.io/projected/880fd218-c64c-4169-918c-90e064c90c4f-kube-api-access-8fp79\") on node \"crc\" DevicePath \"\"" Jan 05 21:30:04 crc kubenswrapper[4754]: I0105 21:30:04.564326 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460810-7gbx2" event={"ID":"880fd218-c64c-4169-918c-90e064c90c4f","Type":"ContainerDied","Data":"ab8e68b9c720ee0ab4bb5172e117bb45f32e920ba13ede9c8d6629a3ed1886c2"} Jan 05 21:30:04 crc kubenswrapper[4754]: I0105 21:30:04.564398 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460810-7gbx2" Jan 05 21:30:04 crc kubenswrapper[4754]: I0105 21:30:04.564610 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab8e68b9c720ee0ab4bb5172e117bb45f32e920ba13ede9c8d6629a3ed1886c2" Jan 05 21:30:05 crc kubenswrapper[4754]: I0105 21:30:05.221391 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460765-gxk2q"] Jan 05 21:30:05 crc kubenswrapper[4754]: I0105 21:30:05.236122 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460765-gxk2q"] Jan 05 21:30:05 crc kubenswrapper[4754]: I0105 21:30:05.626096 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="589bc1bf-4794-43a6-b5a5-a005272ca784" path="/var/lib/kubelet/pods/589bc1bf-4794-43a6-b5a5-a005272ca784/volumes" Jan 05 21:30:12 crc kubenswrapper[4754]: I0105 21:30:12.353762 4754 scope.go:117] "RemoveContainer" containerID="f0c63e7bd8c035c7396b50e76a47af2188657d323ef8302cbd519808b7303a5e" Jan 05 21:30:16 crc kubenswrapper[4754]: I0105 21:30:16.589833 4754 scope.go:117] "RemoveContainer" containerID="83af48b757fa31e7d8bbec5ac045d2ac5b9a51634d67308251b16fb56a7e25b4" Jan 05 21:30:16 crc kubenswrapper[4754]: E0105 21:30:16.590785 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:30:28 crc kubenswrapper[4754]: I0105 21:30:28.498441 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-97vwr"] Jan 05 21:30:28 crc 
kubenswrapper[4754]: E0105 21:30:28.504490 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="880fd218-c64c-4169-918c-90e064c90c4f" containerName="collect-profiles" Jan 05 21:30:28 crc kubenswrapper[4754]: I0105 21:30:28.504523 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="880fd218-c64c-4169-918c-90e064c90c4f" containerName="collect-profiles" Jan 05 21:30:28 crc kubenswrapper[4754]: I0105 21:30:28.511695 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="880fd218-c64c-4169-918c-90e064c90c4f" containerName="collect-profiles" Jan 05 21:30:28 crc kubenswrapper[4754]: I0105 21:30:28.514914 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-97vwr" Jan 05 21:30:28 crc kubenswrapper[4754]: I0105 21:30:28.618514 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c-catalog-content\") pod \"redhat-operators-97vwr\" (UID: \"a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c\") " pod="openshift-marketplace/redhat-operators-97vwr" Jan 05 21:30:28 crc kubenswrapper[4754]: I0105 21:30:28.618659 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c-utilities\") pod \"redhat-operators-97vwr\" (UID: \"a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c\") " pod="openshift-marketplace/redhat-operators-97vwr" Jan 05 21:30:28 crc kubenswrapper[4754]: I0105 21:30:28.618699 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdk7w\" (UniqueName: \"kubernetes.io/projected/a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c-kube-api-access-kdk7w\") pod \"redhat-operators-97vwr\" (UID: \"a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c\") " 
pod="openshift-marketplace/redhat-operators-97vwr" Jan 05 21:30:28 crc kubenswrapper[4754]: I0105 21:30:28.653239 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-97vwr"] Jan 05 21:30:28 crc kubenswrapper[4754]: I0105 21:30:28.720658 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c-catalog-content\") pod \"redhat-operators-97vwr\" (UID: \"a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c\") " pod="openshift-marketplace/redhat-operators-97vwr" Jan 05 21:30:28 crc kubenswrapper[4754]: I0105 21:30:28.720810 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c-utilities\") pod \"redhat-operators-97vwr\" (UID: \"a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c\") " pod="openshift-marketplace/redhat-operators-97vwr" Jan 05 21:30:28 crc kubenswrapper[4754]: I0105 21:30:28.720861 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdk7w\" (UniqueName: \"kubernetes.io/projected/a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c-kube-api-access-kdk7w\") pod \"redhat-operators-97vwr\" (UID: \"a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c\") " pod="openshift-marketplace/redhat-operators-97vwr" Jan 05 21:30:28 crc kubenswrapper[4754]: I0105 21:30:28.722595 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c-catalog-content\") pod \"redhat-operators-97vwr\" (UID: \"a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c\") " pod="openshift-marketplace/redhat-operators-97vwr" Jan 05 21:30:28 crc kubenswrapper[4754]: I0105 21:30:28.722996 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c-utilities\") pod \"redhat-operators-97vwr\" (UID: \"a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c\") " pod="openshift-marketplace/redhat-operators-97vwr" Jan 05 21:30:28 crc kubenswrapper[4754]: I0105 21:30:28.754522 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdk7w\" (UniqueName: \"kubernetes.io/projected/a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c-kube-api-access-kdk7w\") pod \"redhat-operators-97vwr\" (UID: \"a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c\") " pod="openshift-marketplace/redhat-operators-97vwr" Jan 05 21:30:28 crc kubenswrapper[4754]: I0105 21:30:28.844108 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-97vwr" Jan 05 21:30:29 crc kubenswrapper[4754]: I0105 21:30:29.743661 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-97vwr"] Jan 05 21:30:29 crc kubenswrapper[4754]: I0105 21:30:29.845014 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-97vwr" event={"ID":"a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c","Type":"ContainerStarted","Data":"81cc2f8df6a7364b456e4d0cc2f3f6279ae190ff56ce1edab5bfc2a053c82b4b"} Jan 05 21:30:30 crc kubenswrapper[4754]: I0105 21:30:30.857251 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-97vwr" event={"ID":"a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c","Type":"ContainerDied","Data":"448599f161ae45687b65fa3c7b1f546ff90b45b86b979afb12a32a2047bc4619"} Jan 05 21:30:30 crc kubenswrapper[4754]: I0105 21:30:30.858777 4754 generic.go:334] "Generic (PLEG): container finished" podID="a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c" containerID="448599f161ae45687b65fa3c7b1f546ff90b45b86b979afb12a32a2047bc4619" exitCode=0 Jan 05 21:30:30 crc kubenswrapper[4754]: I0105 21:30:30.862495 4754 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Jan 05 21:30:31 crc kubenswrapper[4754]: I0105 21:30:31.588342 4754 scope.go:117] "RemoveContainer" containerID="83af48b757fa31e7d8bbec5ac045d2ac5b9a51634d67308251b16fb56a7e25b4" Jan 05 21:30:31 crc kubenswrapper[4754]: E0105 21:30:31.588589 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:30:32 crc kubenswrapper[4754]: I0105 21:30:32.883186 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-97vwr" event={"ID":"a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c","Type":"ContainerStarted","Data":"e25192091c4354817118b99dfa3d7b6c7cc88b2d112a30d441572ea278662f20"} Jan 05 21:30:36 crc kubenswrapper[4754]: I0105 21:30:36.923734 4754 generic.go:334] "Generic (PLEG): container finished" podID="a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c" containerID="e25192091c4354817118b99dfa3d7b6c7cc88b2d112a30d441572ea278662f20" exitCode=0 Jan 05 21:30:36 crc kubenswrapper[4754]: I0105 21:30:36.924130 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-97vwr" event={"ID":"a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c","Type":"ContainerDied","Data":"e25192091c4354817118b99dfa3d7b6c7cc88b2d112a30d441572ea278662f20"} Jan 05 21:30:37 crc kubenswrapper[4754]: I0105 21:30:37.948135 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-97vwr" event={"ID":"a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c","Type":"ContainerStarted","Data":"b9638ab4b0e78f836c6e31d0a72372b541881965028cc0dfa14cce28a95660d9"} Jan 05 21:30:37 crc kubenswrapper[4754]: I0105 21:30:37.980072 
4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-97vwr" podStartSLOduration=3.462966203 podStartE2EDuration="9.979596041s" podCreationTimestamp="2026-01-05 21:30:28 +0000 UTC" firstStartedPulling="2026-01-05 21:30:30.861054402 +0000 UTC m=+5117.570238276" lastFinishedPulling="2026-01-05 21:30:37.37768424 +0000 UTC m=+5124.086868114" observedRunningTime="2026-01-05 21:30:37.964593455 +0000 UTC m=+5124.673777329" watchObservedRunningTime="2026-01-05 21:30:37.979596041 +0000 UTC m=+5124.688779905" Jan 05 21:30:38 crc kubenswrapper[4754]: I0105 21:30:38.844922 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-97vwr" Jan 05 21:30:38 crc kubenswrapper[4754]: I0105 21:30:38.845250 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-97vwr" Jan 05 21:30:39 crc kubenswrapper[4754]: I0105 21:30:39.903378 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-97vwr" podUID="a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c" containerName="registry-server" probeResult="failure" output=< Jan 05 21:30:39 crc kubenswrapper[4754]: timeout: failed to connect service ":50051" within 1s Jan 05 21:30:39 crc kubenswrapper[4754]: > Jan 05 21:30:42 crc kubenswrapper[4754]: I0105 21:30:42.589223 4754 scope.go:117] "RemoveContainer" containerID="83af48b757fa31e7d8bbec5ac045d2ac5b9a51634d67308251b16fb56a7e25b4" Jan 05 21:30:42 crc kubenswrapper[4754]: E0105 21:30:42.590202 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" 
podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:30:45 crc kubenswrapper[4754]: I0105 21:30:45.218074 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nvq9f"] Jan 05 21:30:45 crc kubenswrapper[4754]: I0105 21:30:45.225934 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nvq9f" Jan 05 21:30:45 crc kubenswrapper[4754]: I0105 21:30:45.319496 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42f4b8f4-dfaf-400f-ac27-55d9ca363057-utilities\") pod \"certified-operators-nvq9f\" (UID: \"42f4b8f4-dfaf-400f-ac27-55d9ca363057\") " pod="openshift-marketplace/certified-operators-nvq9f" Jan 05 21:30:45 crc kubenswrapper[4754]: I0105 21:30:45.319643 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nzc7\" (UniqueName: \"kubernetes.io/projected/42f4b8f4-dfaf-400f-ac27-55d9ca363057-kube-api-access-5nzc7\") pod \"certified-operators-nvq9f\" (UID: \"42f4b8f4-dfaf-400f-ac27-55d9ca363057\") " pod="openshift-marketplace/certified-operators-nvq9f" Jan 05 21:30:45 crc kubenswrapper[4754]: I0105 21:30:45.319673 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42f4b8f4-dfaf-400f-ac27-55d9ca363057-catalog-content\") pod \"certified-operators-nvq9f\" (UID: \"42f4b8f4-dfaf-400f-ac27-55d9ca363057\") " pod="openshift-marketplace/certified-operators-nvq9f" Jan 05 21:30:45 crc kubenswrapper[4754]: I0105 21:30:45.421893 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42f4b8f4-dfaf-400f-ac27-55d9ca363057-utilities\") pod \"certified-operators-nvq9f\" (UID: \"42f4b8f4-dfaf-400f-ac27-55d9ca363057\") " 
pod="openshift-marketplace/certified-operators-nvq9f" Jan 05 21:30:45 crc kubenswrapper[4754]: I0105 21:30:45.422005 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nzc7\" (UniqueName: \"kubernetes.io/projected/42f4b8f4-dfaf-400f-ac27-55d9ca363057-kube-api-access-5nzc7\") pod \"certified-operators-nvq9f\" (UID: \"42f4b8f4-dfaf-400f-ac27-55d9ca363057\") " pod="openshift-marketplace/certified-operators-nvq9f" Jan 05 21:30:45 crc kubenswrapper[4754]: I0105 21:30:45.422029 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42f4b8f4-dfaf-400f-ac27-55d9ca363057-catalog-content\") pod \"certified-operators-nvq9f\" (UID: \"42f4b8f4-dfaf-400f-ac27-55d9ca363057\") " pod="openshift-marketplace/certified-operators-nvq9f" Jan 05 21:30:45 crc kubenswrapper[4754]: I0105 21:30:45.424055 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42f4b8f4-dfaf-400f-ac27-55d9ca363057-utilities\") pod \"certified-operators-nvq9f\" (UID: \"42f4b8f4-dfaf-400f-ac27-55d9ca363057\") " pod="openshift-marketplace/certified-operators-nvq9f" Jan 05 21:30:45 crc kubenswrapper[4754]: I0105 21:30:45.424478 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42f4b8f4-dfaf-400f-ac27-55d9ca363057-catalog-content\") pod \"certified-operators-nvq9f\" (UID: \"42f4b8f4-dfaf-400f-ac27-55d9ca363057\") " pod="openshift-marketplace/certified-operators-nvq9f" Jan 05 21:30:45 crc kubenswrapper[4754]: I0105 21:30:45.454041 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nzc7\" (UniqueName: \"kubernetes.io/projected/42f4b8f4-dfaf-400f-ac27-55d9ca363057-kube-api-access-5nzc7\") pod \"certified-operators-nvq9f\" (UID: \"42f4b8f4-dfaf-400f-ac27-55d9ca363057\") " 
pod="openshift-marketplace/certified-operators-nvq9f" Jan 05 21:30:45 crc kubenswrapper[4754]: I0105 21:30:45.471846 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nvq9f"] Jan 05 21:30:45 crc kubenswrapper[4754]: I0105 21:30:45.555107 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nvq9f" Jan 05 21:30:47 crc kubenswrapper[4754]: I0105 21:30:47.309067 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nvq9f"] Jan 05 21:30:48 crc kubenswrapper[4754]: I0105 21:30:48.072651 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nvq9f" event={"ID":"42f4b8f4-dfaf-400f-ac27-55d9ca363057","Type":"ContainerDied","Data":"0a703e1d37b48dc36b4d690112c379d4a259532a4500eadf327cd2d2e0c49f3a"} Jan 05 21:30:48 crc kubenswrapper[4754]: I0105 21:30:48.073199 4754 generic.go:334] "Generic (PLEG): container finished" podID="42f4b8f4-dfaf-400f-ac27-55d9ca363057" containerID="0a703e1d37b48dc36b4d690112c379d4a259532a4500eadf327cd2d2e0c49f3a" exitCode=0 Jan 05 21:30:48 crc kubenswrapper[4754]: I0105 21:30:48.073373 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nvq9f" event={"ID":"42f4b8f4-dfaf-400f-ac27-55d9ca363057","Type":"ContainerStarted","Data":"56e00cd0e613d7bf48c7f9e0cd4a7b9b288346f94bdec9cada4ace48cffff31e"} Jan 05 21:30:50 crc kubenswrapper[4754]: I0105 21:30:50.096265 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nvq9f" event={"ID":"42f4b8f4-dfaf-400f-ac27-55d9ca363057","Type":"ContainerStarted","Data":"4e3bdd514e39c2d6c38f3a7a9c8206aa6fe6155cf0018a0a03cd227771796eaa"} Jan 05 21:30:50 crc kubenswrapper[4754]: I0105 21:30:50.572277 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-97vwr" 
podUID="a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c" containerName="registry-server" probeResult="failure" output=< Jan 05 21:30:50 crc kubenswrapper[4754]: timeout: failed to connect service ":50051" within 1s Jan 05 21:30:50 crc kubenswrapper[4754]: > Jan 05 21:30:52 crc kubenswrapper[4754]: I0105 21:30:52.117582 4754 generic.go:334] "Generic (PLEG): container finished" podID="42f4b8f4-dfaf-400f-ac27-55d9ca363057" containerID="4e3bdd514e39c2d6c38f3a7a9c8206aa6fe6155cf0018a0a03cd227771796eaa" exitCode=0 Jan 05 21:30:52 crc kubenswrapper[4754]: I0105 21:30:52.117685 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nvq9f" event={"ID":"42f4b8f4-dfaf-400f-ac27-55d9ca363057","Type":"ContainerDied","Data":"4e3bdd514e39c2d6c38f3a7a9c8206aa6fe6155cf0018a0a03cd227771796eaa"} Jan 05 21:30:54 crc kubenswrapper[4754]: I0105 21:30:54.166476 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nvq9f" event={"ID":"42f4b8f4-dfaf-400f-ac27-55d9ca363057","Type":"ContainerStarted","Data":"9fc1fba5ba4125316ae3b86e5a2986d9f30cbd31123327fbea6b5b7ab946d37f"} Jan 05 21:30:54 crc kubenswrapper[4754]: I0105 21:30:54.245420 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nvq9f" podStartSLOduration=4.040635998 podStartE2EDuration="9.237237486s" podCreationTimestamp="2026-01-05 21:30:45 +0000 UTC" firstStartedPulling="2026-01-05 21:30:48.075444175 +0000 UTC m=+5134.784628059" lastFinishedPulling="2026-01-05 21:30:53.272045673 +0000 UTC m=+5139.981229547" observedRunningTime="2026-01-05 21:30:54.214359042 +0000 UTC m=+5140.923542916" watchObservedRunningTime="2026-01-05 21:30:54.237237486 +0000 UTC m=+5140.946421360" Jan 05 21:30:55 crc kubenswrapper[4754]: I0105 21:30:55.555862 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nvq9f" Jan 05 21:30:55 crc 
kubenswrapper[4754]: I0105 21:30:55.556206 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nvq9f" Jan 05 21:30:56 crc kubenswrapper[4754]: I0105 21:30:56.591750 4754 scope.go:117] "RemoveContainer" containerID="83af48b757fa31e7d8bbec5ac045d2ac5b9a51634d67308251b16fb56a7e25b4" Jan 05 21:30:56 crc kubenswrapper[4754]: E0105 21:30:56.592557 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:30:56 crc kubenswrapper[4754]: I0105 21:30:56.646197 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-nvq9f" podUID="42f4b8f4-dfaf-400f-ac27-55d9ca363057" containerName="registry-server" probeResult="failure" output=< Jan 05 21:30:56 crc kubenswrapper[4754]: timeout: failed to connect service ":50051" within 1s Jan 05 21:30:56 crc kubenswrapper[4754]: > Jan 05 21:30:59 crc kubenswrapper[4754]: I0105 21:30:59.751959 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="9af784f4-79c9-4422-bc62-a2c49c9bb7cc" containerName="galera" probeResult="failure" output="command timed out" Jan 05 21:30:59 crc kubenswrapper[4754]: I0105 21:30:59.752342 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="9af784f4-79c9-4422-bc62-a2c49c9bb7cc" containerName="galera" probeResult="failure" output="command timed out" Jan 05 21:30:59 crc kubenswrapper[4754]: I0105 21:30:59.993689 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-97vwr" 
podUID="a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c" containerName="registry-server" probeResult="failure" output=< Jan 05 21:30:59 crc kubenswrapper[4754]: timeout: failed to connect service ":50051" within 1s Jan 05 21:30:59 crc kubenswrapper[4754]: > Jan 05 21:31:07 crc kubenswrapper[4754]: I0105 21:31:07.140099 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-7b945" podUID="f1a3a024-3293-4e7b-b1cd-c93c914c190e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:07 crc kubenswrapper[4754]: I0105 21:31:07.258597 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-swt4w" podUID="0cd346d8-d14a-404e-b2fa-16fc917e6886" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:07 crc kubenswrapper[4754]: I0105 21:31:07.340474 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-h56wh" podUID="6d71c5c9-f75a-475f-880c-d234d43ad7d9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:07 crc kubenswrapper[4754]: I0105 21:31:07.439664 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-hmwzd" podUID="a6907240-bf3c-4f9a-b7c8-4fbdf3174c8a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:07 crc kubenswrapper[4754]: I0105 21:31:07.647537 4754 prober.go:107] "Probe failed" 
probeType="Liveness" pod="metallb-system/speaker-z6whp" podUID="8118d4b3-34f3-49b4-ab29-1a2b17adacfb" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:07 crc kubenswrapper[4754]: I0105 21:31:07.647542 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-688488f44f-62gsk" podUID="4b33baa5-64bb-4df7-ac22-925d718f9d60" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:07 crc kubenswrapper[4754]: I0105 21:31:07.647633 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-944qj" podUID="4d09717a-7822-46ae-8192-62aa7305304b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:07 crc kubenswrapper[4754]: I0105 21:31:07.647707 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-s4n44" podUID="77f4456d-e6a6-466a-a74c-5276e4951784" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:07 crc kubenswrapper[4754]: I0105 21:31:07.648159 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-z6whp" podUID="8118d4b3-34f3-49b4-ab29-1a2b17adacfb" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:07 crc kubenswrapper[4754]: I0105 21:31:07.753498 4754 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openstack/ovn-northd-0" podUID="bd134f60-e97c-487b-be9e-c356c7478c21" containerName="ovn-northd" probeResult="failure" output="command timed out" Jan 05 21:31:07 crc kubenswrapper[4754]: I0105 21:31:07.753502 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="bd134f60-e97c-487b-be9e-c356c7478c21" containerName="ovn-northd" probeResult="failure" output="command timed out" Jan 05 21:31:07 crc kubenswrapper[4754]: I0105 21:31:07.959053 4754 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-pjzck container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:07 crc kubenswrapper[4754]: I0105 21:31:07.959615 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pjzck" podUID="9f282a77-59b3-4b2c-8c62-a2526d2a77b5" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:07 crc kubenswrapper[4754]: I0105 21:31:07.959057 4754 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-pjzck container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:07 crc kubenswrapper[4754]: I0105 21:31:07.959689 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pjzck" podUID="9f282a77-59b3-4b2c-8c62-a2526d2a77b5" containerName="openshift-config-operator" probeResult="failure" output="Get 
\"https://10.217.0.9:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:07 crc kubenswrapper[4754]: I0105 21:31:07.978063 4754 patch_prober.go:28] interesting pod/logging-loki-gateway-586cd7f6-k6trg container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:07 crc kubenswrapper[4754]: I0105 21:31:07.978369 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-586cd7f6-k6trg" podUID="cef8ee76-7c6e-420e-8c38-a7ad816cd513" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:08 crc kubenswrapper[4754]: I0105 21:31:08.073566 4754 patch_prober.go:28] interesting pod/logging-loki-gateway-586cd7f6-c4rps container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:08 crc kubenswrapper[4754]: I0105 21:31:08.073631 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-586cd7f6-c4rps" podUID="85a07def-c26c-49aa-ae32-c7772e9ebecc" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:08 crc kubenswrapper[4754]: I0105 21:31:08.605923 4754 patch_prober.go:28] interesting pod/oauth-openshift-58444664d6-99b25 container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" 
start-of-body= Jan 05 21:31:08 crc kubenswrapper[4754]: I0105 21:31:08.605920 4754 patch_prober.go:28] interesting pod/oauth-openshift-58444664d6-99b25 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:08 crc kubenswrapper[4754]: I0105 21:31:08.606517 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-58444664d6-99b25" podUID="c82fe254-bc85-4771-b358-017afaff55e9" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:08 crc kubenswrapper[4754]: I0105 21:31:08.606606 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-58444664d6-99b25" podUID="c82fe254-bc85-4771-b358-017afaff55e9" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:08 crc kubenswrapper[4754]: I0105 21:31:08.612614 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-n9kmf" podUID="187150cd-d7a9-4dfd-8151-0e6a88e82ddc" containerName="registry-server" probeResult="failure" output=< Jan 05 21:31:08 crc kubenswrapper[4754]: timeout: failed to connect service ":50051" within 1s Jan 05 21:31:08 crc kubenswrapper[4754]: > Jan 05 21:31:08 crc kubenswrapper[4754]: I0105 21:31:08.613101 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-nvq9f" podUID="42f4b8f4-dfaf-400f-ac27-55d9ca363057" containerName="registry-server" probeResult="failure" output=< 
Jan 05 21:31:08 crc kubenswrapper[4754]: timeout: failed to connect service ":50051" within 1s Jan 05 21:31:08 crc kubenswrapper[4754]: > Jan 05 21:31:08 crc kubenswrapper[4754]: I0105 21:31:08.613191 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-j8r5p" podUID="5f585df1-958f-4733-a720-2d37460d2b12" containerName="registry-server" probeResult="failure" output=< Jan 05 21:31:08 crc kubenswrapper[4754]: timeout: failed to connect service ":50051" within 1s Jan 05 21:31:08 crc kubenswrapper[4754]: > Jan 05 21:31:08 crc kubenswrapper[4754]: I0105 21:31:08.613167 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-j8r5p" podUID="5f585df1-958f-4733-a720-2d37460d2b12" containerName="registry-server" probeResult="failure" output=< Jan 05 21:31:08 crc kubenswrapper[4754]: timeout: failed to connect service ":50051" within 1s Jan 05 21:31:08 crc kubenswrapper[4754]: > Jan 05 21:31:08 crc kubenswrapper[4754]: I0105 21:31:08.613279 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-n9kmf" podUID="187150cd-d7a9-4dfd-8151-0e6a88e82ddc" containerName="registry-server" probeResult="failure" output=< Jan 05 21:31:08 crc kubenswrapper[4754]: timeout: failed to connect service ":50051" within 1s Jan 05 21:31:08 crc kubenswrapper[4754]: > Jan 05 21:31:08 crc kubenswrapper[4754]: I0105 21:31:08.620178 4754 scope.go:117] "RemoveContainer" containerID="83af48b757fa31e7d8bbec5ac045d2ac5b9a51634d67308251b16fb56a7e25b4" Jan 05 21:31:08 crc kubenswrapper[4754]: E0105 21:31:08.623321 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:31:08 crc kubenswrapper[4754]: I0105 21:31:08.978001 4754 patch_prober.go:28] interesting pod/logging-loki-gateway-586cd7f6-k6trg container/opa namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.55:8083/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:08 crc kubenswrapper[4754]: I0105 21:31:08.978095 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-586cd7f6-k6trg" podUID="cef8ee76-7c6e-420e-8c38-a7ad816cd513" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:09 crc kubenswrapper[4754]: I0105 21:31:09.073616 4754 patch_prober.go:28] interesting pod/logging-loki-gateway-586cd7f6-c4rps container/opa namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.57:8083/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:09 crc kubenswrapper[4754]: I0105 21:31:09.073670 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-586cd7f6-c4rps" podUID="85a07def-c26c-49aa-ae32-c7772e9ebecc" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:09 crc kubenswrapper[4754]: I0105 21:31:09.698786 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-tjtwh" podUID="3af58cc4-e753-4eb0-91c9-8b93516d665e" containerName="registry-server" probeResult="failure" output=< Jan 05 21:31:09 crc kubenswrapper[4754]: timeout: health rpc did not complete within 1s Jan 05 21:31:09 crc 
kubenswrapper[4754]: > Jan 05 21:31:09 crc kubenswrapper[4754]: I0105 21:31:09.752272 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="9af784f4-79c9-4422-bc62-a2c49c9bb7cc" containerName="galera" probeResult="failure" output="command timed out" Jan 05 21:31:09 crc kubenswrapper[4754]: I0105 21:31:09.752409 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="9af784f4-79c9-4422-bc62-a2c49c9bb7cc" containerName="galera" probeResult="failure" output="command timed out" Jan 05 21:31:09 crc kubenswrapper[4754]: I0105 21:31:09.778429 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-kk2wq" podUID="dabb102c-20ff-4424-95d7-d26f22f594f5" containerName="registry-server" probeResult="failure" output=< Jan 05 21:31:09 crc kubenswrapper[4754]: timeout: health rpc did not complete within 1s Jan 05 21:31:09 crc kubenswrapper[4754]: > Jan 05 21:31:09 crc kubenswrapper[4754]: I0105 21:31:09.779073 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-kk2wq" podUID="dabb102c-20ff-4424-95d7-d26f22f594f5" containerName="registry-server" probeResult="failure" output=< Jan 05 21:31:09 crc kubenswrapper[4754]: timeout: health rpc did not complete within 1s Jan 05 21:31:09 crc kubenswrapper[4754]: > Jan 05 21:31:09 crc kubenswrapper[4754]: I0105 21:31:09.914986 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-97vwr" podUID="a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c" containerName="registry-server" probeResult="failure" output=< Jan 05 21:31:09 crc kubenswrapper[4754]: timeout: failed to connect service ":50051" within 1s Jan 05 21:31:09 crc kubenswrapper[4754]: > Jan 05 21:31:10 crc kubenswrapper[4754]: I0105 21:31:10.207079 4754 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-679tw 
container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": context deadline exceeded" start-of-body= Jan 05 21:31:10 crc kubenswrapper[4754]: I0105 21:31:10.207102 4754 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-679tw container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:10 crc kubenswrapper[4754]: I0105 21:31:10.207156 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-679tw" podUID="c0d4f5db-f43e-4812-8e5b-5f1efcdcb913" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": context deadline exceeded" Jan 05 21:31:10 crc kubenswrapper[4754]: I0105 21:31:10.207185 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-679tw" podUID="c0d4f5db-f43e-4812-8e5b-5f1efcdcb913" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:10 crc kubenswrapper[4754]: I0105 21:31:10.373558 4754 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-9bgtx container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.10:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:10 crc kubenswrapper[4754]: I0105 21:31:10.373623 4754 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operators/perses-operator-5bf474d74f-9bgtx" podUID="f834cfd3-fa41-4790-92c1-3b80d98241af" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.10:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:10 crc kubenswrapper[4754]: I0105 21:31:10.373564 4754 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-9bgtx container/perses-operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.10:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:10 crc kubenswrapper[4754]: I0105 21:31:10.373707 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/perses-operator-5bf474d74f-9bgtx" podUID="f834cfd3-fa41-4790-92c1-3b80d98241af" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.10:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:11 crc kubenswrapper[4754]: I0105 21:31:11.201248 4754 patch_prober.go:28] interesting pod/thanos-querier-869b668f44-6cplm container/kube-rbac-proxy-web namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.74:9091/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:11 crc kubenswrapper[4754]: I0105 21:31:11.201605 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/thanos-querier-869b668f44-6cplm" podUID="0597c753-3003-4d69-ad03-7e12215f7274" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.74:9091/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:11 crc kubenswrapper[4754]: I0105 21:31:11.201283 4754 patch_prober.go:28] interesting 
pod/thanos-querier-869b668f44-6cplm container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.74:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:11 crc kubenswrapper[4754]: I0105 21:31:11.201689 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-869b668f44-6cplm" podUID="0597c753-3003-4d69-ad03-7e12215f7274" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.74:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:11 crc kubenswrapper[4754]: I0105 21:31:11.755974 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="81aaed05-ed65-4414-bf4f-7e5e4cf9966a" containerName="galera" probeResult="failure" output="command timed out" Jan 05 21:31:11 crc kubenswrapper[4754]: I0105 21:31:11.756062 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-northd-0" podUID="bd134f60-e97c-487b-be9e-c356c7478c21" containerName="ovn-northd" probeResult="failure" output="command timed out" Jan 05 21:31:11 crc kubenswrapper[4754]: I0105 21:31:11.757763 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="68c442e4-0c24-4351-84b7-ccda8b09ea2c" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 05 21:31:11 crc kubenswrapper[4754]: I0105 21:31:11.768552 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="81aaed05-ed65-4414-bf4f-7e5e4cf9966a" containerName="galera" probeResult="failure" output="command timed out" Jan 05 21:31:11 crc kubenswrapper[4754]: I0105 21:31:11.770741 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-northd-0" 
podUID="bd134f60-e97c-487b-be9e-c356c7478c21" containerName="ovn-northd" probeResult="failure" output="command timed out" Jan 05 21:31:12 crc kubenswrapper[4754]: I0105 21:31:12.740905 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-2g75d" podUID="736d23ce-6bc0-439b-b1ff-86aad6363c2a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:12 crc kubenswrapper[4754]: I0105 21:31:12.741432 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-2g75d" podUID="736d23ce-6bc0-439b-b1ff-86aad6363c2a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:12 crc kubenswrapper[4754]: I0105 21:31:12.752835 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="99d5dfe0-666a-44dc-b93c-5c92ef395bcc" containerName="prometheus" probeResult="failure" output="command timed out" Jan 05 21:31:12 crc kubenswrapper[4754]: I0105 21:31:12.753273 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="99d5dfe0-666a-44dc-b93c-5c92ef395bcc" containerName="prometheus" probeResult="failure" output="command timed out" Jan 05 21:31:12 crc kubenswrapper[4754]: I0105 21:31:12.877761 4754 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-ggkzj container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.63:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:12 crc kubenswrapper[4754]: I0105 21:31:12.877832 4754 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-ggkzj" podUID="41d97351-8dc4-42de-bf00-4e8abbf24e0b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.63:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:12 crc kubenswrapper[4754]: I0105 21:31:12.878003 4754 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-ggkzj container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.63:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:12 crc kubenswrapper[4754]: I0105 21:31:12.878094 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-ggkzj" podUID="41d97351-8dc4-42de-bf00-4e8abbf24e0b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.63:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:12 crc kubenswrapper[4754]: I0105 21:31:12.878173 4754 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-hz59l container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:12 crc kubenswrapper[4754]: I0105 21:31:12.878200 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-hz59l" podUID="3eb6b845-fab0-4359-87bd-17a33f9e78ca" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 
21:31:12 crc kubenswrapper[4754]: I0105 21:31:12.979878 4754 patch_prober.go:28] interesting pod/logging-loki-gateway-586cd7f6-k6trg container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:12 crc kubenswrapper[4754]: I0105 21:31:12.979956 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-586cd7f6-k6trg" podUID="cef8ee76-7c6e-420e-8c38-a7ad816cd513" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:12 crc kubenswrapper[4754]: I0105 21:31:12.980382 4754 patch_prober.go:28] interesting pod/downloads-7954f5f757-q6l9h container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.20:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:12 crc kubenswrapper[4754]: I0105 21:31:12.980410 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-q6l9h" podUID="138601a3-19fd-44e5-b817-49b048fe3e88" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:12 crc kubenswrapper[4754]: I0105 21:31:12.980460 4754 patch_prober.go:28] interesting pod/downloads-7954f5f757-q6l9h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:12 crc kubenswrapper[4754]: I0105 21:31:12.980479 4754 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/downloads-7954f5f757-q6l9h" podUID="138601a3-19fd-44e5-b817-49b048fe3e88" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:12 crc kubenswrapper[4754]: I0105 21:31:12.982632 4754 patch_prober.go:28] interesting pod/logging-loki-gateway-586cd7f6-k6trg container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:12 crc kubenswrapper[4754]: I0105 21:31:12.982663 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-586cd7f6-k6trg" podUID="cef8ee76-7c6e-420e-8c38-a7ad816cd513" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:13 crc kubenswrapper[4754]: I0105 21:31:13.058958 4754 patch_prober.go:28] interesting pod/console-operator-58897d9998-bxhwf container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:13 crc kubenswrapper[4754]: I0105 21:31:13.059048 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-bxhwf" podUID="1968df11-e45d-47a0-a3bd-5dad31d14c8c" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:13 crc kubenswrapper[4754]: I0105 21:31:13.058944 4754 
patch_prober.go:28] interesting pod/console-operator-58897d9998-bxhwf container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:13 crc kubenswrapper[4754]: I0105 21:31:13.059212 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-bxhwf" podUID="1968df11-e45d-47a0-a3bd-5dad31d14c8c" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:13 crc kubenswrapper[4754]: I0105 21:31:13.070878 4754 patch_prober.go:28] interesting pod/logging-loki-gateway-586cd7f6-c4rps container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:13 crc kubenswrapper[4754]: I0105 21:31:13.070911 4754 patch_prober.go:28] interesting pod/logging-loki-gateway-586cd7f6-c4rps container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:13 crc kubenswrapper[4754]: I0105 21:31:13.070970 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-586cd7f6-c4rps" podUID="85a07def-c26c-49aa-ae32-c7772e9ebecc" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:13 crc kubenswrapper[4754]: I0105 21:31:13.071010 4754 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-586cd7f6-c4rps" podUID="85a07def-c26c-49aa-ae32-c7772e9ebecc" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:13 crc kubenswrapper[4754]: I0105 21:31:13.232640 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-74f9c55c9c-f9rnv" podUID="a078215d-9fb5-413f-b542-ca5b3c6fb296" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:13 crc kubenswrapper[4754]: I0105 21:31:13.232638 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-74f9c55c9c-f9rnv" podUID="a078215d-9fb5-413f-b542-ca5b3c6fb296" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:13 crc kubenswrapper[4754]: I0105 21:31:13.320918 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd72b9pp" podUID="fed06176-d7ad-4373-84df-204b6fdbf5cf" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:13 crc kubenswrapper[4754]: I0105 21:31:13.321010 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd72b9pp" podUID="fed06176-d7ad-4373-84df-204b6fdbf5cf" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/healthz\": context deadline exceeded (Client.Timeout 
exceeded while awaiting headers)" Jan 05 21:31:13 crc kubenswrapper[4754]: I0105 21:31:13.902795 4754 patch_prober.go:28] interesting pod/loki-operator-controller-manager-5c7d94bdc4-k9968 container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.47:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:13 crc kubenswrapper[4754]: I0105 21:31:13.903050 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-5c7d94bdc4-k9968" podUID="163085a0-0b43-4d21-aefc-ec28ba9c6e3f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.47:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:13 crc kubenswrapper[4754]: I0105 21:31:13.903112 4754 patch_prober.go:28] interesting pod/loki-operator-controller-manager-5c7d94bdc4-k9968 container/manager namespace/openshift-operators-redhat: Liveness probe status=failure output="Get \"http://10.217.0.47:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:13 crc kubenswrapper[4754]: I0105 21:31:13.903127 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators-redhat/loki-operator-controller-manager-5c7d94bdc4-k9968" podUID="163085a0-0b43-4d21-aefc-ec28ba9c6e3f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.47:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:14 crc kubenswrapper[4754]: I0105 21:31:14.135052 4754 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while 
awaiting headers)" start-of-body= Jan 05 21:31:14 crc kubenswrapper[4754]: I0105 21:31:14.135108 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:14 crc kubenswrapper[4754]: I0105 21:31:14.264962 4754 patch_prober.go:28] interesting pod/metrics-server-9b97c7f7b-p9nz9 container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.75:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:14 crc kubenswrapper[4754]: I0105 21:31:14.265018 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-9b97c7f7b-p9nz9" podUID="c0ea00d6-9ff0-4a23-8481-369189bdf8f5" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.75:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:14 crc kubenswrapper[4754]: I0105 21:31:14.646574 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-79f77fb8f4-2q9lk" podUID="823e1e7d-9555-4324-a7aa-6add85d4d9f3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.91:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:14 crc kubenswrapper[4754]: I0105 21:31:14.646694 4754 patch_prober.go:28] interesting pod/console-6f8c5cd5c9-2vcvq container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.136:8443/health\": net/http: request canceled while waiting 
for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:14 crc kubenswrapper[4754]: I0105 21:31:14.646722 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-6f8c5cd5c9-2vcvq" podUID="b9e545f1-ae96-4643-9412-10ada69b1b72" containerName="console" probeResult="failure" output="Get \"https://10.217.0.136:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:14 crc kubenswrapper[4754]: I0105 21:31:14.710523 4754 patch_prober.go:28] interesting pod/monitoring-plugin-68874c8d69-scmxl container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.77:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:14 crc kubenswrapper[4754]: I0105 21:31:14.710582 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-68874c8d69-scmxl" podUID="f31e8d59-677f-4203-9104-bc930b45022e" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.77:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:14 crc kubenswrapper[4754]: I0105 21:31:14.940580 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-75f4999fb9-2ss2h" podUID="cdb2b1f2-eb13-466c-bc69-5cb4307eb695" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.93:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:14 crc kubenswrapper[4754]: I0105 21:31:14.940628 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-75f4999fb9-2ss2h" podUID="cdb2b1f2-eb13-466c-bc69-5cb4307eb695" 
containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.93:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:15 crc kubenswrapper[4754]: I0105 21:31:15.013013 4754 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lcnw9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:15 crc kubenswrapper[4754]: I0105 21:31:15.013414 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lcnw9" podUID="1c252969-10b1-47d5-afef-b58cb4895766" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:15 crc kubenswrapper[4754]: I0105 21:31:15.013279 4754 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lcnw9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:15 crc kubenswrapper[4754]: I0105 21:31:15.013538 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lcnw9" podUID="1c252969-10b1-47d5-afef-b58cb4895766" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:15 crc kubenswrapper[4754]: I0105 21:31:15.020586 4754 patch_prober.go:28] interesting 
pod/catalog-operator-68c6474976-5zkp6 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:15 crc kubenswrapper[4754]: I0105 21:31:15.020701 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5zkp6" podUID="3e55c0bf-988b-4b2e-b44b-2343b48ff9f8" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:15 crc kubenswrapper[4754]: I0105 21:31:15.020614 4754 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-5zkp6 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:15 crc kubenswrapper[4754]: I0105 21:31:15.020812 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5zkp6" podUID="3e55c0bf-988b-4b2e-b44b-2343b48ff9f8" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:15 crc kubenswrapper[4754]: I0105 21:31:15.044388 4754 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-sckmz container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout 
exceeded while awaiting headers)" start-of-body= Jan 05 21:31:15 crc kubenswrapper[4754]: I0105 21:31:15.044480 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sckmz" podUID="f39a32ad-c337-4f04-b2ac-9a55729d7d4c" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:15 crc kubenswrapper[4754]: I0105 21:31:15.044422 4754 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-sckmz container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:15 crc kubenswrapper[4754]: I0105 21:31:15.044692 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sckmz" podUID="f39a32ad-c337-4f04-b2ac-9a55729d7d4c" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:15 crc kubenswrapper[4754]: I0105 21:31:15.307540 4754 patch_prober.go:28] interesting pod/router-default-5444994796-vnrxg container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:15 crc kubenswrapper[4754]: I0105 21:31:15.307616 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-vnrxg" podUID="16642695-5006-4f7d-829c-becc9345dd6e" containerName="router" probeResult="failure" output="Get 
\"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:15 crc kubenswrapper[4754]: I0105 21:31:15.307540 4754 patch_prober.go:28] interesting pod/router-default-5444994796-vnrxg container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:15 crc kubenswrapper[4754]: I0105 21:31:15.307693 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-vnrxg" podUID="16642695-5006-4f7d-829c-becc9345dd6e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:15 crc kubenswrapper[4754]: I0105 21:31:15.317169 4754 patch_prober.go:28] interesting pod/nmstate-webhook-f8fb84555-ht8zq container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.87:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:15 crc kubenswrapper[4754]: I0105 21:31:15.317205 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-f8fb84555-ht8zq" podUID="eb9a96b4-392b-43f5-ad59-4e7cd4171f33" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.87:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:15 crc kubenswrapper[4754]: I0105 21:31:15.751962 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-qcjf8" podUID="957087f5-55fd-4a40-a01c-f96bf31dacf8" containerName="nmstate-handler" probeResult="failure" output="command timed out" Jan 05 21:31:16 crc 
kubenswrapper[4754]: I0105 21:31:16.061997 4754 patch_prober.go:28] interesting pod/thanos-querier-869b668f44-6cplm container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.74:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:16 crc kubenswrapper[4754]: I0105 21:31:16.063143 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-869b668f44-6cplm" podUID="0597c753-3003-4d69-ad03-7e12215f7274" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.74:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:16 crc kubenswrapper[4754]: I0105 21:31:16.109801 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-gkzx4" podUID="c584577b-8f80-4506-9fa5-3f8e9df40f02" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:16 crc kubenswrapper[4754]: I0105 21:31:16.109818 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-f7vfs" podUID="1b800ad7-0ece-4722-9ad1-e20a2b5c7d42" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.94:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:16 crc kubenswrapper[4754]: I0105 21:31:16.110023 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-gkzx4" podUID="c584577b-8f80-4506-9fa5-3f8e9df40f02" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:16 crc kubenswrapper[4754]: I0105 21:31:16.150501 4754 
prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-5bddd4b946-b62cb" podUID="2f15c30e-3828-471c-8e71-3573735397a1" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.95:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:16 crc kubenswrapper[4754]: I0105 21:31:16.193461 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-gkzx4" podUID="c584577b-8f80-4506-9fa5-3f8e9df40f02" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:16 crc kubenswrapper[4754]: I0105 21:31:16.194188 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-f7vfs" podUID="1b800ad7-0ece-4722-9ad1-e20a2b5c7d42" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.94:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:16 crc kubenswrapper[4754]: I0105 21:31:16.197273 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-5bddd4b946-b62cb" podUID="2f15c30e-3828-471c-8e71-3573735397a1" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.95:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:16 crc kubenswrapper[4754]: I0105 21:31:16.224703 4754 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:16 crc kubenswrapper[4754]: I0105 21:31:16.224778 4754 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:16 crc kubenswrapper[4754]: I0105 21:31:16.279543 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-k59mk" podUID="616c6f2a-f08e-450d-9ff1-cad7a75e25b2" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:16 crc kubenswrapper[4754]: I0105 21:31:16.280324 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-operator-7bb596d4b9-gbtbf" podUID="6836e11d-3e01-4752-ba84-0ba74829283f" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.99:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:16 crc kubenswrapper[4754]: I0105 21:31:16.694234 4754 patch_prober.go:28] interesting pod/controller-manager-66dd56655b-c2cjs container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": context deadline exceeded" start-of-body= Jan 05 21:31:16 crc kubenswrapper[4754]: I0105 21:31:16.697883 4754 patch_prober.go:28] interesting pod/route-controller-manager-7bb885d9c5-txdbv container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:16 crc kubenswrapper[4754]: I0105 21:31:16.697961 4754 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-route-controller-manager/route-controller-manager-7bb885d9c5-txdbv" podUID="a7261372-b88c-4c55-bd6e-48fd2dd88614" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:16 crc kubenswrapper[4754]: I0105 21:31:16.694579 4754 patch_prober.go:28] interesting pod/route-controller-manager-7bb885d9c5-txdbv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:16 crc kubenswrapper[4754]: I0105 21:31:16.698104 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7bb885d9c5-txdbv" podUID="a7261372-b88c-4c55-bd6e-48fd2dd88614" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:16 crc kubenswrapper[4754]: I0105 21:31:16.739924 4754 patch_prober.go:28] interesting pod/controller-manager-66dd56655b-c2cjs container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:16 crc kubenswrapper[4754]: I0105 21:31:16.739996 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-66dd56655b-c2cjs" podUID="26564ac3-15de-49b9-8ba9-d7075ba66cb5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while 
awaiting headers)" Jan 05 21:31:16 crc kubenswrapper[4754]: I0105 21:31:16.741005 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-66dd56655b-c2cjs" podUID="26564ac3-15de-49b9-8ba9-d7075ba66cb5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": context deadline exceeded" Jan 05 21:31:16 crc kubenswrapper[4754]: I0105 21:31:16.753197 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="bd134f60-e97c-487b-be9e-c356c7478c21" containerName="ovn-northd" probeResult="failure" output="command timed out" Jan 05 21:31:16 crc kubenswrapper[4754]: I0105 21:31:16.753811 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 05 21:31:16 crc kubenswrapper[4754]: I0105 21:31:16.755789 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-northd-0" podUID="bd134f60-e97c-487b-be9e-c356c7478c21" containerName="ovn-northd" probeResult="failure" output="command timed out" Jan 05 21:31:16 crc kubenswrapper[4754]: I0105 21:31:16.755829 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ovn-northd-0" Jan 05 21:31:16 crc kubenswrapper[4754]: I0105 21:31:16.758650 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="68c442e4-0c24-4351-84b7-ccda8b09ea2c" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 05 21:31:16 crc kubenswrapper[4754]: I0105 21:31:16.960551 4754 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-pjzck container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:16 crc kubenswrapper[4754]: I0105 21:31:16.960591 
4754 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-pjzck container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:16 crc kubenswrapper[4754]: I0105 21:31:16.960620 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pjzck" podUID="9f282a77-59b3-4b2c-8c62-a2526d2a77b5" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:16 crc kubenswrapper[4754]: I0105 21:31:16.960655 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pjzck" podUID="9f282a77-59b3-4b2c-8c62-a2526d2a77b5" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:16 crc kubenswrapper[4754]: I0105 21:31:16.966004 4754 patch_prober.go:28] interesting pod/logging-loki-distributor-5f678c8dd6-pd7zx container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:16 crc kubenswrapper[4754]: I0105 21:31:16.966085 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-pd7zx" podUID="4ebdfefc-77a3-4dca-a664-5468209724ec" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout 
exceeded while awaiting headers)" Jan 05 21:31:17 crc kubenswrapper[4754]: I0105 21:31:17.150589 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-8t5gl" podUID="83a7e6b7-db24-4f2f-988d-ed13a27a06af" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.100:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:17 crc kubenswrapper[4754]: I0105 21:31:17.150589 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-h2wmj" podUID="2aeeabff-cc4c-49b1-a895-c21ae9d43e3d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.101:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:17 crc kubenswrapper[4754]: I0105 21:31:17.232582 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-6lf69" podUID="1f664632-a6e1-491d-b0cf-be1717a6d28b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:17 crc kubenswrapper[4754]: I0105 21:31:17.232628 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-h2wmj" podUID="2aeeabff-cc4c-49b1-a895-c21ae9d43e3d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.101:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:17 crc kubenswrapper[4754]: I0105 21:31:17.317549 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-8t5gl" podUID="83a7e6b7-db24-4f2f-988d-ed13a27a06af" containerName="manager" probeResult="failure" output="Get 
\"http://10.217.0.100:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:17 crc kubenswrapper[4754]: I0105 21:31:17.320901 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-7b945" podUID="f1a3a024-3293-4e7b-b1cd-c93c914c190e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:17 crc kubenswrapper[4754]: I0105 21:31:17.487492 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-lrhk6" podUID="92928a21-7bbe-44b9-9d2b-2fcce8d0dd1f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:17 crc kubenswrapper[4754]: I0105 21:31:17.487836 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-6lf69" podUID="1f664632-a6e1-491d-b0cf-be1717a6d28b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:17 crc kubenswrapper[4754]: I0105 21:31:17.489685 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-brnwf" podUID="dd93e799-6591-41d5-988a-18cc6d8c836d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:17 crc kubenswrapper[4754]: I0105 21:31:17.498052 4754 patch_prober.go:28] interesting pod/logging-loki-querier-76788598db-qfwrj container/loki-querier namespace/openshift-logging: Readiness probe status=failure 
output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:17 crc kubenswrapper[4754]: I0105 21:31:17.503984 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76788598db-qfwrj" podUID="d8b747f5-71f0-48b5-aae8-375ef3d8ef00" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:17 crc kubenswrapper[4754]: I0105 21:31:17.518527 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 05 21:31:17 crc kubenswrapper[4754]: I0105 21:31:17.571521 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-swt4w" podUID="0cd346d8-d14a-404e-b2fa-16fc917e6886" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:17 crc kubenswrapper[4754]: I0105 21:31:17.736579 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="ffdd4590-0498-4083-996d-75035d8fba10" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.8:8081/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:17 crc kubenswrapper[4754]: I0105 21:31:17.736637 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="ffdd4590-0498-4083-996d-75035d8fba10" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.8:8080/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting 
headers)" Jan 05 21:31:17 crc kubenswrapper[4754]: I0105 21:31:17.736576 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-568985c78-st6w9" podUID="91877573-8199-4055-988f-96bd6469af4f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:17 crc kubenswrapper[4754]: I0105 21:31:17.736825 4754 patch_prober.go:28] interesting pod/logging-loki-query-frontend-69d9546745-swdgc container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:17 crc kubenswrapper[4754]: I0105 21:31:17.736948 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-69d9546745-swdgc" podUID="dbe68fed-2285-4e97-9c3d-d9fb903dc682" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.54:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:17 crc kubenswrapper[4754]: I0105 21:31:17.754590 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="99d5dfe0-666a-44dc-b93c-5c92ef395bcc" containerName="prometheus" probeResult="failure" output="command timed out" Jan 05 21:31:17 crc kubenswrapper[4754]: I0105 21:31:17.754611 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="99d5dfe0-666a-44dc-b93c-5c92ef395bcc" containerName="prometheus" probeResult="failure" output="command timed out" Jan 05 21:31:17 crc kubenswrapper[4754]: I0105 21:31:17.769662 4754 kuberuntime_manager.go:1027] 
"Message for Container of pod" containerName="ovn-northd" containerStatusID={"Type":"cri-o","ID":"709626ef98220cf0bc8eb27d6bcbf3f4c1469b194ad468a0f28360d3279bc5f6"} pod="openstack/ovn-northd-0" containerMessage="Container ovn-northd failed liveness probe, will be restarted" Jan 05 21:31:17 crc kubenswrapper[4754]: I0105 21:31:17.769899 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="bd134f60-e97c-487b-be9e-c356c7478c21" containerName="ovn-northd" containerID="cri-o://709626ef98220cf0bc8eb27d6bcbf3f4c1469b194ad468a0f28360d3279bc5f6" gracePeriod=30 Jan 05 21:31:17 crc kubenswrapper[4754]: I0105 21:31:17.819493 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-gj5xk" podUID="8df02427-4d10-41bb-9798-82cf7b8bca3e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:17 crc kubenswrapper[4754]: I0105 21:31:17.819809 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-7b945" podUID="f1a3a024-3293-4e7b-b1cd-c93c914c190e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:17 crc kubenswrapper[4754]: I0105 21:31:17.903622 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="652366fc-9032-455e-9e13-b71fd3ff76e3" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.167:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:17 crc kubenswrapper[4754]: I0105 21:31:17.903652 4754 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-h56wh" podUID="6d71c5c9-f75a-475f-880c-d234d43ad7d9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:17 crc kubenswrapper[4754]: I0105 21:31:17.903737 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="652366fc-9032-455e-9e13-b71fd3ff76e3" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.167:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:17 crc kubenswrapper[4754]: I0105 21:31:17.986493 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-dqks7" podUID="f289c3c4-ad02-4022-ac22-239133f6c1ca" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:17 crc kubenswrapper[4754]: I0105 21:31:17.986533 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-jngdn" podUID="db5f9ab8-2422-439c-a857-23f918cfa919" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:18 crc kubenswrapper[4754]: I0105 21:31:18.068500 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-jcntk" podUID="82f028d6-51a7-461a-ae7d-cd2da5f47afb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:18 crc kubenswrapper[4754]: 
I0105 21:31:18.150483 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-dqks7" podUID="f289c3c4-ad02-4022-ac22-239133f6c1ca" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:18 crc kubenswrapper[4754]: I0105 21:31:18.150518 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-hmwzd" podUID="a6907240-bf3c-4f9a-b7c8-4fbdf3174c8a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:18 crc kubenswrapper[4754]: I0105 21:31:18.234871 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-lrhk6" podUID="92928a21-7bbe-44b9-9d2b-2fcce8d0dd1f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:18 crc kubenswrapper[4754]: I0105 21:31:18.399490 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-brnwf" podUID="dd93e799-6591-41d5-988a-18cc6d8c836d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:18 crc kubenswrapper[4754]: I0105 21:31:18.399600 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-z6whp" podUID="8118d4b3-34f3-49b4-ab29-1a2b17adacfb" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 
21:31:18 crc kubenswrapper[4754]: I0105 21:31:18.399982 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-swt4w" podUID="0cd346d8-d14a-404e-b2fa-16fc917e6886" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:18 crc kubenswrapper[4754]: I0105 21:31:18.400330 4754 patch_prober.go:28] interesting pod/logging-loki-gateway-586cd7f6-c4rps container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8083/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:18 crc kubenswrapper[4754]: I0105 21:31:18.400356 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-586cd7f6-c4rps" podUID="85a07def-c26c-49aa-ae32-c7772e9ebecc" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:18 crc kubenswrapper[4754]: I0105 21:31:18.400393 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-69j2s" podUID="983e4f4a-fe90-4460-ad97-b6955a888933" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:18 crc kubenswrapper[4754]: I0105 21:31:18.400427 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-69j2s" podUID="983e4f4a-fe90-4460-ad97-b6955a888933" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:18 
crc kubenswrapper[4754]: I0105 21:31:18.400537 4754 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:3101/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:18 crc kubenswrapper[4754]: I0105 21:31:18.400561 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="7f8103c6-3d68-4568-ae3b-89f606aa116a" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.58:3101/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:18 crc kubenswrapper[4754]: I0105 21:31:18.400609 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-gj5xk" podUID="8df02427-4d10-41bb-9798-82cf7b8bca3e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:18 crc kubenswrapper[4754]: I0105 21:31:18.400686 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-h56wh" podUID="6d71c5c9-f75a-475f-880c-d234d43ad7d9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:18 crc kubenswrapper[4754]: I0105 21:31:18.400742 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-jngdn" podUID="db5f9ab8-2422-439c-a857-23f918cfa919" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:18 crc 
kubenswrapper[4754]: I0105 21:31:18.400765 4754 patch_prober.go:28] interesting pod/logging-loki-gateway-586cd7f6-k6trg container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:18 crc kubenswrapper[4754]: I0105 21:31:18.400793 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-586cd7f6-k6trg" podUID="cef8ee76-7c6e-420e-8c38-a7ad816cd513" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:18 crc kubenswrapper[4754]: I0105 21:31:18.401430 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-jcntk" podUID="82f028d6-51a7-461a-ae7d-cd2da5f47afb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:18 crc kubenswrapper[4754]: I0105 21:31:18.401482 4754 patch_prober.go:28] interesting pod/logging-loki-gateway-586cd7f6-c4rps container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:18 crc kubenswrapper[4754]: I0105 21:31:18.401501 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-586cd7f6-c4rps" podUID="85a07def-c26c-49aa-ae32-c7772e9ebecc" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded 
while awaiting headers)" Jan 05 21:31:18 crc kubenswrapper[4754]: I0105 21:31:18.402073 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-hmwzd" podUID="a6907240-bf3c-4f9a-b7c8-4fbdf3174c8a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:18 crc kubenswrapper[4754]: I0105 21:31:18.402129 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-688488f44f-62gsk" podUID="4b33baa5-64bb-4df7-ac22-925d718f9d60" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:18 crc kubenswrapper[4754]: I0105 21:31:18.402179 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-944qj" podUID="4d09717a-7822-46ae-8192-62aa7305304b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:18 crc kubenswrapper[4754]: I0105 21:31:18.402274 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-944qj" podUID="4d09717a-7822-46ae-8192-62aa7305304b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:18 crc kubenswrapper[4754]: I0105 21:31:18.402313 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-s4n44" podUID="77f4456d-e6a6-466a-a74c-5276e4951784" containerName="manager" probeResult="failure" output="Get 
\"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:18 crc kubenswrapper[4754]: I0105 21:31:18.402346 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-s4n44" podUID="77f4456d-e6a6-466a-a74c-5276e4951784" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:18 crc kubenswrapper[4754]: I0105 21:31:18.402380 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-z6whp" podUID="8118d4b3-34f3-49b4-ab29-1a2b17adacfb" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:18 crc kubenswrapper[4754]: I0105 21:31:18.400557 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-568985c78-st6w9" podUID="91877573-8199-4055-988f-96bd6469af4f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:18 crc kubenswrapper[4754]: I0105 21:31:18.434224 4754 patch_prober.go:28] interesting pod/logging-loki-gateway-586cd7f6-k6trg container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:18 crc kubenswrapper[4754]: I0105 21:31:18.434312 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-586cd7f6-k6trg" podUID="cef8ee76-7c6e-420e-8c38-a7ad816cd513" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": net/http: 
request canceled (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:18 crc kubenswrapper[4754]: I0105 21:31:18.434274 4754 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.76:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:18 crc kubenswrapper[4754]: I0105 21:31:18.434396 4754 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.80:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:18 crc kubenswrapper[4754]: I0105 21:31:18.434423 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="85b1b29c-72d4-41ff-8185-f1cd738be7db" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.80:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:18 crc kubenswrapper[4754]: I0105 21:31:18.434421 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="8970d80f-9277-46ca-ba45-c09e3362c3e2" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.76:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:18 crc kubenswrapper[4754]: I0105 21:31:18.589270 4754 patch_prober.go:28] interesting pod/oauth-openshift-58444664d6-99b25 container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:18 crc 
kubenswrapper[4754]: I0105 21:31:18.589383 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-58444664d6-99b25" podUID="c82fe254-bc85-4771-b358-017afaff55e9" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:18 crc kubenswrapper[4754]: I0105 21:31:18.589457 4754 patch_prober.go:28] interesting pod/oauth-openshift-58444664d6-99b25 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:18 crc kubenswrapper[4754]: I0105 21:31:18.589486 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-58444664d6-99b25" podUID="c82fe254-bc85-4771-b358-017afaff55e9" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:19 crc kubenswrapper[4754]: I0105 21:31:19.753274 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="9af784f4-79c9-4422-bc62-a2c49c9bb7cc" containerName="galera" probeResult="failure" output="command timed out" Jan 05 21:31:19 crc kubenswrapper[4754]: I0105 21:31:19.753972 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-galera-0" Jan 05 21:31:19 crc kubenswrapper[4754]: I0105 21:31:19.755341 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="9af784f4-79c9-4422-bc62-a2c49c9bb7cc" containerName="galera" probeResult="failure" output="command timed out" Jan 05 21:31:19 crc 
kubenswrapper[4754]: I0105 21:31:19.755707 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 05 21:31:19 crc kubenswrapper[4754]: I0105 21:31:19.755968 4754 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"b083adb065d09c27807217ad4c0fa1c6865f4a1d18e6a5ab19df124734c22948"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Jan 05 21:31:19 crc kubenswrapper[4754]: I0105 21:31:19.758200 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-zv7sc" podUID="3f2e87f2-d218-4699-81bb-6156676884d3" containerName="registry-server" probeResult="failure" output="command timed out" Jan 05 21:31:19 crc kubenswrapper[4754]: I0105 21:31:19.764061 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-zv7sc" podUID="3f2e87f2-d218-4699-81bb-6156676884d3" containerName="registry-server" probeResult="failure" output="command timed out" Jan 05 21:31:19 crc kubenswrapper[4754]: I0105 21:31:19.824633 4754 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-cv74d container/registry namespace/openshift-image-registry: Liveness probe status=failure output="Get \"https://10.217.0.60:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:19 crc kubenswrapper[4754]: I0105 21:31:19.824679 4754 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-cv74d container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.60:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:19 crc kubenswrapper[4754]: I0105 21:31:19.824720 4754 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-image-registry/image-registry-66df7c8f76-cv74d" podUID="3db77f8c-170e-42a1-a6a5-ae5188f0c8d2" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.60:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:19 crc kubenswrapper[4754]: I0105 21:31:19.824755 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66df7c8f76-cv74d" podUID="3db77f8c-170e-42a1-a6a5-ae5188f0c8d2" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.60:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:19 crc kubenswrapper[4754]: I0105 21:31:19.884565 4754 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-pjzck container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:19 crc kubenswrapper[4754]: I0105 21:31:19.884648 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pjzck" podUID="9f282a77-59b3-4b2c-8c62-a2526d2a77b5" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:19 crc kubenswrapper[4754]: I0105 21:31:19.884733 4754 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-pjzck container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:19 crc kubenswrapper[4754]: I0105 21:31:19.884758 4754 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pjzck" podUID="9f282a77-59b3-4b2c-8c62-a2526d2a77b5" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:20 crc kubenswrapper[4754]: I0105 21:31:20.057825 4754 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-679tw container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:20 crc kubenswrapper[4754]: I0105 21:31:20.057951 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-679tw" podUID="c0d4f5db-f43e-4812-8e5b-5f1efcdcb913" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:20 crc kubenswrapper[4754]: I0105 21:31:20.059487 4754 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-679tw container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:20 crc kubenswrapper[4754]: I0105 21:31:20.059585 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-679tw" podUID="c0d4f5db-f43e-4812-8e5b-5f1efcdcb913" containerName="prometheus-operator-admission-webhook" 
probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:20 crc kubenswrapper[4754]: I0105 21:31:20.270498 4754 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-27lzx container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.16:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:20 crc kubenswrapper[4754]: I0105 21:31:20.270516 4754 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-27lzx container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.16:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:20 crc kubenswrapper[4754]: I0105 21:31:20.270561 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-27lzx" podUID="96d98210-f390-413e-8fe4-96ec610d2071" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.16:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:20 crc kubenswrapper[4754]: I0105 21:31:20.270584 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-27lzx" podUID="96d98210-f390-413e-8fe4-96ec610d2071" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.16:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:20 crc kubenswrapper[4754]: E0105 21:31:20.617834 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="709626ef98220cf0bc8eb27d6bcbf3f4c1469b194ad468a0f28360d3279bc5f6" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 05 21:31:20 crc kubenswrapper[4754]: E0105 21:31:20.620876 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="709626ef98220cf0bc8eb27d6bcbf3f4c1469b194ad468a0f28360d3279bc5f6" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 05 21:31:20 crc kubenswrapper[4754]: E0105 21:31:20.622262 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="709626ef98220cf0bc8eb27d6bcbf3f4c1469b194ad468a0f28360d3279bc5f6" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 05 21:31:20 crc kubenswrapper[4754]: E0105 21:31:20.622385 4754 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="bd134f60-e97c-487b-be9e-c356c7478c21" containerName="ovn-northd" Jan 05 21:31:20 crc kubenswrapper[4754]: I0105 21:31:20.756941 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-nvq9f" podUID="42f4b8f4-dfaf-400f-ac27-55d9ca363057" containerName="registry-server" probeResult="failure" output="command timed out" Jan 05 21:31:21 crc kubenswrapper[4754]: I0105 21:31:21.061198 4754 patch_prober.go:28] interesting pod/thanos-querier-869b668f44-6cplm container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.74:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:21 crc 
kubenswrapper[4754]: I0105 21:31:21.061280 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-869b668f44-6cplm" podUID="0597c753-3003-4d69-ad03-7e12215f7274" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.74:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:21 crc kubenswrapper[4754]: I0105 21:31:21.235568 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-k59mk" podUID="616c6f2a-f08e-450d-9ff1-cad7a75e25b2" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:21 crc kubenswrapper[4754]: I0105 21:31:21.235598 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-webhook-687f57d79b-k59mk" podUID="616c6f2a-f08e-450d-9ff1-cad7a75e25b2" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:21 crc kubenswrapper[4754]: I0105 21:31:21.507263 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-j8r5p" podUID="5f585df1-958f-4733-a720-2d37460d2b12" containerName="registry-server" probeResult="failure" output=< Jan 05 21:31:21 crc kubenswrapper[4754]: timeout: failed to connect service ":50051" within 1s Jan 05 21:31:21 crc kubenswrapper[4754]: > Jan 05 21:31:21 crc kubenswrapper[4754]: I0105 21:31:21.511676 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-kk2wq" podUID="dabb102c-20ff-4424-95d7-d26f22f594f5" containerName="registry-server" probeResult="failure" output=< Jan 05 21:31:21 crc kubenswrapper[4754]: timeout: failed to connect service ":50051" within 1s Jan 
05 21:31:21 crc kubenswrapper[4754]: > Jan 05 21:31:21 crc kubenswrapper[4754]: I0105 21:31:21.512883 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-n9kmf" podUID="187150cd-d7a9-4dfd-8151-0e6a88e82ddc" containerName="registry-server" probeResult="failure" output=< Jan 05 21:31:21 crc kubenswrapper[4754]: timeout: failed to connect service ":50051" within 1s Jan 05 21:31:21 crc kubenswrapper[4754]: > Jan 05 21:31:21 crc kubenswrapper[4754]: I0105 21:31:21.588420 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-j8r5p" podUID="5f585df1-958f-4733-a720-2d37460d2b12" containerName="registry-server" probeResult="failure" output=< Jan 05 21:31:21 crc kubenswrapper[4754]: timeout: failed to connect service ":50051" within 1s Jan 05 21:31:21 crc kubenswrapper[4754]: > Jan 05 21:31:21 crc kubenswrapper[4754]: I0105 21:31:21.589727 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-tjtwh" podUID="3af58cc4-e753-4eb0-91c9-8b93516d665e" containerName="registry-server" probeResult="failure" output=< Jan 05 21:31:21 crc kubenswrapper[4754]: timeout: failed to connect service ":50051" within 1s Jan 05 21:31:21 crc kubenswrapper[4754]: > Jan 05 21:31:21 crc kubenswrapper[4754]: I0105 21:31:21.591589 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-tjtwh" podUID="3af58cc4-e753-4eb0-91c9-8b93516d665e" containerName="registry-server" probeResult="failure" output=< Jan 05 21:31:21 crc kubenswrapper[4754]: timeout: failed to connect service ":50051" within 1s Jan 05 21:31:21 crc kubenswrapper[4754]: > Jan 05 21:31:21 crc kubenswrapper[4754]: I0105 21:31:21.592828 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-97vwr" podUID="a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c" containerName="registry-server" probeResult="failure" output=< Jan 
05 21:31:21 crc kubenswrapper[4754]: timeout: failed to connect service ":50051" within 1s Jan 05 21:31:21 crc kubenswrapper[4754]: > Jan 05 21:31:21 crc kubenswrapper[4754]: I0105 21:31:21.594744 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-kk2wq" podUID="dabb102c-20ff-4424-95d7-d26f22f594f5" containerName="registry-server" probeResult="failure" output=< Jan 05 21:31:21 crc kubenswrapper[4754]: timeout: failed to connect service ":50051" within 1s Jan 05 21:31:21 crc kubenswrapper[4754]: > Jan 05 21:31:21 crc kubenswrapper[4754]: I0105 21:31:21.604046 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-n9kmf" podUID="187150cd-d7a9-4dfd-8151-0e6a88e82ddc" containerName="registry-server" probeResult="failure" output=< Jan 05 21:31:21 crc kubenswrapper[4754]: timeout: failed to connect service ":50051" within 1s Jan 05 21:31:21 crc kubenswrapper[4754]: > Jan 05 21:31:21 crc kubenswrapper[4754]: I0105 21:31:21.751938 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="81aaed05-ed65-4414-bf4f-7e5e4cf9966a" containerName="galera" probeResult="failure" output="command timed out" Jan 05 21:31:21 crc kubenswrapper[4754]: I0105 21:31:21.752468 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="81aaed05-ed65-4414-bf4f-7e5e4cf9966a" containerName="galera" probeResult="failure" output="command timed out" Jan 05 21:31:22 crc kubenswrapper[4754]: E0105 21:31:22.564538 4754 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:22 crc kubenswrapper[4754]: I0105 21:31:22.613818 4754 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver 
namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:22 crc kubenswrapper[4754]: I0105 21:31:22.613863 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:22 crc kubenswrapper[4754]: I0105 21:31:22.697466 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-2g75d" podUID="736d23ce-6bc0-439b-b1ff-86aad6363c2a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:22 crc kubenswrapper[4754]: I0105 21:31:22.753817 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="99d5dfe0-666a-44dc-b93c-5c92ef395bcc" containerName="prometheus" probeResult="failure" output="command timed out" Jan 05 21:31:22 crc kubenswrapper[4754]: I0105 21:31:22.754381 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="99d5dfe0-666a-44dc-b93c-5c92ef395bcc" containerName="prometheus" probeResult="failure" output="command timed out" Jan 05 21:31:22 crc kubenswrapper[4754]: I0105 21:31:22.754562 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Jan 05 21:31:22 crc kubenswrapper[4754]: I0105 21:31:22.756504 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="68c442e4-0c24-4351-84b7-ccda8b09ea2c" containerName="ceilometer-central-agent" 
probeResult="failure" output="command timed out" Jan 05 21:31:22 crc kubenswrapper[4754]: I0105 21:31:22.756591 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Jan 05 21:31:22 crc kubenswrapper[4754]: I0105 21:31:22.758036 4754 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"0a547e9ff025f817135f2b3637e5774233ea414d82ed97c7e8144c6754b93c55"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted" Jan 05 21:31:22 crc kubenswrapper[4754]: I0105 21:31:22.758488 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="68c442e4-0c24-4351-84b7-ccda8b09ea2c" containerName="ceilometer-central-agent" containerID="cri-o://0a547e9ff025f817135f2b3637e5774233ea414d82ed97c7e8144c6754b93c55" gracePeriod=30 Jan 05 21:31:22 crc kubenswrapper[4754]: I0105 21:31:22.854593 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-944qj" event={"ID":"4d09717a-7822-46ae-8192-62aa7305304b","Type":"ContainerDied","Data":"aa3b9c76564db737b1a5def2e85c6632cf4433e4a91eb2dec392320b0bf2f098"} Jan 05 21:31:22 crc kubenswrapper[4754]: I0105 21:31:22.854504 4754 generic.go:334] "Generic (PLEG): container finished" podID="4d09717a-7822-46ae-8192-62aa7305304b" containerID="aa3b9c76564db737b1a5def2e85c6632cf4433e4a91eb2dec392320b0bf2f098" exitCode=1 Jan 05 21:31:22 crc kubenswrapper[4754]: I0105 21:31:22.860906 4754 scope.go:117] "RemoveContainer" containerID="aa3b9c76564db737b1a5def2e85c6632cf4433e4a91eb2dec392320b0bf2f098" Jan 05 21:31:22 crc kubenswrapper[4754]: I0105 21:31:22.876529 4754 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-ggkzj container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get 
\"http://10.217.0.63:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:22 crc kubenswrapper[4754]: I0105 21:31:22.876618 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-ggkzj" podUID="41d97351-8dc4-42de-bf00-4e8abbf24e0b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.63:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:22 crc kubenswrapper[4754]: I0105 21:31:22.876541 4754 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-ggkzj container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.63:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:22 crc kubenswrapper[4754]: I0105 21:31:22.876686 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-ggkzj" podUID="41d97351-8dc4-42de-bf00-4e8abbf24e0b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.63:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:22 crc kubenswrapper[4754]: I0105 21:31:22.876841 4754 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-hz59l container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:22 crc kubenswrapper[4754]: I0105 21:31:22.876856 4754 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-pjzck container/openshift-config-operator namespace/openshift-config-operator: Readiness probe 
status=failure output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:22 crc kubenswrapper[4754]: I0105 21:31:22.876915 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pjzck" podUID="9f282a77-59b3-4b2c-8c62-a2526d2a77b5" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:22 crc kubenswrapper[4754]: I0105 21:31:22.876850 4754 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-pjzck container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:22 crc kubenswrapper[4754]: I0105 21:31:22.876965 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pjzck" podUID="9f282a77-59b3-4b2c-8c62-a2526d2a77b5" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:22 crc kubenswrapper[4754]: I0105 21:31:22.877001 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pjzck" Jan 05 21:31:22 crc kubenswrapper[4754]: I0105 21:31:22.877027 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pjzck" Jan 05 21:31:22 crc kubenswrapper[4754]: I0105 
21:31:22.877753 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-hz59l" podUID="3eb6b845-fab0-4359-87bd-17a33f9e78ca" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:22 crc kubenswrapper[4754]: I0105 21:31:22.878445 4754 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"d5ae0da10b6f9aa06b9e2593943f28d9156c1950ac1ad2334e9bf55adda75217"} pod="openshift-config-operator/openshift-config-operator-7777fb866f-pjzck" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Jan 05 21:31:22 crc kubenswrapper[4754]: I0105 21:31:22.878495 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pjzck" podUID="9f282a77-59b3-4b2c-8c62-a2526d2a77b5" containerName="openshift-config-operator" containerID="cri-o://d5ae0da10b6f9aa06b9e2593943f28d9156c1950ac1ad2334e9bf55adda75217" gracePeriod=30 Jan 05 21:31:22 crc kubenswrapper[4754]: I0105 21:31:22.978530 4754 patch_prober.go:28] interesting pod/logging-loki-gateway-586cd7f6-k6trg container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:22 crc kubenswrapper[4754]: I0105 21:31:22.978604 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-586cd7f6-k6trg" podUID="cef8ee76-7c6e-420e-8c38-a7ad816cd513" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled (Client.Timeout exceeded 
while awaiting headers)" Jan 05 21:31:22 crc kubenswrapper[4754]: I0105 21:31:22.978606 4754 patch_prober.go:28] interesting pod/logging-loki-gateway-586cd7f6-k6trg container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:22 crc kubenswrapper[4754]: I0105 21:31:22.978649 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-586cd7f6-k6trg" podUID="cef8ee76-7c6e-420e-8c38-a7ad816cd513" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:23 crc kubenswrapper[4754]: I0105 21:31:23.020540 4754 patch_prober.go:28] interesting pod/downloads-7954f5f757-q6l9h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:23 crc kubenswrapper[4754]: I0105 21:31:23.020606 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-q6l9h" podUID="138601a3-19fd-44e5-b817-49b048fe3e88" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:23 crc kubenswrapper[4754]: I0105 21:31:23.020538 4754 patch_prober.go:28] interesting pod/downloads-7954f5f757-q6l9h container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.20:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:23 crc kubenswrapper[4754]: I0105 21:31:23.020667 4754 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-console/downloads-7954f5f757-q6l9h" podUID="138601a3-19fd-44e5-b817-49b048fe3e88" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:23 crc kubenswrapper[4754]: I0105 21:31:23.058880 4754 patch_prober.go:28] interesting pod/console-operator-58897d9998-bxhwf container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:23 crc kubenswrapper[4754]: I0105 21:31:23.059329 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-bxhwf" podUID="1968df11-e45d-47a0-a3bd-5dad31d14c8c" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:23 crc kubenswrapper[4754]: I0105 21:31:23.059199 4754 patch_prober.go:28] interesting pod/console-operator-58897d9998-bxhwf container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:23 crc kubenswrapper[4754]: I0105 21:31:23.059574 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-bxhwf" podUID="1968df11-e45d-47a0-a3bd-5dad31d14c8c" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:23 crc 
kubenswrapper[4754]: I0105 21:31:23.071042 4754 patch_prober.go:28] interesting pod/logging-loki-gateway-586cd7f6-c4rps container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:23 crc kubenswrapper[4754]: I0105 21:31:23.071062 4754 patch_prober.go:28] interesting pod/logging-loki-gateway-586cd7f6-c4rps container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:23 crc kubenswrapper[4754]: I0105 21:31:23.071265 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-586cd7f6-c4rps" podUID="85a07def-c26c-49aa-ae32-c7772e9ebecc" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:23 crc kubenswrapper[4754]: I0105 21:31:23.071351 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-586cd7f6-c4rps" podUID="85a07def-c26c-49aa-ae32-c7772e9ebecc" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:23 crc kubenswrapper[4754]: I0105 21:31:23.189559 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-74f9c55c9c-f9rnv" podUID="a078215d-9fb5-413f-b542-ca5b3c6fb296" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:23 crc kubenswrapper[4754]: I0105 21:31:23.252594 4754 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd72b9pp" podUID="fed06176-d7ad-4373-84df-204b6fdbf5cf" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:23 crc kubenswrapper[4754]: I0105 21:31:23.605197 4754 scope.go:117] "RemoveContainer" containerID="83af48b757fa31e7d8bbec5ac045d2ac5b9a51634d67308251b16fb56a7e25b4" Jan 05 21:31:23 crc kubenswrapper[4754]: E0105 21:31:23.607155 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:31:23 crc kubenswrapper[4754]: I0105 21:31:23.862539 4754 patch_prober.go:28] interesting pod/loki-operator-controller-manager-5c7d94bdc4-k9968 container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.47:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:23 crc kubenswrapper[4754]: I0105 21:31:23.862888 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-5c7d94bdc4-k9968" podUID="163085a0-0b43-4d21-aefc-ec28ba9c6e3f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.47:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:23 crc kubenswrapper[4754]: I0105 21:31:23.877916 4754 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-pjzck container/openshift-config-operator 
namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:23 crc kubenswrapper[4754]: I0105 21:31:23.877977 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pjzck" podUID="9f282a77-59b3-4b2c-8c62-a2526d2a77b5" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:24 crc kubenswrapper[4754]: I0105 21:31:24.136042 4754 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:24 crc kubenswrapper[4754]: I0105 21:31:24.136099 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:24 crc kubenswrapper[4754]: I0105 21:31:24.265044 4754 patch_prober.go:28] interesting pod/metrics-server-9b97c7f7b-p9nz9 container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.75:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:24 crc kubenswrapper[4754]: I0105 21:31:24.265137 4754 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-9b97c7f7b-p9nz9" podUID="c0ea00d6-9ff0-4a23-8481-369189bdf8f5" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.75:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:24 crc kubenswrapper[4754]: I0105 21:31:24.265206 4754 patch_prober.go:28] interesting pod/metrics-server-9b97c7f7b-p9nz9 container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.75:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:24 crc kubenswrapper[4754]: I0105 21:31:24.265219 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-9b97c7f7b-p9nz9" podUID="c0ea00d6-9ff0-4a23-8481-369189bdf8f5" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.75:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:24 crc kubenswrapper[4754]: I0105 21:31:24.647697 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-79f77fb8f4-2q9lk" podUID="823e1e7d-9555-4324-a7aa-6add85d4d9f3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.91:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:24 crc kubenswrapper[4754]: I0105 21:31:24.647608 4754 patch_prober.go:28] interesting pod/console-6f8c5cd5c9-2vcvq container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.136:8443/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:24 crc kubenswrapper[4754]: I0105 
21:31:24.648347 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-6f8c5cd5c9-2vcvq" podUID="b9e545f1-ae96-4643-9412-10ada69b1b72" containerName="console" probeResult="failure" output="Get \"https://10.217.0.136:8443/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:24 crc kubenswrapper[4754]: I0105 21:31:24.709526 4754 patch_prober.go:28] interesting pod/monitoring-plugin-68874c8d69-scmxl container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.77:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:24 crc kubenswrapper[4754]: I0105 21:31:24.709598 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-68874c8d69-scmxl" podUID="f31e8d59-677f-4203-9104-bc930b45022e" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.77:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:24 crc kubenswrapper[4754]: I0105 21:31:24.877927 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_bd134f60-e97c-487b-be9e-c356c7478c21/ovn-northd/0.log" Jan 05 21:31:24 crc kubenswrapper[4754]: I0105 21:31:24.877985 4754 generic.go:334] "Generic (PLEG): container finished" podID="bd134f60-e97c-487b-be9e-c356c7478c21" containerID="709626ef98220cf0bc8eb27d6bcbf3f4c1469b194ad468a0f28360d3279bc5f6" exitCode=139 Jan 05 21:31:24 crc kubenswrapper[4754]: I0105 21:31:24.878015 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"bd134f60-e97c-487b-be9e-c356c7478c21","Type":"ContainerDied","Data":"709626ef98220cf0bc8eb27d6bcbf3f4c1469b194ad468a0f28360d3279bc5f6"} Jan 05 21:31:24 crc kubenswrapper[4754]: I0105 
21:31:24.940665 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-75f4999fb9-2ss2h" podUID="cdb2b1f2-eb13-466c-bc69-5cb4307eb695" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.93:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:24 crc kubenswrapper[4754]: I0105 21:31:24.940989 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-75f4999fb9-2ss2h" podUID="cdb2b1f2-eb13-466c-bc69-5cb4307eb695" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.93:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:24 crc kubenswrapper[4754]: I0105 21:31:24.941034 4754 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-pjzck container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 05 21:31:24 crc kubenswrapper[4754]: I0105 21:31:24.941193 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pjzck" podUID="9f282a77-59b3-4b2c-8c62-a2526d2a77b5" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 05 21:31:25 crc kubenswrapper[4754]: I0105 21:31:25.018969 4754 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lcnw9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:25 crc 
kubenswrapper[4754]: I0105 21:31:25.019008 4754 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lcnw9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:25 crc kubenswrapper[4754]: I0105 21:31:25.019075 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lcnw9" podUID="1c252969-10b1-47d5-afef-b58cb4895766" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:25 crc kubenswrapper[4754]: I0105 21:31:25.019109 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lcnw9" podUID="1c252969-10b1-47d5-afef-b58cb4895766" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:25 crc kubenswrapper[4754]: I0105 21:31:25.020151 4754 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-5zkp6 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:25 crc kubenswrapper[4754]: I0105 21:31:25.020180 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5zkp6" podUID="3e55c0bf-988b-4b2e-b44b-2343b48ff9f8" containerName="catalog-operator" probeResult="failure" output="Get 
\"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:25 crc kubenswrapper[4754]: I0105 21:31:25.020243 4754 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-5zkp6 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:25 crc kubenswrapper[4754]: I0105 21:31:25.020262 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5zkp6" podUID="3e55c0bf-988b-4b2e-b44b-2343b48ff9f8" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:25 crc kubenswrapper[4754]: I0105 21:31:25.044741 4754 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-sckmz container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": context deadline exceeded" start-of-body= Jan 05 21:31:25 crc kubenswrapper[4754]: I0105 21:31:25.044798 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sckmz" podUID="f39a32ad-c337-4f04-b2ac-9a55729d7d4c" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": context deadline exceeded" Jan 05 21:31:25 crc kubenswrapper[4754]: I0105 21:31:25.044811 4754 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-sckmz container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": 
net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:25 crc kubenswrapper[4754]: I0105 21:31:25.044855 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sckmz" podUID="f39a32ad-c337-4f04-b2ac-9a55729d7d4c" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:25 crc kubenswrapper[4754]: I0105 21:31:25.138468 4754 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-mzj9s container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:25 crc kubenswrapper[4754]: I0105 21:31:25.138482 4754 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-mzj9s container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:25 crc kubenswrapper[4754]: I0105 21:31:25.138620 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mzj9s" podUID="6706045f-88d8-4afd-867a-d0560b8fb9e0" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:25 crc kubenswrapper[4754]: I0105 21:31:25.138535 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mzj9s" 
podUID="6706045f-88d8-4afd-867a-d0560b8fb9e0" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:25 crc kubenswrapper[4754]: I0105 21:31:25.308465 4754 patch_prober.go:28] interesting pod/router-default-5444994796-vnrxg container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:25 crc kubenswrapper[4754]: I0105 21:31:25.308757 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-vnrxg" podUID="16642695-5006-4f7d-829c-becc9345dd6e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:25 crc kubenswrapper[4754]: I0105 21:31:25.308515 4754 patch_prober.go:28] interesting pod/router-default-5444994796-vnrxg container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:25 crc kubenswrapper[4754]: I0105 21:31:25.308823 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-vnrxg" podUID="16642695-5006-4f7d-829c-becc9345dd6e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:25 crc kubenswrapper[4754]: I0105 21:31:25.318129 4754 patch_prober.go:28] interesting pod/nmstate-webhook-f8fb84555-ht8zq container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.87:9443/readyz\": 
net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:25 crc kubenswrapper[4754]: I0105 21:31:25.318164 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-f8fb84555-ht8zq" podUID="eb9a96b4-392b-43f5-ad59-4e7cd4171f33" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.87:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:25 crc kubenswrapper[4754]: I0105 21:31:25.584188 4754 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 05 21:31:25 crc kubenswrapper[4754]: I0105 21:31:25.584252 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 05 21:31:25 crc kubenswrapper[4754]: E0105 21:31:25.594918 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 709626ef98220cf0bc8eb27d6bcbf3f4c1469b194ad468a0f28360d3279bc5f6 is running failed: container process not found" containerID="709626ef98220cf0bc8eb27d6bcbf3f4c1469b194ad468a0f28360d3279bc5f6" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 05 21:31:25 crc kubenswrapper[4754]: E0105 21:31:25.598516 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking 
if PID of 709626ef98220cf0bc8eb27d6bcbf3f4c1469b194ad468a0f28360d3279bc5f6 is running failed: container process not found" containerID="709626ef98220cf0bc8eb27d6bcbf3f4c1469b194ad468a0f28360d3279bc5f6" cmd=["/usr/local/bin/container-scripts/status_check.sh"]
Jan 05 21:31:25 crc kubenswrapper[4754]: E0105 21:31:25.603150 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 709626ef98220cf0bc8eb27d6bcbf3f4c1469b194ad468a0f28360d3279bc5f6 is running failed: container process not found" containerID="709626ef98220cf0bc8eb27d6bcbf3f4c1469b194ad468a0f28360d3279bc5f6" cmd=["/usr/local/bin/container-scripts/status_check.sh"]
Jan 05 21:31:25 crc kubenswrapper[4754]: E0105 21:31:25.603231 4754 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 709626ef98220cf0bc8eb27d6bcbf3f4c1469b194ad468a0f28360d3279bc5f6 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="bd134f60-e97c-487b-be9e-c356c7478c21" containerName="ovn-northd"
Jan 05 21:31:25 crc kubenswrapper[4754]: I0105 21:31:25.752578 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-qcjf8" podUID="957087f5-55fd-4a40-a01c-f96bf31dacf8" containerName="nmstate-handler" probeResult="failure" output="command timed out"
Jan 05 21:31:25 crc kubenswrapper[4754]: I0105 21:31:25.895565 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-944qj" event={"ID":"4d09717a-7822-46ae-8192-62aa7305304b","Type":"ContainerStarted","Data":"70fc28636be70f788f74d97678f268657257efb1fa9334d25a1d5130544f97f3"}
Jan 05 21:31:25 crc kubenswrapper[4754]: I0105 21:31:25.896041 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-944qj"
Jan 05 21:31:26 crc kubenswrapper[4754]: I0105 21:31:26.027507 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-gkzx4" podUID="c584577b-8f80-4506-9fa5-3f8e9df40f02" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:26 crc kubenswrapper[4754]: I0105 21:31:26.060559 4754 patch_prober.go:28] interesting pod/thanos-querier-869b668f44-6cplm container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.74:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 05 21:31:26 crc kubenswrapper[4754]: I0105 21:31:26.060617 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-869b668f44-6cplm" podUID="0597c753-3003-4d69-ad03-7e12215f7274" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.74:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:26 crc kubenswrapper[4754]: I0105 21:31:26.111462 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-gkzx4" podUID="c584577b-8f80-4506-9fa5-3f8e9df40f02" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:26 crc kubenswrapper[4754]: I0105 21:31:26.111692 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-gkzx4" podUID="c584577b-8f80-4506-9fa5-3f8e9df40f02" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:26 crc kubenswrapper[4754]: I0105 21:31:26.111510 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-f7vfs" podUID="1b800ad7-0ece-4722-9ad1-e20a2b5c7d42" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.94:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:26 crc kubenswrapper[4754]: I0105 21:31:26.193561 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-5bddd4b946-b62cb" podUID="2f15c30e-3828-471c-8e71-3573735397a1" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.95:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:26 crc kubenswrapper[4754]: I0105 21:31:26.193650 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-f7vfs" podUID="1b800ad7-0ece-4722-9ad1-e20a2b5c7d42" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.94:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:26 crc kubenswrapper[4754]: I0105 21:31:26.276527 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-operator-7bb596d4b9-gbtbf" podUID="6836e11d-3e01-4752-ba84-0ba74829283f" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.99:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:26 crc kubenswrapper[4754]: I0105 21:31:26.276527 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-5bddd4b946-b62cb" podUID="2f15c30e-3828-471c-8e71-3573735397a1" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.95:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:26 crc kubenswrapper[4754]: I0105 21:31:26.276766 4754 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 05 21:31:26 crc kubenswrapper[4754]: I0105 21:31:26.276816 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:26 crc kubenswrapper[4754]: I0105 21:31:26.276877 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-operator-7bb596d4b9-gbtbf" podUID="6836e11d-3e01-4752-ba84-0ba74829283f" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.99:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:26 crc kubenswrapper[4754]: I0105 21:31:26.599545 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7cbd467bc4-dt25q" podUID="2516e3b9-fbb3-4341-ba30-837bc79225aa" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.202:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:26 crc kubenswrapper[4754]: I0105 21:31:26.599679 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7cbd467bc4-dt25q" podUID="2516e3b9-fbb3-4341-ba30-837bc79225aa" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.202:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:26 crc kubenswrapper[4754]: I0105 21:31:26.600196 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7cbd467bc4-dt25q" podUID="2516e3b9-fbb3-4341-ba30-837bc79225aa" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.202:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:26 crc kubenswrapper[4754]: I0105 21:31:26.600440 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7cbd467bc4-dt25q" podUID="2516e3b9-fbb3-4341-ba30-837bc79225aa" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.202:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:26 crc kubenswrapper[4754]: I0105 21:31:26.631225 4754 patch_prober.go:28] interesting pod/controller-manager-66dd56655b-c2cjs container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 05 21:31:26 crc kubenswrapper[4754]: I0105 21:31:26.631282 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-66dd56655b-c2cjs" podUID="26564ac3-15de-49b9-8ba9-d7075ba66cb5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:26 crc kubenswrapper[4754]: I0105 21:31:26.631369 4754 patch_prober.go:28] interesting pod/controller-manager-66dd56655b-c2cjs container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 05 21:31:26 crc kubenswrapper[4754]: I0105 21:31:26.631389 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-66dd56655b-c2cjs" podUID="26564ac3-15de-49b9-8ba9-d7075ba66cb5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:26 crc kubenswrapper[4754]: I0105 21:31:26.640630 4754 patch_prober.go:28] interesting pod/route-controller-manager-7bb885d9c5-txdbv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 05 21:31:26 crc kubenswrapper[4754]: I0105 21:31:26.640674 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7bb885d9c5-txdbv" podUID="a7261372-b88c-4c55-bd6e-48fd2dd88614" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:26 crc kubenswrapper[4754]: I0105 21:31:26.640696 4754 patch_prober.go:28] interesting pod/route-controller-manager-7bb885d9c5-txdbv container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 05 21:31:26 crc kubenswrapper[4754]: I0105 21:31:26.640768 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-7bb885d9c5-txdbv" podUID="a7261372-b88c-4c55-bd6e-48fd2dd88614" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:26 crc kubenswrapper[4754]: I0105 21:31:26.753905 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="99d5dfe0-666a-44dc-b93c-5c92ef395bcc" containerName="prometheus" probeResult="failure" output="command timed out"
Jan 05 21:31:26 crc kubenswrapper[4754]: I0105 21:31:26.966451 4754 patch_prober.go:28] interesting pod/logging-loki-distributor-5f678c8dd6-pd7zx container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 05 21:31:26 crc kubenswrapper[4754]: I0105 21:31:26.966522 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-pd7zx" podUID="4ebdfefc-77a3-4dca-a664-5468209724ec" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:27 crc kubenswrapper[4754]: I0105 21:31:27.066578 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-h2wmj" podUID="2aeeabff-cc4c-49b1-a895-c21ae9d43e3d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.101:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:27 crc kubenswrapper[4754]: I0105 21:31:27.096338 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-8t5gl" podUID="83a7e6b7-db24-4f2f-988d-ed13a27a06af" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.100:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:27 crc kubenswrapper[4754]: I0105 21:31:27.096351 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-6lf69" podUID="1f664632-a6e1-491d-b0cf-be1717a6d28b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:27 crc kubenswrapper[4754]: I0105 21:31:27.138555 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-7b945" podUID="f1a3a024-3293-4e7b-b1cd-c93c914c190e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:27 crc kubenswrapper[4754]: I0105 21:31:27.138700 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-7b945"
Jan 05 21:31:27 crc kubenswrapper[4754]: I0105 21:31:27.179578 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-dqks7" podUID="f289c3c4-ad02-4022-ac22-239133f6c1ca" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:27 crc kubenswrapper[4754]: I0105 21:31:27.179788 4754 patch_prober.go:28] interesting pod/logging-loki-querier-76788598db-qfwrj container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 05 21:31:27 crc kubenswrapper[4754]: I0105 21:31:27.179812 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76788598db-qfwrj" podUID="d8b747f5-71f0-48b5-aae8-375ef3d8ef00" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:27 crc kubenswrapper[4754]: I0105 21:31:27.299093 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log"
Jan 05 21:31:27 crc kubenswrapper[4754]: I0105 21:31:27.308528 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 05 21:31:27 crc kubenswrapper[4754]: I0105 21:31:27.308587 4754 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="16c003911fd82847b695b228c89924fa27dc474cc73f3e4467a06b6f94d59934" exitCode=1
Jan 05 21:31:27 crc kubenswrapper[4754]: I0105 21:31:27.308831 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"16c003911fd82847b695b228c89924fa27dc474cc73f3e4467a06b6f94d59934"}
Jan 05 21:31:27 crc kubenswrapper[4754]: I0105 21:31:27.308896 4754 scope.go:117] "RemoveContainer" containerID="6c242d9981c62ac494864a1509489965653e71656102883010a94a74b34d0360"
Jan 05 21:31:27 crc kubenswrapper[4754]: I0105 21:31:27.309991 4754 scope.go:117] "RemoveContainer" containerID="16c003911fd82847b695b228c89924fa27dc474cc73f3e4467a06b6f94d59934"
Jan 05 21:31:27 crc kubenswrapper[4754]: I0105 21:31:27.328506 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-brnwf" podUID="dd93e799-6591-41d5-988a-18cc6d8c836d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:27 crc kubenswrapper[4754]: I0105 21:31:27.328781 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-69j2s" podUID="983e4f4a-fe90-4460-ad97-b6955a888933" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:27 crc kubenswrapper[4754]: I0105 21:31:27.369877 4754 patch_prober.go:28] interesting pod/logging-loki-query-frontend-69d9546745-swdgc container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 05 21:31:27 crc kubenswrapper[4754]: I0105 21:31:27.370234 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-69d9546745-swdgc" podUID="dbe68fed-2285-4e97-9c3d-d9fb903dc682" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.54:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:27 crc kubenswrapper[4754]: I0105 21:31:27.370130 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="ffdd4590-0498-4083-996d-75035d8fba10" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.8:8080/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:27 crc kubenswrapper[4754]: I0105 21:31:27.370163 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="ffdd4590-0498-4083-996d-75035d8fba10" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.8:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:27 crc kubenswrapper[4754]: I0105 21:31:27.386442 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-swt4w" podUID="0cd346d8-d14a-404e-b2fa-16fc917e6886" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:27 crc kubenswrapper[4754]: I0105 21:31:27.386481 4754 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Jan 05 21:31:27 crc kubenswrapper[4754]: I0105 21:31:27.386549 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Jan 05 21:31:27 crc kubenswrapper[4754]: I0105 21:31:27.386579 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-swt4w"
Jan 05 21:31:27 crc kubenswrapper[4754]: I0105 21:31:27.386461 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-hdh6j" podUID="83cc207a-0725-4775-b2f7-93c71985ba1e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:27 crc kubenswrapper[4754]: I0105 21:31:27.427662 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-h56wh" podUID="6d71c5c9-f75a-475f-880c-d234d43ad7d9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:27 crc kubenswrapper[4754]: I0105 21:31:27.427739 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="652366fc-9032-455e-9e13-b71fd3ff76e3" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.167:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:27 crc kubenswrapper[4754]: I0105 21:31:27.427662 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="652366fc-9032-455e-9e13-b71fd3ff76e3" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.167:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:27 crc kubenswrapper[4754]: I0105 21:31:27.427800 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-h56wh"
Jan 05 21:31:27 crc kubenswrapper[4754]: I0105 21:31:27.468575 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-jngdn" podUID="db5f9ab8-2422-439c-a857-23f918cfa919" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:27 crc kubenswrapper[4754]: I0105 21:31:27.468575 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-568985c78-st6w9" podUID="91877573-8199-4055-988f-96bd6469af4f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:27 crc kubenswrapper[4754]: I0105 21:31:27.512434 4754 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-ql695 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 05 21:31:27 crc kubenswrapper[4754]: I0105 21:31:27.515269 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ql695" podUID="296f71e4-2a83-467e-b40a-87b1e40330b9" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:27 crc kubenswrapper[4754]: I0105 21:31:27.551530 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-688488f44f-62gsk" podUID="4b33baa5-64bb-4df7-ac22-925d718f9d60" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:27 crc kubenswrapper[4754]: I0105 21:31:27.592904 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-s4n44" podUID="77f4456d-e6a6-466a-a74c-5276e4951784" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:27 crc kubenswrapper[4754]: I0105 21:31:27.619341 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-s4n44"
Jan 05 21:31:27 crc kubenswrapper[4754]: I0105 21:31:27.677572 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-z6whp" podUID="8118d4b3-34f3-49b4-ab29-1a2b17adacfb" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:27 crc kubenswrapper[4754]: I0105 21:31:27.677767 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-z6whp"
Jan 05 21:31:27 crc kubenswrapper[4754]: I0105 21:31:27.678573 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-z6whp" podUID="8118d4b3-34f3-49b4-ab29-1a2b17adacfb" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:27 crc kubenswrapper[4754]: I0105 21:31:27.678666 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/speaker-z6whp"
Jan 05 21:31:27 crc kubenswrapper[4754]: I0105 21:31:27.677561 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-hmwzd" podUID="a6907240-bf3c-4f9a-b7c8-4fbdf3174c8a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:27 crc kubenswrapper[4754]: I0105 21:31:27.678907 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-hmwzd"
Jan 05 21:31:27 crc kubenswrapper[4754]: I0105 21:31:27.682015 4754 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="speaker" containerStatusID={"Type":"cri-o","ID":"c7f4da7936623719fb57b754d729c787677f1fbaa197d4097d4fe5f4ce366b03"} pod="metallb-system/speaker-z6whp" containerMessage="Container speaker failed liveness probe, will be restarted"
Jan 05 21:31:27 crc kubenswrapper[4754]: I0105 21:31:27.682587 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/speaker-z6whp" podUID="8118d4b3-34f3-49b4-ab29-1a2b17adacfb" containerName="speaker" containerID="cri-o://c7f4da7936623719fb57b754d729c787677f1fbaa197d4097d4fe5f4ce366b03" gracePeriod=2
Jan 05 21:31:27 crc kubenswrapper[4754]: I0105 21:31:27.754679 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="99d5dfe0-666a-44dc-b93c-5c92ef395bcc" containerName="prometheus" probeResult="failure" output="command timed out"
Jan 05 21:31:28 crc kubenswrapper[4754]: I0105 21:31:28.057798 4754 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-pjzck container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body=
Jan 05 21:31:28 crc kubenswrapper[4754]: I0105 21:31:28.058173 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pjzck" podUID="9f282a77-59b3-4b2c-8c62-a2526d2a77b5" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused"
Jan 05 21:31:28 crc kubenswrapper[4754]: I0105 21:31:28.059482 4754 patch_prober.go:28] interesting pod/logging-loki-gateway-586cd7f6-k6trg container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8083/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 05 21:31:28 crc kubenswrapper[4754]: I0105 21:31:28.059532 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-586cd7f6-k6trg" podUID="cef8ee76-7c6e-420e-8c38-a7ad816cd513" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:28 crc kubenswrapper[4754]: I0105 21:31:28.071613 4754 patch_prober.go:28] interesting pod/logging-loki-gateway-586cd7f6-c4rps container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 05 21:31:28 crc kubenswrapper[4754]: I0105 21:31:28.071676 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-586cd7f6-c4rps" podUID="85a07def-c26c-49aa-ae32-c7772e9ebecc" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:28 crc kubenswrapper[4754]: I0105 21:31:28.093896 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-s4n44"
Jan 05 21:31:28 crc kubenswrapper[4754]: I0105 21:31:28.180479 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-7b945" podUID="f1a3a024-3293-4e7b-b1cd-c93c914c190e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:28 crc kubenswrapper[4754]: I0105 21:31:28.194188 4754 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 05 21:31:28 crc kubenswrapper[4754]: I0105 21:31:28.194314 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="7f8103c6-3d68-4568-ae3b-89f606aa116a" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.58:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:28 crc kubenswrapper[4754]: I0105 21:31:28.210538 4754 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.76:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 05 21:31:28 crc kubenswrapper[4754]: I0105 21:31:28.210662 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="8970d80f-9277-46ca-ba45-c09e3362c3e2" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.76:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:28 crc kubenswrapper[4754]: I0105 21:31:28.321744 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log"
Jan 05 21:31:28 crc kubenswrapper[4754]: I0105 21:31:28.326658 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_bd134f60-e97c-487b-be9e-c356c7478c21/ovn-northd/0.log"
Jan 05 21:31:28 crc kubenswrapper[4754]: I0105 21:31:28.326749 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"bd134f60-e97c-487b-be9e-c356c7478c21","Type":"ContainerStarted","Data":"b03dee929cc480bb562cbc93bcc14a75e01cfaa21f70ce2425b33a2b469719bc"}
Jan 05 21:31:28 crc kubenswrapper[4754]: I0105 21:31:28.327031 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Jan 05 21:31:28 crc kubenswrapper[4754]: I0105 21:31:28.330103 4754 generic.go:334] "Generic (PLEG): container finished" podID="163085a0-0b43-4d21-aefc-ec28ba9c6e3f" containerID="10e115373b34e09eeb566f287e46ef6ec8653509471c731713d9402b9a1de224" exitCode=1
Jan 05 21:31:28 crc kubenswrapper[4754]: I0105 21:31:28.330214 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5c7d94bdc4-k9968" event={"ID":"163085a0-0b43-4d21-aefc-ec28ba9c6e3f","Type":"ContainerDied","Data":"10e115373b34e09eeb566f287e46ef6ec8653509471c731713d9402b9a1de224"}
Jan 05 21:31:28 crc kubenswrapper[4754]: I0105 21:31:28.331248 4754 scope.go:117] "RemoveContainer" containerID="10e115373b34e09eeb566f287e46ef6ec8653509471c731713d9402b9a1de224"
Jan 05 21:31:28 crc kubenswrapper[4754]: I0105 21:31:28.375541 4754 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.80:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 05 21:31:28 crc kubenswrapper[4754]: I0105 21:31:28.375692 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="85b1b29c-72d4-41ff-8185-f1cd738be7db" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.80:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:28 crc kubenswrapper[4754]: I0105 21:31:28.428511 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-swt4w" podUID="0cd346d8-d14a-404e-b2fa-16fc917e6886" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:28 crc kubenswrapper[4754]: I0105 21:31:28.469664 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-h56wh" podUID="6d71c5c9-f75a-475f-880c-d234d43ad7d9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:28 crc kubenswrapper[4754]: I0105 21:31:28.674489 4754 patch_prober.go:28] interesting pod/oauth-openshift-58444664d6-99b25 container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 05 21:31:28 crc kubenswrapper[4754]: I0105 21:31:28.674588 4754 patch_prober.go:28] interesting pod/oauth-openshift-58444664d6-99b25 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 05 21:31:28 crc kubenswrapper[4754]: I0105 21:31:28.674602 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-58444664d6-99b25" podUID="c82fe254-bc85-4771-b358-017afaff55e9" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:28 crc kubenswrapper[4754]: I0105 21:31:28.674671 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-58444664d6-99b25"
Jan 05 21:31:28 crc kubenswrapper[4754]: I0105 21:31:28.674548 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-58444664d6-99b25" podUID="c82fe254-bc85-4771-b358-017afaff55e9" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:28 crc kubenswrapper[4754]: I0105 21:31:28.674829 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-58444664d6-99b25"
Jan 05 21:31:28 crc kubenswrapper[4754]: I0105 21:31:28.675961 4754 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="oauth-openshift" containerStatusID={"Type":"cri-o","ID":"b644439300225a379828b8227d8178a1729687043266ca303a6bf8041c572bde"} pod="openshift-authentication/oauth-openshift-58444664d6-99b25" containerMessage="Container oauth-openshift failed liveness probe, will be restarted"
Jan 05 21:31:28 crc kubenswrapper[4754]: I0105 21:31:28.761535 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-hmwzd" podUID="a6907240-bf3c-4f9a-b7c8-4fbdf3174c8a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:28 crc kubenswrapper[4754]: I0105 21:31:28.761583 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-z6whp" podUID="8118d4b3-34f3-49b4-ab29-1a2b17adacfb" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 05 21:31:29 crc kubenswrapper[4754]: I0105 21:31:29.342831 4754 generic.go:334] "Generic (PLEG): container finished" podID="823e1e7d-9555-4324-a7aa-6add85d4d9f3" containerID="413cbf61be6bacef7d9f7b10f4cde04fce84347e4feaffff105035ed04057174" exitCode=1
Jan 05 21:31:29 crc kubenswrapper[4754]: I0105 21:31:29.342875 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-79f77fb8f4-2q9lk" event={"ID":"823e1e7d-9555-4324-a7aa-6add85d4d9f3","Type":"ContainerDied","Data":"413cbf61be6bacef7d9f7b10f4cde04fce84347e4feaffff105035ed04057174"}
Jan 05 21:31:29 crc kubenswrapper[4754]: I0105 21:31:29.344671 4754 scope.go:117] "RemoveContainer" containerID="413cbf61be6bacef7d9f7b10f4cde04fce84347e4feaffff105035ed04057174"
Jan 05 21:31:29 crc kubenswrapper[4754]: I0105 21:31:29.345884 4754 generic.go:334] "Generic (PLEG): container finished" podID="dd93e799-6591-41d5-988a-18cc6d8c836d" containerID="b25c05ad220d914d300af45ef6a721af4089545b621dbfb2be14dc7924c61c8b" exitCode=1
Jan 05 21:31:29 crc kubenswrapper[4754]: I0105 21:31:29.347724 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-brnwf" event={"ID":"dd93e799-6591-41d5-988a-18cc6d8c836d","Type":"ContainerDied","Data":"b25c05ad220d914d300af45ef6a721af4089545b621dbfb2be14dc7924c61c8b"}
Jan 05 21:31:29 crc kubenswrapper[4754]: I0105 21:31:29.348379 4754 scope.go:117] "RemoveContainer" containerID="b25c05ad220d914d300af45ef6a721af4089545b621dbfb2be14dc7924c61c8b"
Jan 05 21:31:29 crc
kubenswrapper[4754]: I0105 21:31:29.679507 4754 patch_prober.go:28] interesting pod/oauth-openshift-58444664d6-99b25 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:29 crc kubenswrapper[4754]: I0105 21:31:29.679578 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-58444664d6-99b25" podUID="c82fe254-bc85-4771-b358-017afaff55e9" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:29 crc kubenswrapper[4754]: I0105 21:31:29.752798 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="99d5dfe0-666a-44dc-b93c-5c92ef395bcc" containerName="prometheus" probeResult="failure" output="command timed out" Jan 05 21:31:29 crc kubenswrapper[4754]: I0105 21:31:29.752803 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="9af784f4-79c9-4422-bc62-a2c49c9bb7cc" containerName="galera" probeResult="failure" output="command timed out" Jan 05 21:31:29 crc kubenswrapper[4754]: I0105 21:31:29.825198 4754 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-cv74d container/registry namespace/openshift-image-registry: Liveness probe status=failure output="Get \"https://10.217.0.60:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:29 crc kubenswrapper[4754]: I0105 21:31:29.825264 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66df7c8f76-cv74d" podUID="3db77f8c-170e-42a1-a6a5-ae5188f0c8d2" containerName="registry" probeResult="failure" output="Get 
\"https://10.217.0.60:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:29 crc kubenswrapper[4754]: I0105 21:31:29.825344 4754 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-cv74d container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.60:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:29 crc kubenswrapper[4754]: I0105 21:31:29.825362 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66df7c8f76-cv74d" podUID="3db77f8c-170e-42a1-a6a5-ae5188f0c8d2" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.60:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:30 crc kubenswrapper[4754]: I0105 21:31:30.139567 4754 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-679tw container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:30 crc kubenswrapper[4754]: I0105 21:31:30.139622 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-679tw" podUID="c0d4f5db-f43e-4812-8e5b-5f1efcdcb913" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:30 crc kubenswrapper[4754]: I0105 21:31:30.139664 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-679tw" Jan 05 21:31:30 crc kubenswrapper[4754]: I0105 21:31:30.139709 4754 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-679tw container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:30 crc kubenswrapper[4754]: I0105 21:31:30.139759 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-679tw" podUID="c0d4f5db-f43e-4812-8e5b-5f1efcdcb913" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:30 crc kubenswrapper[4754]: I0105 21:31:30.139825 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-679tw" Jan 05 21:31:30 crc kubenswrapper[4754]: I0105 21:31:30.140957 4754 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="prometheus-operator-admission-webhook" containerStatusID={"Type":"cri-o","ID":"cb6fdab77c6e4bf3f91368d83ea8c587a0648cf6c260160c98eff086b1dbc950"} pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-679tw" containerMessage="Container prometheus-operator-admission-webhook failed liveness probe, will be restarted" Jan 05 21:31:30 crc kubenswrapper[4754]: I0105 21:31:30.140994 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-679tw" podUID="c0d4f5db-f43e-4812-8e5b-5f1efcdcb913" containerName="prometheus-operator-admission-webhook" 
containerID="cri-o://cb6fdab77c6e4bf3f91368d83ea8c587a0648cf6c260160c98eff086b1dbc950" gracePeriod=30 Jan 05 21:31:30 crc kubenswrapper[4754]: I0105 21:31:30.268510 4754 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-27lzx container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.16:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:30 crc kubenswrapper[4754]: I0105 21:31:30.268881 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-27lzx" podUID="96d98210-f390-413e-8fe4-96ec610d2071" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.16:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:30 crc kubenswrapper[4754]: I0105 21:31:30.268744 4754 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-27lzx container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.16:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:30 crc kubenswrapper[4754]: I0105 21:31:30.268943 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-27lzx" podUID="96d98210-f390-413e-8fe4-96ec610d2071" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.16:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:30 crc kubenswrapper[4754]: I0105 21:31:30.359599 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-brnwf" event={"ID":"dd93e799-6591-41d5-988a-18cc6d8c836d","Type":"ContainerStarted","Data":"37d88f436a0725c5686e009e9a165aaa9bcdedcee1703f838fe66f8b6af6a818"} Jan 05 21:31:30 
crc kubenswrapper[4754]: I0105 21:31:30.359794 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-brnwf" Jan 05 21:31:30 crc kubenswrapper[4754]: I0105 21:31:30.361331 4754 generic.go:334] "Generic (PLEG): container finished" podID="77f4456d-e6a6-466a-a74c-5276e4951784" containerID="3fe1bfe9f98fa4537183d205755d99138021e7c45fe493ece4d2ab1654c7ee6d" exitCode=1 Jan 05 21:31:30 crc kubenswrapper[4754]: I0105 21:31:30.361350 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-s4n44" event={"ID":"77f4456d-e6a6-466a-a74c-5276e4951784","Type":"ContainerDied","Data":"3fe1bfe9f98fa4537183d205755d99138021e7c45fe493ece4d2ab1654c7ee6d"} Jan 05 21:31:30 crc kubenswrapper[4754]: I0105 21:31:30.361784 4754 scope.go:117] "RemoveContainer" containerID="3fe1bfe9f98fa4537183d205755d99138021e7c45fe493ece4d2ab1654c7ee6d" Jan 05 21:31:30 crc kubenswrapper[4754]: I0105 21:31:30.363514 4754 generic.go:334] "Generic (PLEG): container finished" podID="8df02427-4d10-41bb-9798-82cf7b8bca3e" containerID="f3871a2568fdfc53c29662aa73b73d4e3929b580d806eece1d907e492dace215" exitCode=1 Jan 05 21:31:30 crc kubenswrapper[4754]: I0105 21:31:30.363597 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-gj5xk" event={"ID":"8df02427-4d10-41bb-9798-82cf7b8bca3e","Type":"ContainerDied","Data":"f3871a2568fdfc53c29662aa73b73d4e3929b580d806eece1d907e492dace215"} Jan 05 21:31:30 crc kubenswrapper[4754]: I0105 21:31:30.364329 4754 scope.go:117] "RemoveContainer" containerID="f3871a2568fdfc53c29662aa73b73d4e3929b580d806eece1d907e492dace215" Jan 05 21:31:30 crc kubenswrapper[4754]: I0105 21:31:30.365955 4754 generic.go:334] "Generic (PLEG): container finished" podID="fed06176-d7ad-4373-84df-204b6fdbf5cf" containerID="536cba47c873bc028f6648e962165f695729ebcbe96069f72858d8758fcd080b" 
exitCode=1 Jan 05 21:31:30 crc kubenswrapper[4754]: I0105 21:31:30.366022 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd72b9pp" event={"ID":"fed06176-d7ad-4373-84df-204b6fdbf5cf","Type":"ContainerDied","Data":"536cba47c873bc028f6648e962165f695729ebcbe96069f72858d8758fcd080b"} Jan 05 21:31:30 crc kubenswrapper[4754]: I0105 21:31:30.366945 4754 scope.go:117] "RemoveContainer" containerID="536cba47c873bc028f6648e962165f695729ebcbe96069f72858d8758fcd080b" Jan 05 21:31:30 crc kubenswrapper[4754]: I0105 21:31:30.370281 4754 generic.go:334] "Generic (PLEG): container finished" podID="8118d4b3-34f3-49b4-ab29-1a2b17adacfb" containerID="c7f4da7936623719fb57b754d729c787677f1fbaa197d4097d4fe5f4ce366b03" exitCode=137 Jan 05 21:31:30 crc kubenswrapper[4754]: I0105 21:31:30.370330 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-z6whp" event={"ID":"8118d4b3-34f3-49b4-ab29-1a2b17adacfb","Type":"ContainerDied","Data":"c7f4da7936623719fb57b754d729c787677f1fbaa197d4097d4fe5f4ce366b03"} Jan 05 21:31:30 crc kubenswrapper[4754]: I0105 21:31:30.373110 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 05 21:31:30 crc kubenswrapper[4754]: I0105 21:31:30.374026 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cf78458959568b19171a1bf1612e6edc30abfd963bd2184528c08f5d50c896bd"} Jan 05 21:31:30 crc kubenswrapper[4754]: I0105 21:31:30.388686 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-679tw" Jan 05 21:31:30 crc kubenswrapper[4754]: I0105 21:31:30.390923 4754 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Jan 05 21:31:30 crc kubenswrapper[4754]: I0105 21:31:30.696622 4754 trace.go:236] Trace[1535362107]: "Calculate volume metrics of storage for pod openshift-logging/logging-loki-index-gateway-0" (05-Jan-2026 21:31:20.391) (total time: 10303ms): Jan 05 21:31:30 crc kubenswrapper[4754]: Trace[1535362107]: [10.30325157s] [10.30325157s] END Jan 05 21:31:30 crc kubenswrapper[4754]: I0105 21:31:30.700253 4754 trace.go:236] Trace[674868526]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-server-0" (05-Jan-2026 21:31:24.407) (total time: 6292ms): Jan 05 21:31:30 crc kubenswrapper[4754]: Trace[674868526]: [6.292389407s] [6.292389407s] END Jan 05 21:31:30 crc kubenswrapper[4754]: I0105 21:31:30.756940 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-nvq9f" podUID="42f4b8f4-dfaf-400f-ac27-55d9ca363057" containerName="registry-server" probeResult="failure" output="command timed out" Jan 05 21:31:30 crc kubenswrapper[4754]: I0105 21:31:30.757401 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-zv7sc" podUID="3f2e87f2-d218-4699-81bb-6156676884d3" containerName="registry-server" probeResult="failure" output="command timed out" Jan 05 21:31:30 crc kubenswrapper[4754]: I0105 21:31:30.757638 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-zv7sc" podUID="3f2e87f2-d218-4699-81bb-6156676884d3" containerName="registry-server" probeResult="failure" output="command timed out" Jan 05 21:31:30 crc kubenswrapper[4754]: I0105 21:31:30.877979 4754 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-pjzck container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: 
connection refused" start-of-body= Jan 05 21:31:30 crc kubenswrapper[4754]: I0105 21:31:30.878062 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pjzck" podUID="9f282a77-59b3-4b2c-8c62-a2526d2a77b5" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 05 21:31:31 crc kubenswrapper[4754]: I0105 21:31:31.173482 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-njtd4" podUID="03f29d3f-9221-484d-aa70-8889d57f7de1" containerName="hostpath-provisioner" probeResult="failure" output="Get \"http://10.217.0.42:9898/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:31 crc kubenswrapper[4754]: I0105 21:31:31.397479 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5c7d94bdc4-k9968" event={"ID":"163085a0-0b43-4d21-aefc-ec28ba9c6e3f","Type":"ContainerStarted","Data":"6b28b3902967a89e15d9582a4f97fff3076c04709853303b968f6689327a8981"} Jan 05 21:31:31 crc kubenswrapper[4754]: I0105 21:31:31.398186 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-5c7d94bdc4-k9968" Jan 05 21:31:31 crc kubenswrapper[4754]: I0105 21:31:31.407666 4754 generic.go:334] "Generic (PLEG): container finished" podID="736d23ce-6bc0-439b-b1ff-86aad6363c2a" containerID="4a99c4667c62c4980348e28c5bf9ac8f03a478df5265ad77f3cf067934916134" exitCode=1 Jan 05 21:31:31 crc kubenswrapper[4754]: I0105 21:31:31.407751 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-2g75d" 
event={"ID":"736d23ce-6bc0-439b-b1ff-86aad6363c2a","Type":"ContainerDied","Data":"4a99c4667c62c4980348e28c5bf9ac8f03a478df5265ad77f3cf067934916134"} Jan 05 21:31:31 crc kubenswrapper[4754]: I0105 21:31:31.408900 4754 scope.go:117] "RemoveContainer" containerID="4a99c4667c62c4980348e28c5bf9ac8f03a478df5265ad77f3cf067934916134" Jan 05 21:31:31 crc kubenswrapper[4754]: I0105 21:31:31.414594 4754 generic.go:334] "Generic (PLEG): container finished" podID="6836e11d-3e01-4752-ba84-0ba74829283f" containerID="4babf78291ed020b6cef72a90c1fc8aa540735309e1447e38eaf31727c9173da" exitCode=1 Jan 05 21:31:31 crc kubenswrapper[4754]: I0105 21:31:31.414656 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7bb596d4b9-gbtbf" event={"ID":"6836e11d-3e01-4752-ba84-0ba74829283f","Type":"ContainerDied","Data":"4babf78291ed020b6cef72a90c1fc8aa540735309e1447e38eaf31727c9173da"} Jan 05 21:31:31 crc kubenswrapper[4754]: I0105 21:31:31.416839 4754 scope.go:117] "RemoveContainer" containerID="4babf78291ed020b6cef72a90c1fc8aa540735309e1447e38eaf31727c9173da" Jan 05 21:31:31 crc kubenswrapper[4754]: I0105 21:31:31.427177 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-gj5xk" event={"ID":"8df02427-4d10-41bb-9798-82cf7b8bca3e","Type":"ContainerStarted","Data":"9df1066efdcb42585acde390608154707256af73357949671dbf184ab4e9cbf8"} Jan 05 21:31:31 crc kubenswrapper[4754]: I0105 21:31:31.427562 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-gj5xk" Jan 05 21:31:31 crc kubenswrapper[4754]: I0105 21:31:31.431544 4754 generic.go:334] "Generic (PLEG): container finished" podID="29d3a96c-7dee-4a63-945c-3fef7cdcc7e7" containerID="63537bfe51d55a5b54eb18944d0ef8460a8a5940f54d6d4aa803ec1032e52ae7" exitCode=1 Jan 05 21:31:31 crc kubenswrapper[4754]: I0105 21:31:31.431659 4754 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vfsjd" event={"ID":"29d3a96c-7dee-4a63-945c-3fef7cdcc7e7","Type":"ContainerDied","Data":"63537bfe51d55a5b54eb18944d0ef8460a8a5940f54d6d4aa803ec1032e52ae7"} Jan 05 21:31:31 crc kubenswrapper[4754]: I0105 21:31:31.432693 4754 scope.go:117] "RemoveContainer" containerID="63537bfe51d55a5b54eb18944d0ef8460a8a5940f54d6d4aa803ec1032e52ae7" Jan 05 21:31:31 crc kubenswrapper[4754]: I0105 21:31:31.436233 4754 generic.go:334] "Generic (PLEG): container finished" podID="92928a21-7bbe-44b9-9d2b-2fcce8d0dd1f" containerID="d8011ec926523491c5e9b357df426940fc1e414b8809462800a37598c11515e4" exitCode=1 Jan 05 21:31:31 crc kubenswrapper[4754]: I0105 21:31:31.436283 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-lrhk6" event={"ID":"92928a21-7bbe-44b9-9d2b-2fcce8d0dd1f","Type":"ContainerDied","Data":"d8011ec926523491c5e9b357df426940fc1e414b8809462800a37598c11515e4"} Jan 05 21:31:31 crc kubenswrapper[4754]: I0105 21:31:31.438939 4754 scope.go:117] "RemoveContainer" containerID="d8011ec926523491c5e9b357df426940fc1e414b8809462800a37598c11515e4" Jan 05 21:31:31 crc kubenswrapper[4754]: I0105 21:31:31.448105 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-s4n44" event={"ID":"77f4456d-e6a6-466a-a74c-5276e4951784","Type":"ContainerStarted","Data":"11bfeff337750d4a08138dee31920879dff011f466f8d8a29a520acb071c1f37"} Jan 05 21:31:31 crc kubenswrapper[4754]: I0105 21:31:31.450873 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-s4n44" Jan 05 21:31:31 crc kubenswrapper[4754]: I0105 21:31:31.452618 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-79f77fb8f4-2q9lk" 
event={"ID":"823e1e7d-9555-4324-a7aa-6add85d4d9f3","Type":"ContainerStarted","Data":"4555d239d81f76445bceb3039e0152a88a2f3574e2a452967ee813e8449954ec"} Jan 05 21:31:31 crc kubenswrapper[4754]: I0105 21:31:31.453223 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-79f77fb8f4-2q9lk" Jan 05 21:31:31 crc kubenswrapper[4754]: I0105 21:31:31.456217 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd72b9pp" event={"ID":"fed06176-d7ad-4373-84df-204b6fdbf5cf","Type":"ContainerStarted","Data":"8a6ec8e0d9ba5c5b891fbcc661436e5f7cc2c899145c02b30b86861de56ecf58"} Jan 05 21:31:31 crc kubenswrapper[4754]: I0105 21:31:31.456410 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd72b9pp" Jan 05 21:31:31 crc kubenswrapper[4754]: I0105 21:31:31.461227 4754 generic.go:334] "Generic (PLEG): container finished" podID="c0d4f5db-f43e-4812-8e5b-5f1efcdcb913" containerID="cb6fdab77c6e4bf3f91368d83ea8c587a0648cf6c260160c98eff086b1dbc950" exitCode=0 Jan 05 21:31:31 crc kubenswrapper[4754]: I0105 21:31:31.461751 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-679tw" event={"ID":"c0d4f5db-f43e-4812-8e5b-5f1efcdcb913","Type":"ContainerDied","Data":"cb6fdab77c6e4bf3f91368d83ea8c587a0648cf6c260160c98eff086b1dbc950"} Jan 05 21:31:31 crc kubenswrapper[4754]: I0105 21:31:31.461813 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-679tw" Jan 05 21:31:31 crc kubenswrapper[4754]: I0105 21:31:31.463952 4754 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-679tw container/prometheus-operator-admission-webhook namespace/openshift-monitoring: 
Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" start-of-body= Jan 05 21:31:31 crc kubenswrapper[4754]: I0105 21:31:31.463987 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-679tw" podUID="c0d4f5db-f43e-4812-8e5b-5f1efcdcb913" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" Jan 05 21:31:31 crc kubenswrapper[4754]: I0105 21:31:31.656976 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-2g75d" Jan 05 21:31:31 crc kubenswrapper[4754]: I0105 21:31:31.657039 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-2g75d" Jan 05 21:31:31 crc kubenswrapper[4754]: I0105 21:31:31.754628 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="81aaed05-ed65-4414-bf4f-7e5e4cf9966a" containerName="galera" probeResult="failure" output="command timed out" Jan 05 21:31:31 crc kubenswrapper[4754]: I0105 21:31:31.754994 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 05 21:31:31 crc kubenswrapper[4754]: I0105 21:31:31.756404 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="81aaed05-ed65-4414-bf4f-7e5e4cf9966a" containerName="galera" probeResult="failure" output="command timed out" Jan 05 21:31:31 crc kubenswrapper[4754]: I0105 21:31:31.756468 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 05 21:31:31 crc kubenswrapper[4754]: I0105 21:31:31.756739 4754 prober.go:107] 
"Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-n9kmf" podUID="187150cd-d7a9-4dfd-8151-0e6a88e82ddc" containerName="registry-server" probeResult="failure" output="command timed out" Jan 05 21:31:31 crc kubenswrapper[4754]: I0105 21:31:31.756792 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/community-operators-n9kmf" Jan 05 21:31:31 crc kubenswrapper[4754]: I0105 21:31:31.757714 4754 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"9188d8f3d09ce99a1657898136234cc41b8ac824abed6024a0e41bdfc7a64fc9"} pod="openshift-marketplace/community-operators-n9kmf" containerMessage="Container registry-server failed liveness probe, will be restarted" Jan 05 21:31:31 crc kubenswrapper[4754]: I0105 21:31:31.757741 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-n9kmf" podUID="187150cd-d7a9-4dfd-8151-0e6a88e82ddc" containerName="registry-server" containerID="cri-o://9188d8f3d09ce99a1657898136234cc41b8ac824abed6024a0e41bdfc7a64fc9" gracePeriod=30 Jan 05 21:31:31 crc kubenswrapper[4754]: I0105 21:31:31.757883 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-j8r5p" podUID="5f585df1-958f-4733-a720-2d37460d2b12" containerName="registry-server" probeResult="failure" output="command timed out" Jan 05 21:31:31 crc kubenswrapper[4754]: I0105 21:31:31.757908 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/certified-operators-j8r5p" Jan 05 21:31:31 crc kubenswrapper[4754]: I0105 21:31:31.758433 4754 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"da91bb74f81bebc2d171dcc204fb957f66159f0ed6709642769f45790857148b"} 
pod="openshift-marketplace/certified-operators-j8r5p" containerMessage="Container registry-server failed liveness probe, will be restarted" Jan 05 21:31:31 crc kubenswrapper[4754]: I0105 21:31:31.758453 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-j8r5p" podUID="5f585df1-958f-4733-a720-2d37460d2b12" containerName="registry-server" containerID="cri-o://da91bb74f81bebc2d171dcc204fb957f66159f0ed6709642769f45790857148b" gracePeriod=30 Jan 05 21:31:31 crc kubenswrapper[4754]: I0105 21:31:31.759783 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-j8r5p" podUID="5f585df1-958f-4733-a720-2d37460d2b12" containerName="registry-server" probeResult="failure" output="command timed out" Jan 05 21:31:31 crc kubenswrapper[4754]: I0105 21:31:31.759862 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-j8r5p" Jan 05 21:31:31 crc kubenswrapper[4754]: I0105 21:31:31.759893 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-n9kmf" podUID="187150cd-d7a9-4dfd-8151-0e6a88e82ddc" containerName="registry-server" probeResult="failure" output="command timed out" Jan 05 21:31:31 crc kubenswrapper[4754]: I0105 21:31:31.759941 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-n9kmf" Jan 05 21:31:31 crc kubenswrapper[4754]: I0105 21:31:31.839564 4754 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"345954f2094fb20a27a1f7d9f7b8c9f4e6fa5c852358fded16bcd8fc04337632"} pod="openstack/openstack-cell1-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Jan 05 21:31:32 crc kubenswrapper[4754]: I0105 21:31:32.148960 4754 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openstack-operators/openstack-operator-controller-manager-74f9c55c9c-f9rnv" podUID="a078215d-9fb5-413f-b542-ca5b3c6fb296" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/healthz\": dial tcp 10.217.0.121:8081: connect: connection refused" Jan 05 21:31:32 crc kubenswrapper[4754]: I0105 21:31:32.148998 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-74f9c55c9c-f9rnv" podUID="a078215d-9fb5-413f-b542-ca5b3c6fb296" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": dial tcp 10.217.0.121:8081: connect: connection refused" Jan 05 21:31:32 crc kubenswrapper[4754]: I0105 21:31:32.149348 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-74f9c55c9c-f9rnv" Jan 05 21:31:32 crc kubenswrapper[4754]: I0105 21:31:32.150081 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-74f9c55c9c-f9rnv" podUID="a078215d-9fb5-413f-b542-ca5b3c6fb296" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": dial tcp 10.217.0.121:8081: connect: connection refused" Jan 05 21:31:32 crc kubenswrapper[4754]: I0105 21:31:32.177881 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-njtd4" podUID="03f29d3f-9221-484d-aa70-8889d57f7de1" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 21:31:32 crc kubenswrapper[4754]: I0105 21:31:32.395842 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-97vwr" podUID="a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c" containerName="registry-server" probeResult="failure" output=< Jan 05 21:31:32 crc kubenswrapper[4754]: timeout: failed to connect service ":50051" within 1s Jan 05 
21:31:32 crc kubenswrapper[4754]: > Jan 05 21:31:32 crc kubenswrapper[4754]: I0105 21:31:32.475906 4754 generic.go:334] "Generic (PLEG): container finished" podID="9f282a77-59b3-4b2c-8c62-a2526d2a77b5" containerID="d5ae0da10b6f9aa06b9e2593943f28d9156c1950ac1ad2334e9bf55adda75217" exitCode=0 Jan 05 21:31:32 crc kubenswrapper[4754]: I0105 21:31:32.476011 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pjzck" event={"ID":"9f282a77-59b3-4b2c-8c62-a2526d2a77b5","Type":"ContainerDied","Data":"d5ae0da10b6f9aa06b9e2593943f28d9156c1950ac1ad2334e9bf55adda75217"} Jan 05 21:31:32 crc kubenswrapper[4754]: I0105 21:31:32.478125 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7bb596d4b9-gbtbf" event={"ID":"6836e11d-3e01-4752-ba84-0ba74829283f","Type":"ContainerStarted","Data":"fb282aefaeef911118404bd9af2bec735966bb82e21c32cf7c8b6d8a818800f7"} Jan 05 21:31:32 crc kubenswrapper[4754]: I0105 21:31:32.479327 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-7bb596d4b9-gbtbf" Jan 05 21:31:32 crc kubenswrapper[4754]: I0105 21:31:32.492806 4754 generic.go:334] "Generic (PLEG): container finished" podID="983e4f4a-fe90-4460-ad97-b6955a888933" containerID="8f097002c9f1cb4f5e7f91dd654d1b087fd4eaf0917a1610b2b121066aeb040d" exitCode=1 Jan 05 21:31:32 crc kubenswrapper[4754]: I0105 21:31:32.492906 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-69j2s" event={"ID":"983e4f4a-fe90-4460-ad97-b6955a888933","Type":"ContainerDied","Data":"8f097002c9f1cb4f5e7f91dd654d1b087fd4eaf0917a1610b2b121066aeb040d"} Jan 05 21:31:32 crc kubenswrapper[4754]: I0105 21:31:32.515714 4754 scope.go:117] "RemoveContainer" containerID="8f097002c9f1cb4f5e7f91dd654d1b087fd4eaf0917a1610b2b121066aeb040d" Jan 05 21:31:32 crc 
kubenswrapper[4754]: I0105 21:31:32.517878 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-z6whp" event={"ID":"8118d4b3-34f3-49b4-ab29-1a2b17adacfb","Type":"ContainerStarted","Data":"45893e187d19547dfb3f5831ce067f663e68baa1be20b49460507c72aac6f958"} Jan 05 21:31:32 crc kubenswrapper[4754]: I0105 21:31:32.518098 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-z6whp" Jan 05 21:31:32 crc kubenswrapper[4754]: I0105 21:31:32.520775 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-679tw" event={"ID":"c0d4f5db-f43e-4812-8e5b-5f1efcdcb913","Type":"ContainerStarted","Data":"c148b97277f8031021d44260aa1b18999262e94c1828a677d6398d9fdb266eb8"} Jan 05 21:31:32 crc kubenswrapper[4754]: I0105 21:31:32.521332 4754 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-679tw container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" start-of-body= Jan 05 21:31:32 crc kubenswrapper[4754]: I0105 21:31:32.521367 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-679tw" podUID="c0d4f5db-f43e-4812-8e5b-5f1efcdcb913" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" Jan 05 21:31:32 crc kubenswrapper[4754]: I0105 21:31:32.524570 4754 generic.go:334] "Generic (PLEG): container finished" podID="82f028d6-51a7-461a-ae7d-cd2da5f47afb" containerID="24624dc865f23b6f5bc7df4177f1d92b1f1c3263b21130db4224d697e905c692" exitCode=1 Jan 05 21:31:32 crc kubenswrapper[4754]: I0105 21:31:32.524648 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-jcntk" event={"ID":"82f028d6-51a7-461a-ae7d-cd2da5f47afb","Type":"ContainerDied","Data":"24624dc865f23b6f5bc7df4177f1d92b1f1c3263b21130db4224d697e905c692"} Jan 05 21:31:32 crc kubenswrapper[4754]: I0105 21:31:32.525704 4754 scope.go:117] "RemoveContainer" containerID="24624dc865f23b6f5bc7df4177f1d92b1f1c3263b21130db4224d697e905c692" Jan 05 21:31:32 crc kubenswrapper[4754]: I0105 21:31:32.529796 4754 generic.go:334] "Generic (PLEG): container finished" podID="6d71c5c9-f75a-475f-880c-d234d43ad7d9" containerID="c0166e8e12d4ab5b79585b0bd871ca8bc3a4da94b180c243681eb616a5f8ac5f" exitCode=1 Jan 05 21:31:32 crc kubenswrapper[4754]: I0105 21:31:32.529877 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-h56wh" event={"ID":"6d71c5c9-f75a-475f-880c-d234d43ad7d9","Type":"ContainerDied","Data":"c0166e8e12d4ab5b79585b0bd871ca8bc3a4da94b180c243681eb616a5f8ac5f"} Jan 05 21:31:32 crc kubenswrapper[4754]: I0105 21:31:32.530692 4754 scope.go:117] "RemoveContainer" containerID="c0166e8e12d4ab5b79585b0bd871ca8bc3a4da94b180c243681eb616a5f8ac5f" Jan 05 21:31:32 crc kubenswrapper[4754]: I0105 21:31:32.534075 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-lrhk6" event={"ID":"92928a21-7bbe-44b9-9d2b-2fcce8d0dd1f","Type":"ContainerStarted","Data":"95f52df148983dafc6666136db4458270af108fda7ab0e3b05f766ae01b87285"} Jan 05 21:31:32 crc kubenswrapper[4754]: I0105 21:31:32.535191 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-lrhk6" Jan 05 21:31:32 crc kubenswrapper[4754]: I0105 21:31:32.537997 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-2g75d" 
event={"ID":"736d23ce-6bc0-439b-b1ff-86aad6363c2a","Type":"ContainerStarted","Data":"b0077190b629858b39c7f1fc5144e2346fcdd983838b115696c9aee650e73e27"} Jan 05 21:31:32 crc kubenswrapper[4754]: I0105 21:31:32.538201 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-2g75d" Jan 05 21:31:32 crc kubenswrapper[4754]: I0105 21:31:32.540969 4754 generic.go:334] "Generic (PLEG): container finished" podID="83cc207a-0725-4775-b2f7-93c71985ba1e" containerID="a065810edd7a850ba5d2c09ae4c596683936333d62fd02aaf78f68cdcb8aea68" exitCode=1 Jan 05 21:31:32 crc kubenswrapper[4754]: I0105 21:31:32.541114 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-hdh6j" event={"ID":"83cc207a-0725-4775-b2f7-93c71985ba1e","Type":"ContainerDied","Data":"a065810edd7a850ba5d2c09ae4c596683936333d62fd02aaf78f68cdcb8aea68"} Jan 05 21:31:32 crc kubenswrapper[4754]: I0105 21:31:32.544567 4754 scope.go:117] "RemoveContainer" containerID="a065810edd7a850ba5d2c09ae4c596683936333d62fd02aaf78f68cdcb8aea68" Jan 05 21:31:32 crc kubenswrapper[4754]: I0105 21:31:32.756127 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="81aaed05-ed65-4414-bf4f-7e5e4cf9966a" containerName="galera" probeResult="failure" output="command timed out" Jan 05 21:31:32 crc kubenswrapper[4754]: I0105 21:31:32.978174 4754 patch_prober.go:28] interesting pod/logging-loki-gateway-586cd7f6-k6trg container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:32 crc kubenswrapper[4754]: I0105 21:31:32.978560 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-586cd7f6-k6trg" podUID="cef8ee76-7c6e-420e-8c38-a7ad816cd513" 
containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:33 crc kubenswrapper[4754]: I0105 21:31:33.070385 4754 patch_prober.go:28] interesting pod/logging-loki-gateway-586cd7f6-c4rps container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:31:33 crc kubenswrapper[4754]: I0105 21:31:33.070460 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-586cd7f6-c4rps" podUID="85a07def-c26c-49aa-ae32-c7772e9ebecc" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 05 21:31:33 crc kubenswrapper[4754]: I0105 21:31:33.553234 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pjzck" event={"ID":"9f282a77-59b3-4b2c-8c62-a2526d2a77b5","Type":"ContainerStarted","Data":"b7d2b250f185f6bc4e468824880731125567a5b1795ea65db8886555c904a88b"} Jan 05 21:31:33 crc kubenswrapper[4754]: I0105 21:31:33.555665 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pjzck" Jan 05 21:31:33 crc kubenswrapper[4754]: I0105 21:31:33.558575 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-hdh6j" event={"ID":"83cc207a-0725-4775-b2f7-93c71985ba1e","Type":"ContainerStarted","Data":"3b554a908f9ee311ef45a6d4cfdc2e4a73320952385b686ba77ecd8cffc8be38"} Jan 05 21:31:33 crc kubenswrapper[4754]: I0105 21:31:33.559657 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/manila-operator-controller-manager-598945d5b8-hdh6j" Jan 05 21:31:33 crc kubenswrapper[4754]: I0105 21:31:33.560322 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 21:31:33 crc kubenswrapper[4754]: I0105 21:31:33.560676 4754 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 05 21:31:33 crc kubenswrapper[4754]: I0105 21:31:33.560726 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 05 21:31:33 crc kubenswrapper[4754]: I0105 21:31:33.561860 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vfsjd" event={"ID":"29d3a96c-7dee-4a63-945c-3fef7cdcc7e7","Type":"ContainerStarted","Data":"efad7d95a73a9af6afa6e1642459b21937ddcbca2a3ee12c1890e2139a79507b"} Jan 05 21:31:33 crc kubenswrapper[4754]: I0105 21:31:33.564016 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-69j2s" event={"ID":"983e4f4a-fe90-4460-ad97-b6955a888933","Type":"ContainerStarted","Data":"1fa487dc83979f5bdb469eb1162d444855e6b6e1013f261b58ee981951e535b9"} Jan 05 21:31:33 crc kubenswrapper[4754]: I0105 21:31:33.564587 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-69j2s" Jan 05 21:31:33 crc 
kubenswrapper[4754]: I0105 21:31:33.568248 4754 generic.go:334] "Generic (PLEG): container finished" podID="a078215d-9fb5-413f-b542-ca5b3c6fb296" containerID="cb5f702e97b13ecd5d425cbda1798976a5fac445b3b8187b8b546fc975f0c619" exitCode=1 Jan 05 21:31:33 crc kubenswrapper[4754]: I0105 21:31:33.568326 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-74f9c55c9c-f9rnv" event={"ID":"a078215d-9fb5-413f-b542-ca5b3c6fb296","Type":"ContainerDied","Data":"cb5f702e97b13ecd5d425cbda1798976a5fac445b3b8187b8b546fc975f0c619"} Jan 05 21:31:33 crc kubenswrapper[4754]: I0105 21:31:33.569107 4754 scope.go:117] "RemoveContainer" containerID="cb5f702e97b13ecd5d425cbda1798976a5fac445b3b8187b8b546fc975f0c619" Jan 05 21:31:33 crc kubenswrapper[4754]: I0105 21:31:33.571169 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-jcntk" event={"ID":"82f028d6-51a7-461a-ae7d-cd2da5f47afb","Type":"ContainerStarted","Data":"cbf4b0c691b426d2e67486320f1d1ada3387e3bee846354087a813c14c1dcc02"} Jan 05 21:31:33 crc kubenswrapper[4754]: I0105 21:31:33.571786 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-jcntk" Jan 05 21:31:33 crc kubenswrapper[4754]: I0105 21:31:33.575381 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-h56wh" event={"ID":"6d71c5c9-f75a-475f-880c-d234d43ad7d9","Type":"ContainerStarted","Data":"add27d33df06561ee898bfffcd6b4a4cdba4a3532fb24a4272c249b2b13ae53e"} Jan 05 21:31:33 crc kubenswrapper[4754]: I0105 21:31:33.575855 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-h56wh" Jan 05 21:31:33 crc kubenswrapper[4754]: I0105 21:31:33.577219 4754 patch_prober.go:28] interesting 
pod/prometheus-operator-admission-webhook-f54c54754-679tw container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" start-of-body= Jan 05 21:31:33 crc kubenswrapper[4754]: I0105 21:31:33.577254 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-679tw" podUID="c0d4f5db-f43e-4812-8e5b-5f1efcdcb913" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" Jan 05 21:31:34 crc kubenswrapper[4754]: I0105 21:31:34.587710 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-74f9c55c9c-f9rnv" event={"ID":"a078215d-9fb5-413f-b542-ca5b3c6fb296","Type":"ContainerStarted","Data":"2da0eb2a4a75dfe63637b482d70fc4e3e1883c6b3e8ad313e9d317eddf467b97"} Jan 05 21:31:34 crc kubenswrapper[4754]: I0105 21:31:34.588422 4754 scope.go:117] "RemoveContainer" containerID="83af48b757fa31e7d8bbec5ac045d2ac5b9a51634d67308251b16fb56a7e25b4" Jan 05 21:31:34 crc kubenswrapper[4754]: E0105 21:31:34.591670 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:31:35 crc kubenswrapper[4754]: I0105 21:31:35.416518 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="9af784f4-79c9-4422-bc62-a2c49c9bb7cc" containerName="galera" 
containerID="cri-o://b083adb065d09c27807217ad4c0fa1c6865f4a1d18e6a5ab19df124734c22948" gracePeriod=15 Jan 05 21:31:35 crc kubenswrapper[4754]: I0105 21:31:35.422499 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="81aaed05-ed65-4414-bf4f-7e5e4cf9966a" containerName="galera" containerID="cri-o://345954f2094fb20a27a1f7d9f7b8c9f4e6fa5c852358fded16bcd8fc04337632" gracePeriod=27 Jan 05 21:31:35 crc kubenswrapper[4754]: I0105 21:31:35.604967 4754 generic.go:334] "Generic (PLEG): container finished" podID="68c442e4-0c24-4351-84b7-ccda8b09ea2c" containerID="0a547e9ff025f817135f2b3637e5774233ea414d82ed97c7e8144c6754b93c55" exitCode=0 Jan 05 21:31:35 crc kubenswrapper[4754]: I0105 21:31:35.605066 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68c442e4-0c24-4351-84b7-ccda8b09ea2c","Type":"ContainerDied","Data":"0a547e9ff025f817135f2b3637e5774233ea414d82ed97c7e8144c6754b93c55"} Jan 05 21:31:35 crc kubenswrapper[4754]: E0105 21:31:35.994332 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="da91bb74f81bebc2d171dcc204fb957f66159f0ed6709642769f45790857148b" cmd=["grpc_health_probe","-addr=:50051"] Jan 05 21:31:35 crc kubenswrapper[4754]: E0105 21:31:35.995938 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="da91bb74f81bebc2d171dcc204fb957f66159f0ed6709642769f45790857148b" cmd=["grpc_health_probe","-addr=:50051"] Jan 05 21:31:36 crc kubenswrapper[4754]: E0105 21:31:36.000426 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, 
stdout: , stderr: , exit code -1" containerID="da91bb74f81bebc2d171dcc204fb957f66159f0ed6709642769f45790857148b" cmd=["grpc_health_probe","-addr=:50051"] Jan 05 21:31:36 crc kubenswrapper[4754]: E0105 21:31:36.000502 4754 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-marketplace/certified-operators-j8r5p" podUID="5f585df1-958f-4733-a720-2d37460d2b12" containerName="registry-server" Jan 05 21:31:36 crc kubenswrapper[4754]: I0105 21:31:36.098770 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-7b945" Jan 05 21:31:36 crc kubenswrapper[4754]: E0105 21:31:36.171917 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9188d8f3d09ce99a1657898136234cc41b8ac824abed6024a0e41bdfc7a64fc9" cmd=["grpc_health_probe","-addr=:50051"] Jan 05 21:31:36 crc kubenswrapper[4754]: E0105 21:31:36.173679 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9188d8f3d09ce99a1657898136234cc41b8ac824abed6024a0e41bdfc7a64fc9" cmd=["grpc_health_probe","-addr=:50051"] Jan 05 21:31:36 crc kubenswrapper[4754]: E0105 21:31:36.175828 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9188d8f3d09ce99a1657898136234cc41b8ac824abed6024a0e41bdfc7a64fc9" cmd=["grpc_health_probe","-addr=:50051"] Jan 05 21:31:36 crc kubenswrapper[4754]: E0105 21:31:36.175897 4754 prober.go:104] "Probe 
errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-marketplace/community-operators-n9kmf" podUID="187150cd-d7a9-4dfd-8151-0e6a88e82ddc" containerName="registry-server" Jan 05 21:31:36 crc kubenswrapper[4754]: I0105 21:31:36.202065 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-brnwf" podUID="dd93e799-6591-41d5-988a-18cc6d8c836d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": dial tcp 10.217.0.107:8081: connect: connection refused" Jan 05 21:31:36 crc kubenswrapper[4754]: I0105 21:31:36.248787 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-swt4w" Jan 05 21:31:36 crc kubenswrapper[4754]: I0105 21:31:36.259942 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-gj5xk" podUID="8df02427-4d10-41bb-9798-82cf7b8bca3e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": dial tcp 10.217.0.109:8081: connect: connection refused" Jan 05 21:31:36 crc kubenswrapper[4754]: I0105 21:31:36.447356 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-944qj" podUID="4d09717a-7822-46ae-8192-62aa7305304b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": dial tcp 10.217.0.119:8081: connect: connection refused" Jan 05 21:31:36 crc kubenswrapper[4754]: I0105 21:31:36.463129 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-s4n44" podUID="77f4456d-e6a6-466a-a74c-5276e4951784" containerName="manager" probeResult="failure" 
output="Get \"http://10.217.0.120:8081/readyz\": dial tcp 10.217.0.120:8081: connect: connection refused" Jan 05 21:31:36 crc kubenswrapper[4754]: I0105 21:31:36.494588 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-hmwzd" Jan 05 21:31:36 crc kubenswrapper[4754]: I0105 21:31:36.621779 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68c442e4-0c24-4351-84b7-ccda8b09ea2c","Type":"ContainerStarted","Data":"593f38167183ee4afea5f6355af8d535c6c4c0f86710b743333ef13d7233f873"} Jan 05 21:31:36 crc kubenswrapper[4754]: I0105 21:31:36.756970 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="68c442e4-0c24-4351-84b7-ccda8b09ea2c" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Jan 05 21:31:36 crc kubenswrapper[4754]: I0105 21:31:36.876922 4754 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-pjzck container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 05 21:31:36 crc kubenswrapper[4754]: I0105 21:31:36.876955 4754 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-pjzck container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 05 21:31:36 crc kubenswrapper[4754]: I0105 21:31:36.876973 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pjzck" podUID="9f282a77-59b3-4b2c-8c62-a2526d2a77b5" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": 
dial tcp 10.217.0.9:8443: connect: connection refused" Jan 05 21:31:36 crc kubenswrapper[4754]: I0105 21:31:36.877007 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pjzck" podUID="9f282a77-59b3-4b2c-8c62-a2526d2a77b5" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 05 21:31:37 crc kubenswrapper[4754]: I0105 21:31:37.096959 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-nvq9f" podUID="42f4b8f4-dfaf-400f-ac27-55d9ca363057" containerName="registry-server" probeResult="failure" output=< Jan 05 21:31:37 crc kubenswrapper[4754]: timeout: failed to connect service ":50051" within 1s Jan 05 21:31:37 crc kubenswrapper[4754]: > Jan 05 21:31:37 crc kubenswrapper[4754]: I0105 21:31:37.285663 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 21:31:37 crc kubenswrapper[4754]: I0105 21:31:37.892033 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-58444664d6-99b25" Jan 05 21:31:38 crc kubenswrapper[4754]: I0105 21:31:38.276815 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="2d71b2d3-db78-4e52-af70-e5108d39502b" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 21:31:38 crc kubenswrapper[4754]: E0105 21:31:38.797644 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b083adb065d09c27807217ad4c0fa1c6865f4a1d18e6a5ab19df124734c22948" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] 
Jan 05 21:31:38 crc kubenswrapper[4754]: E0105 21:31:38.805637 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b083adb065d09c27807217ad4c0fa1c6865f4a1d18e6a5ab19df124734c22948" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 05 21:31:38 crc kubenswrapper[4754]: E0105 21:31:38.833695 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b083adb065d09c27807217ad4c0fa1c6865f4a1d18e6a5ab19df124734c22948" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 05 21:31:38 crc kubenswrapper[4754]: E0105 21:31:38.833766 4754 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="9af784f4-79c9-4422-bc62-a2c49c9bb7cc" containerName="galera" Jan 05 21:31:39 crc kubenswrapper[4754]: I0105 21:31:39.087034 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-679tw" Jan 05 21:31:39 crc kubenswrapper[4754]: I0105 21:31:39.708628 4754 generic.go:334] "Generic (PLEG): container finished" podID="81aaed05-ed65-4414-bf4f-7e5e4cf9966a" containerID="345954f2094fb20a27a1f7d9f7b8c9f4e6fa5c852358fded16bcd8fc04337632" exitCode=0 Jan 05 21:31:39 crc kubenswrapper[4754]: I0105 21:31:39.708975 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"81aaed05-ed65-4414-bf4f-7e5e4cf9966a","Type":"ContainerDied","Data":"345954f2094fb20a27a1f7d9f7b8c9f4e6fa5c852358fded16bcd8fc04337632"} Jan 05 21:31:39 crc 
kubenswrapper[4754]: I0105 21:31:39.881681 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pjzck" Jan 05 21:31:39 crc kubenswrapper[4754]: I0105 21:31:39.935990 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-97vwr" podUID="a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c" containerName="registry-server" probeResult="failure" output=< Jan 05 21:31:39 crc kubenswrapper[4754]: timeout: failed to connect service ":50051" within 1s Jan 05 21:31:39 crc kubenswrapper[4754]: > Jan 05 21:31:40 crc kubenswrapper[4754]: E0105 21:31:40.388188 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 345954f2094fb20a27a1f7d9f7b8c9f4e6fa5c852358fded16bcd8fc04337632 is running failed: container process not found" containerID="345954f2094fb20a27a1f7d9f7b8c9f4e6fa5c852358fded16bcd8fc04337632" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 05 21:31:40 crc kubenswrapper[4754]: E0105 21:31:40.389080 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 345954f2094fb20a27a1f7d9f7b8c9f4e6fa5c852358fded16bcd8fc04337632 is running failed: container process not found" containerID="345954f2094fb20a27a1f7d9f7b8c9f4e6fa5c852358fded16bcd8fc04337632" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 05 21:31:40 crc kubenswrapper[4754]: E0105 21:31:40.389474 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 345954f2094fb20a27a1f7d9f7b8c9f4e6fa5c852358fded16bcd8fc04337632 is running failed: container process not found" containerID="345954f2094fb20a27a1f7d9f7b8c9f4e6fa5c852358fded16bcd8fc04337632" 
cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 05 21:31:40 crc kubenswrapper[4754]: E0105 21:31:40.389562 4754 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 345954f2094fb20a27a1f7d9f7b8c9f4e6fa5c852358fded16bcd8fc04337632 is running failed: container process not found" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="81aaed05-ed65-4414-bf4f-7e5e4cf9966a" containerName="galera" Jan 05 21:31:40 crc kubenswrapper[4754]: I0105 21:31:40.571969 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="2d71b2d3-db78-4e52-af70-e5108d39502b" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 21:31:40 crc kubenswrapper[4754]: I0105 21:31:40.727871 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 05 21:31:40 crc kubenswrapper[4754]: I0105 21:31:40.729157 4754 generic.go:334] "Generic (PLEG): container finished" podID="9af784f4-79c9-4422-bc62-a2c49c9bb7cc" containerID="b083adb065d09c27807217ad4c0fa1c6865f4a1d18e6a5ab19df124734c22948" exitCode=0 Jan 05 21:31:40 crc kubenswrapper[4754]: I0105 21:31:40.729231 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9af784f4-79c9-4422-bc62-a2c49c9bb7cc","Type":"ContainerDied","Data":"b083adb065d09c27807217ad4c0fa1c6865f4a1d18e6a5ab19df124734c22948"} Jan 05 21:31:41 crc kubenswrapper[4754]: I0105 21:31:41.680079 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-2g75d" Jan 05 21:31:41 crc kubenswrapper[4754]: I0105 21:31:41.740936 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"81aaed05-ed65-4414-bf4f-7e5e4cf9966a","Type":"ContainerStarted","Data":"1f61085fc2b8f95af4eb734b13a69ead1c29c083e56ef054bb53d898cd3f9804"} Jan 05 21:31:41 crc kubenswrapper[4754]: I0105 21:31:41.743941 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9af784f4-79c9-4422-bc62-a2c49c9bb7cc","Type":"ContainerStarted","Data":"d4b6dd79983cb0220aa92dae2453546451ebba304b4049715919490dc06bfbe5"} Jan 05 21:31:42 crc kubenswrapper[4754]: I0105 21:31:42.148438 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-74f9c55c9c-f9rnv" Jan 05 21:31:42 crc kubenswrapper[4754]: I0105 21:31:42.155133 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-74f9c55c9c-f9rnv" Jan 05 21:31:42 crc kubenswrapper[4754]: I0105 21:31:42.217331 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd72b9pp" Jan 05 21:31:42 crc kubenswrapper[4754]: I0105 21:31:42.874517 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-5c7d94bdc4-k9968" Jan 05 21:31:43 crc kubenswrapper[4754]: I0105 21:31:43.570807 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="2d71b2d3-db78-4e52-af70-e5108d39502b" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 21:31:43 crc kubenswrapper[4754]: I0105 21:31:43.570901 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 05 21:31:43 crc kubenswrapper[4754]: I0105 21:31:43.572219 4754 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-scheduler" 
containerStatusID={"Type":"cri-o","ID":"3a89b83ea5be92ca5e9f43c30abe5d876dae5deb4fefeae5d114122a6d756d84"} pod="openstack/cinder-scheduler-0" containerMessage="Container cinder-scheduler failed liveness probe, will be restarted" Jan 05 21:31:43 crc kubenswrapper[4754]: I0105 21:31:43.572405 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="2d71b2d3-db78-4e52-af70-e5108d39502b" containerName="cinder-scheduler" containerID="cri-o://3a89b83ea5be92ca5e9f43c30abe5d876dae5deb4fefeae5d114122a6d756d84" gracePeriod=30 Jan 05 21:31:43 crc kubenswrapper[4754]: I0105 21:31:43.579734 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 21:31:43 crc kubenswrapper[4754]: I0105 21:31:43.583666 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 21:31:44 crc kubenswrapper[4754]: I0105 21:31:44.783001 4754 generic.go:334] "Generic (PLEG): container finished" podID="2d71b2d3-db78-4e52-af70-e5108d39502b" containerID="3a89b83ea5be92ca5e9f43c30abe5d876dae5deb4fefeae5d114122a6d756d84" exitCode=0 Jan 05 21:31:44 crc kubenswrapper[4754]: I0105 21:31:44.783153 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2d71b2d3-db78-4e52-af70-e5108d39502b","Type":"ContainerDied","Data":"3a89b83ea5be92ca5e9f43c30abe5d876dae5deb4fefeae5d114122a6d756d84"} Jan 05 21:31:45 crc kubenswrapper[4754]: I0105 21:31:45.175003 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-7bb596d4b9-gbtbf" Jan 05 21:31:45 crc kubenswrapper[4754]: I0105 21:31:45.797719 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"2d71b2d3-db78-4e52-af70-e5108d39502b","Type":"ContainerStarted","Data":"478d87c67266ea1ef4c652667366da80d1cf7a178e68a232a8821c6bad6d7b4b"} Jan 05 21:31:45 crc kubenswrapper[4754]: E0105 21:31:45.991365 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="da91bb74f81bebc2d171dcc204fb957f66159f0ed6709642769f45790857148b" cmd=["grpc_health_probe","-addr=:50051"] Jan 05 21:31:45 crc kubenswrapper[4754]: E0105 21:31:45.993848 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="da91bb74f81bebc2d171dcc204fb957f66159f0ed6709642769f45790857148b" cmd=["grpc_health_probe","-addr=:50051"] Jan 05 21:31:45 crc kubenswrapper[4754]: E0105 21:31:45.995257 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="da91bb74f81bebc2d171dcc204fb957f66159f0ed6709642769f45790857148b" cmd=["grpc_health_probe","-addr=:50051"] Jan 05 21:31:45 crc kubenswrapper[4754]: E0105 21:31:45.995354 4754 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-marketplace/certified-operators-j8r5p" podUID="5f585df1-958f-4733-a720-2d37460d2b12" containerName="registry-server" Jan 05 21:31:46 crc kubenswrapper[4754]: I0105 21:31:46.128892 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-lrhk6" Jan 05 21:31:46 crc kubenswrapper[4754]: E0105 21:31:46.176766 4754 log.go:32] "ExecSync cmd from 
runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9188d8f3d09ce99a1657898136234cc41b8ac824abed6024a0e41bdfc7a64fc9" cmd=["grpc_health_probe","-addr=:50051"] Jan 05 21:31:46 crc kubenswrapper[4754]: E0105 21:31:46.193750 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9188d8f3d09ce99a1657898136234cc41b8ac824abed6024a0e41bdfc7a64fc9" cmd=["grpc_health_probe","-addr=:50051"] Jan 05 21:31:46 crc kubenswrapper[4754]: E0105 21:31:46.195736 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9188d8f3d09ce99a1657898136234cc41b8ac824abed6024a0e41bdfc7a64fc9" cmd=["grpc_health_probe","-addr=:50051"] Jan 05 21:31:46 crc kubenswrapper[4754]: E0105 21:31:46.195819 4754 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-marketplace/community-operators-n9kmf" podUID="187150cd-d7a9-4dfd-8151-0e6a88e82ddc" containerName="registry-server" Jan 05 21:31:46 crc kubenswrapper[4754]: I0105 21:31:46.204914 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-brnwf" Jan 05 21:31:46 crc kubenswrapper[4754]: I0105 21:31:46.274134 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-69j2s" Jan 05 21:31:46 crc kubenswrapper[4754]: I0105 21:31:46.274514 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-gj5xk" Jan 05 21:31:46 crc kubenswrapper[4754]: I0105 21:31:46.281950 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-hdh6j" Jan 05 21:31:46 crc kubenswrapper[4754]: I0105 21:31:46.301271 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-h56wh" Jan 05 21:31:46 crc kubenswrapper[4754]: I0105 21:31:46.374116 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-jcntk" Jan 05 21:31:46 crc kubenswrapper[4754]: I0105 21:31:46.465023 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-s4n44" Jan 05 21:31:46 crc kubenswrapper[4754]: I0105 21:31:46.474478 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-944qj" Jan 05 21:31:46 crc kubenswrapper[4754]: I0105 21:31:46.481328 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-z6whp" Jan 05 21:31:46 crc kubenswrapper[4754]: I0105 21:31:46.613223 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-nvq9f" podUID="42f4b8f4-dfaf-400f-ac27-55d9ca363057" containerName="registry-server" probeResult="failure" output=< Jan 05 21:31:46 crc kubenswrapper[4754]: timeout: failed to connect service ":50051" within 1s Jan 05 21:31:46 crc kubenswrapper[4754]: > Jan 05 21:31:47 crc kubenswrapper[4754]: I0105 21:31:47.554656 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 05 21:31:48 crc kubenswrapper[4754]: I0105 21:31:48.590191 4754 
scope.go:117] "RemoveContainer" containerID="83af48b757fa31e7d8bbec5ac045d2ac5b9a51634d67308251b16fb56a7e25b4" Jan 05 21:31:48 crc kubenswrapper[4754]: E0105 21:31:48.590635 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:31:48 crc kubenswrapper[4754]: I0105 21:31:48.781667 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 05 21:31:48 crc kubenswrapper[4754]: I0105 21:31:48.781727 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 05 21:31:49 crc kubenswrapper[4754]: I0105 21:31:49.845516 4754 generic.go:334] "Generic (PLEG): container finished" podID="5f585df1-958f-4733-a720-2d37460d2b12" containerID="da91bb74f81bebc2d171dcc204fb957f66159f0ed6709642769f45790857148b" exitCode=0 Jan 05 21:31:49 crc kubenswrapper[4754]: I0105 21:31:49.845600 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8r5p" event={"ID":"5f585df1-958f-4733-a720-2d37460d2b12","Type":"ContainerDied","Data":"da91bb74f81bebc2d171dcc204fb957f66159f0ed6709642769f45790857148b"} Jan 05 21:31:49 crc kubenswrapper[4754]: I0105 21:31:49.906587 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-97vwr" podUID="a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c" containerName="registry-server" probeResult="failure" output=< Jan 05 21:31:49 crc kubenswrapper[4754]: timeout: failed to connect service ":50051" within 1s Jan 05 21:31:49 crc kubenswrapper[4754]: > Jan 05 21:31:50 crc kubenswrapper[4754]: I0105 
21:31:50.387478 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 05 21:31:50 crc kubenswrapper[4754]: I0105 21:31:50.387785 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 05 21:31:50 crc kubenswrapper[4754]: I0105 21:31:50.858976 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8r5p" event={"ID":"5f585df1-958f-4733-a720-2d37460d2b12","Type":"ContainerStarted","Data":"da79c85e3a9a75e735807d9e034d62162c421fcc0ae3948c893c3c518d56ff8d"} Jan 05 21:31:50 crc kubenswrapper[4754]: I0105 21:31:50.861974 4754 generic.go:334] "Generic (PLEG): container finished" podID="187150cd-d7a9-4dfd-8151-0e6a88e82ddc" containerID="9188d8f3d09ce99a1657898136234cc41b8ac824abed6024a0e41bdfc7a64fc9" exitCode=0 Jan 05 21:31:50 crc kubenswrapper[4754]: I0105 21:31:50.862014 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n9kmf" event={"ID":"187150cd-d7a9-4dfd-8151-0e6a88e82ddc","Type":"ContainerDied","Data":"9188d8f3d09ce99a1657898136234cc41b8ac824abed6024a0e41bdfc7a64fc9"} Jan 05 21:31:51 crc kubenswrapper[4754]: I0105 21:31:51.874456 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n9kmf" event={"ID":"187150cd-d7a9-4dfd-8151-0e6a88e82ddc","Type":"ContainerStarted","Data":"2f7c16f31589857af2e36a0ff8658f4fa4b2388b5cb6226a7f62f5d8d71c9397"} Jan 05 21:31:52 crc kubenswrapper[4754]: I0105 21:31:52.567349 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="2d71b2d3-db78-4e52-af70-e5108d39502b" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 21:31:54 crc kubenswrapper[4754]: I0105 21:31:54.594917 4754 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-authentication/oauth-openshift-58444664d6-99b25" podUID="c82fe254-bc85-4771-b358-017afaff55e9" containerName="oauth-openshift" containerID="cri-o://b644439300225a379828b8227d8178a1729687043266ca303a6bf8041c572bde" gracePeriod=15 Jan 05 21:31:55 crc kubenswrapper[4754]: I0105 21:31:55.632115 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nvq9f" Jan 05 21:31:55 crc kubenswrapper[4754]: I0105 21:31:55.709116 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nvq9f" Jan 05 21:31:55 crc kubenswrapper[4754]: I0105 21:31:55.870539 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nvq9f"] Jan 05 21:31:55 crc kubenswrapper[4754]: I0105 21:31:55.924102 4754 generic.go:334] "Generic (PLEG): container finished" podID="c82fe254-bc85-4771-b358-017afaff55e9" containerID="b644439300225a379828b8227d8178a1729687043266ca303a6bf8041c572bde" exitCode=0 Jan 05 21:31:55 crc kubenswrapper[4754]: I0105 21:31:55.924192 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-58444664d6-99b25" event={"ID":"c82fe254-bc85-4771-b358-017afaff55e9","Type":"ContainerDied","Data":"b644439300225a379828b8227d8178a1729687043266ca303a6bf8041c572bde"} Jan 05 21:31:55 crc kubenswrapper[4754]: I0105 21:31:55.989453 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-j8r5p" Jan 05 21:31:55 crc kubenswrapper[4754]: I0105 21:31:55.989565 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-j8r5p" Jan 05 21:31:56 crc kubenswrapper[4754]: I0105 21:31:56.170154 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-n9kmf" Jan 05 21:31:56 crc kubenswrapper[4754]: I0105 
21:31:56.170503 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-n9kmf" Jan 05 21:31:56 crc kubenswrapper[4754]: I0105 21:31:56.946431 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nvq9f" podUID="42f4b8f4-dfaf-400f-ac27-55d9ca363057" containerName="registry-server" containerID="cri-o://9fc1fba5ba4125316ae3b86e5a2986d9f30cbd31123327fbea6b5b7ab946d37f" gracePeriod=2 Jan 05 21:31:56 crc kubenswrapper[4754]: I0105 21:31:56.949047 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-58444664d6-99b25" event={"ID":"c82fe254-bc85-4771-b358-017afaff55e9","Type":"ContainerStarted","Data":"e7ceed068494506c3f36a01151609624472a2bc4bc1330ab7b6a3643fcd7c537"} Jan 05 21:31:56 crc kubenswrapper[4754]: I0105 21:31:56.949544 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-58444664d6-99b25" Jan 05 21:31:56 crc kubenswrapper[4754]: I0105 21:31:56.977229 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-58444664d6-99b25" Jan 05 21:31:57 crc kubenswrapper[4754]: I0105 21:31:57.117495 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-j8r5p" podUID="5f585df1-958f-4733-a720-2d37460d2b12" containerName="registry-server" probeResult="failure" output=< Jan 05 21:31:57 crc kubenswrapper[4754]: timeout: failed to connect service ":50051" within 1s Jan 05 21:31:57 crc kubenswrapper[4754]: > Jan 05 21:31:57 crc kubenswrapper[4754]: I0105 21:31:57.218787 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-n9kmf" podUID="187150cd-d7a9-4dfd-8151-0e6a88e82ddc" containerName="registry-server" probeResult="failure" output=< Jan 05 21:31:57 crc kubenswrapper[4754]: timeout: 
failed to connect service ":50051" within 1s Jan 05 21:31:57 crc kubenswrapper[4754]: > Jan 05 21:31:57 crc kubenswrapper[4754]: I0105 21:31:57.957192 4754 generic.go:334] "Generic (PLEG): container finished" podID="42f4b8f4-dfaf-400f-ac27-55d9ca363057" containerID="9fc1fba5ba4125316ae3b86e5a2986d9f30cbd31123327fbea6b5b7ab946d37f" exitCode=0 Jan 05 21:31:57 crc kubenswrapper[4754]: I0105 21:31:57.957278 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nvq9f" event={"ID":"42f4b8f4-dfaf-400f-ac27-55d9ca363057","Type":"ContainerDied","Data":"9fc1fba5ba4125316ae3b86e5a2986d9f30cbd31123327fbea6b5b7ab946d37f"} Jan 05 21:31:57 crc kubenswrapper[4754]: I0105 21:31:57.959043 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="2d71b2d3-db78-4e52-af70-e5108d39502b" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 21:31:58 crc kubenswrapper[4754]: I0105 21:31:58.622516 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nvq9f" Jan 05 21:31:58 crc kubenswrapper[4754]: I0105 21:31:58.675936 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nzc7\" (UniqueName: \"kubernetes.io/projected/42f4b8f4-dfaf-400f-ac27-55d9ca363057-kube-api-access-5nzc7\") pod \"42f4b8f4-dfaf-400f-ac27-55d9ca363057\" (UID: \"42f4b8f4-dfaf-400f-ac27-55d9ca363057\") " Jan 05 21:31:58 crc kubenswrapper[4754]: I0105 21:31:58.676114 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42f4b8f4-dfaf-400f-ac27-55d9ca363057-catalog-content\") pod \"42f4b8f4-dfaf-400f-ac27-55d9ca363057\" (UID: \"42f4b8f4-dfaf-400f-ac27-55d9ca363057\") " Jan 05 21:31:58 crc kubenswrapper[4754]: I0105 21:31:58.676139 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42f4b8f4-dfaf-400f-ac27-55d9ca363057-utilities\") pod \"42f4b8f4-dfaf-400f-ac27-55d9ca363057\" (UID: \"42f4b8f4-dfaf-400f-ac27-55d9ca363057\") " Jan 05 21:31:58 crc kubenswrapper[4754]: I0105 21:31:58.677629 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42f4b8f4-dfaf-400f-ac27-55d9ca363057-utilities" (OuterVolumeSpecName: "utilities") pod "42f4b8f4-dfaf-400f-ac27-55d9ca363057" (UID: "42f4b8f4-dfaf-400f-ac27-55d9ca363057"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:31:58 crc kubenswrapper[4754]: I0105 21:31:58.697627 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42f4b8f4-dfaf-400f-ac27-55d9ca363057-kube-api-access-5nzc7" (OuterVolumeSpecName: "kube-api-access-5nzc7") pod "42f4b8f4-dfaf-400f-ac27-55d9ca363057" (UID: "42f4b8f4-dfaf-400f-ac27-55d9ca363057"). InnerVolumeSpecName "kube-api-access-5nzc7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:31:58 crc kubenswrapper[4754]: I0105 21:31:58.748608 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42f4b8f4-dfaf-400f-ac27-55d9ca363057-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "42f4b8f4-dfaf-400f-ac27-55d9ca363057" (UID: "42f4b8f4-dfaf-400f-ac27-55d9ca363057"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:31:58 crc kubenswrapper[4754]: I0105 21:31:58.779637 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nzc7\" (UniqueName: \"kubernetes.io/projected/42f4b8f4-dfaf-400f-ac27-55d9ca363057-kube-api-access-5nzc7\") on node \"crc\" DevicePath \"\"" Jan 05 21:31:58 crc kubenswrapper[4754]: I0105 21:31:58.779669 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42f4b8f4-dfaf-400f-ac27-55d9ca363057-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 21:31:58 crc kubenswrapper[4754]: I0105 21:31:58.779679 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42f4b8f4-dfaf-400f-ac27-55d9ca363057-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 21:31:58 crc kubenswrapper[4754]: I0105 21:31:58.970983 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nvq9f" Jan 05 21:31:58 crc kubenswrapper[4754]: I0105 21:31:58.971043 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nvq9f" event={"ID":"42f4b8f4-dfaf-400f-ac27-55d9ca363057","Type":"ContainerDied","Data":"56e00cd0e613d7bf48c7f9e0cd4a7b9b288346f94bdec9cada4ace48cffff31e"} Jan 05 21:31:58 crc kubenswrapper[4754]: I0105 21:31:58.971099 4754 scope.go:117] "RemoveContainer" containerID="9fc1fba5ba4125316ae3b86e5a2986d9f30cbd31123327fbea6b5b7ab946d37f" Jan 05 21:31:59 crc kubenswrapper[4754]: I0105 21:31:59.008137 4754 scope.go:117] "RemoveContainer" containerID="4e3bdd514e39c2d6c38f3a7a9c8206aa6fe6155cf0018a0a03cd227771796eaa" Jan 05 21:31:59 crc kubenswrapper[4754]: I0105 21:31:59.043477 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nvq9f"] Jan 05 21:31:59 crc kubenswrapper[4754]: I0105 21:31:59.043765 4754 scope.go:117] "RemoveContainer" containerID="0a703e1d37b48dc36b4d690112c379d4a259532a4500eadf327cd2d2e0c49f3a" Jan 05 21:31:59 crc kubenswrapper[4754]: I0105 21:31:59.059119 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nvq9f"] Jan 05 21:31:59 crc kubenswrapper[4754]: I0105 21:31:59.603837 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42f4b8f4-dfaf-400f-ac27-55d9ca363057" path="/var/lib/kubelet/pods/42f4b8f4-dfaf-400f-ac27-55d9ca363057/volumes" Jan 05 21:31:59 crc kubenswrapper[4754]: I0105 21:31:59.899306 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-97vwr" podUID="a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c" containerName="registry-server" probeResult="failure" output=< Jan 05 21:31:59 crc kubenswrapper[4754]: timeout: failed to connect service ":50051" within 1s Jan 05 21:31:59 crc kubenswrapper[4754]: > Jan 05 21:32:00 crc kubenswrapper[4754]: I0105 
21:32:00.588615 4754 scope.go:117] "RemoveContainer" containerID="83af48b757fa31e7d8bbec5ac045d2ac5b9a51634d67308251b16fb56a7e25b4" Jan 05 21:32:00 crc kubenswrapper[4754]: E0105 21:32:00.589477 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:32:02 crc kubenswrapper[4754]: I0105 21:32:02.588934 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="2d71b2d3-db78-4e52-af70-e5108d39502b" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 21:32:03 crc kubenswrapper[4754]: I0105 21:32:03.608195 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-79f77fb8f4-2q9lk" Jan 05 21:32:06 crc kubenswrapper[4754]: I0105 21:32:06.050492 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-j8r5p" Jan 05 21:32:06 crc kubenswrapper[4754]: I0105 21:32:06.112084 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-j8r5p" Jan 05 21:32:06 crc kubenswrapper[4754]: I0105 21:32:06.230952 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-n9kmf" Jan 05 21:32:06 crc kubenswrapper[4754]: I0105 21:32:06.282717 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-n9kmf" Jan 05 21:32:07 crc kubenswrapper[4754]: I0105 21:32:07.571229 4754 prober.go:107] "Probe failed" 
probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="2d71b2d3-db78-4e52-af70-e5108d39502b" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 21:32:08 crc kubenswrapper[4754]: I0105 21:32:08.901419 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-97vwr" Jan 05 21:32:08 crc kubenswrapper[4754]: I0105 21:32:08.956350 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-97vwr" Jan 05 21:32:12 crc kubenswrapper[4754]: I0105 21:32:12.576433 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="2d71b2d3-db78-4e52-af70-e5108d39502b" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 21:32:14 crc kubenswrapper[4754]: I0105 21:32:14.550540 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-97vwr"] Jan 05 21:32:14 crc kubenswrapper[4754]: I0105 21:32:14.551448 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-97vwr" podUID="a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c" containerName="registry-server" containerID="cri-o://b9638ab4b0e78f836c6e31d0a72372b541881965028cc0dfa14cce28a95660d9" gracePeriod=2 Jan 05 21:32:15 crc kubenswrapper[4754]: I0105 21:32:15.137809 4754 generic.go:334] "Generic (PLEG): container finished" podID="a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c" containerID="b9638ab4b0e78f836c6e31d0a72372b541881965028cc0dfa14cce28a95660d9" exitCode=0 Jan 05 21:32:15 crc kubenswrapper[4754]: I0105 21:32:15.138075 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-97vwr" event={"ID":"a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c","Type":"ContainerDied","Data":"b9638ab4b0e78f836c6e31d0a72372b541881965028cc0dfa14cce28a95660d9"} Jan 
05 21:32:15 crc kubenswrapper[4754]: I0105 21:32:15.375601 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-97vwr" Jan 05 21:32:15 crc kubenswrapper[4754]: I0105 21:32:15.498542 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c-catalog-content\") pod \"a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c\" (UID: \"a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c\") " Jan 05 21:32:15 crc kubenswrapper[4754]: I0105 21:32:15.498656 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdk7w\" (UniqueName: \"kubernetes.io/projected/a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c-kube-api-access-kdk7w\") pod \"a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c\" (UID: \"a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c\") " Jan 05 21:32:15 crc kubenswrapper[4754]: I0105 21:32:15.498687 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c-utilities\") pod \"a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c\" (UID: \"a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c\") " Jan 05 21:32:15 crc kubenswrapper[4754]: I0105 21:32:15.500186 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c-utilities" (OuterVolumeSpecName: "utilities") pod "a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c" (UID: "a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:32:15 crc kubenswrapper[4754]: I0105 21:32:15.511156 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c-kube-api-access-kdk7w" (OuterVolumeSpecName: "kube-api-access-kdk7w") pod "a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c" (UID: "a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c"). InnerVolumeSpecName "kube-api-access-kdk7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:32:15 crc kubenswrapper[4754]: I0105 21:32:15.589653 4754 scope.go:117] "RemoveContainer" containerID="83af48b757fa31e7d8bbec5ac045d2ac5b9a51634d67308251b16fb56a7e25b4" Jan 05 21:32:15 crc kubenswrapper[4754]: E0105 21:32:15.590141 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:32:15 crc kubenswrapper[4754]: I0105 21:32:15.602050 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdk7w\" (UniqueName: \"kubernetes.io/projected/a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c-kube-api-access-kdk7w\") on node \"crc\" DevicePath \"\"" Jan 05 21:32:15 crc kubenswrapper[4754]: I0105 21:32:15.602090 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 21:32:15 crc kubenswrapper[4754]: I0105 21:32:15.622507 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod 
"a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c" (UID: "a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:32:15 crc kubenswrapper[4754]: I0105 21:32:15.704985 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 21:32:16 crc kubenswrapper[4754]: I0105 21:32:16.181234 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-97vwr" event={"ID":"a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c","Type":"ContainerDied","Data":"81cc2f8df6a7364b456e4d0cc2f3f6279ae190ff56ce1edab5bfc2a053c82b4b"} Jan 05 21:32:16 crc kubenswrapper[4754]: I0105 21:32:16.181592 4754 scope.go:117] "RemoveContainer" containerID="b9638ab4b0e78f836c6e31d0a72372b541881965028cc0dfa14cce28a95660d9" Jan 05 21:32:16 crc kubenswrapper[4754]: I0105 21:32:16.181720 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-97vwr" Jan 05 21:32:16 crc kubenswrapper[4754]: I0105 21:32:16.229877 4754 scope.go:117] "RemoveContainer" containerID="e25192091c4354817118b99dfa3d7b6c7cc88b2d112a30d441572ea278662f20" Jan 05 21:32:16 crc kubenswrapper[4754]: I0105 21:32:16.238066 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-97vwr"] Jan 05 21:32:16 crc kubenswrapper[4754]: I0105 21:32:16.255892 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-97vwr"] Jan 05 21:32:16 crc kubenswrapper[4754]: I0105 21:32:16.268236 4754 scope.go:117] "RemoveContainer" containerID="448599f161ae45687b65fa3c7b1f546ff90b45b86b979afb12a32a2047bc4619" Jan 05 21:32:17 crc kubenswrapper[4754]: I0105 21:32:17.574361 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="2d71b2d3-db78-4e52-af70-e5108d39502b" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 21:32:17 crc kubenswrapper[4754]: I0105 21:32:17.601642 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c" path="/var/lib/kubelet/pods/a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c/volumes" Jan 05 21:32:18 crc kubenswrapper[4754]: I0105 21:32:18.218666 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 05 21:32:18 crc kubenswrapper[4754]: I0105 21:32:18.333097 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 05 21:32:18 crc kubenswrapper[4754]: I0105 21:32:18.756599 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 05 21:32:18 crc kubenswrapper[4754]: I0105 21:32:18.939542 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/openstack-cell1-galera-0" Jan 05 21:32:22 crc kubenswrapper[4754]: I0105 21:32:22.584968 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 05 21:32:27 crc kubenswrapper[4754]: I0105 21:32:27.589383 4754 scope.go:117] "RemoveContainer" containerID="83af48b757fa31e7d8bbec5ac045d2ac5b9a51634d67308251b16fb56a7e25b4" Jan 05 21:32:27 crc kubenswrapper[4754]: E0105 21:32:27.590237 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:32:41 crc kubenswrapper[4754]: I0105 21:32:41.589810 4754 scope.go:117] "RemoveContainer" containerID="83af48b757fa31e7d8bbec5ac045d2ac5b9a51634d67308251b16fb56a7e25b4" Jan 05 21:32:41 crc kubenswrapper[4754]: E0105 21:32:41.590719 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:32:56 crc kubenswrapper[4754]: I0105 21:32:56.588807 4754 scope.go:117] "RemoveContainer" containerID="83af48b757fa31e7d8bbec5ac045d2ac5b9a51634d67308251b16fb56a7e25b4" Jan 05 21:32:56 crc kubenswrapper[4754]: I0105 21:32:56.986469 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" 
event={"ID":"1ce145f2-f010-4086-963c-23e68ff9e280","Type":"ContainerStarted","Data":"dfed6d366f02b34fc9de6efc5fc33958d5ae0df54ad0b1592d25df209f0e1309"} Jan 05 21:35:18 crc kubenswrapper[4754]: I0105 21:35:18.109543 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:35:18 crc kubenswrapper[4754]: I0105 21:35:18.110862 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:35:48 crc kubenswrapper[4754]: I0105 21:35:48.108846 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:35:48 crc kubenswrapper[4754]: I0105 21:35:48.109445 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:36:18 crc kubenswrapper[4754]: I0105 21:36:18.108990 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= 
Jan 05 21:36:18 crc kubenswrapper[4754]: I0105 21:36:18.109574 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:36:18 crc kubenswrapper[4754]: I0105 21:36:18.109618 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" Jan 05 21:36:18 crc kubenswrapper[4754]: I0105 21:36:18.111209 4754 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dfed6d366f02b34fc9de6efc5fc33958d5ae0df54ad0b1592d25df209f0e1309"} pod="openshift-machine-config-operator/machine-config-daemon-pkzls" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 05 21:36:18 crc kubenswrapper[4754]: I0105 21:36:18.111519 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" containerID="cri-o://dfed6d366f02b34fc9de6efc5fc33958d5ae0df54ad0b1592d25df209f0e1309" gracePeriod=600 Jan 05 21:36:18 crc kubenswrapper[4754]: I0105 21:36:18.341000 4754 generic.go:334] "Generic (PLEG): container finished" podID="1ce145f2-f010-4086-963c-23e68ff9e280" containerID="dfed6d366f02b34fc9de6efc5fc33958d5ae0df54ad0b1592d25df209f0e1309" exitCode=0 Jan 05 21:36:18 crc kubenswrapper[4754]: I0105 21:36:18.341113 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" 
event={"ID":"1ce145f2-f010-4086-963c-23e68ff9e280","Type":"ContainerDied","Data":"dfed6d366f02b34fc9de6efc5fc33958d5ae0df54ad0b1592d25df209f0e1309"} Jan 05 21:36:18 crc kubenswrapper[4754]: I0105 21:36:18.345029 4754 scope.go:117] "RemoveContainer" containerID="83af48b757fa31e7d8bbec5ac045d2ac5b9a51634d67308251b16fb56a7e25b4" Jan 05 21:36:19 crc kubenswrapper[4754]: I0105 21:36:19.363835 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" event={"ID":"1ce145f2-f010-4086-963c-23e68ff9e280","Type":"ContainerStarted","Data":"c1852f8ccbc2c749039fa17988ca08439ce35c6cee4df91b94564cd0b2182e4c"} Jan 05 21:37:51 crc kubenswrapper[4754]: I0105 21:37:51.007593 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jzn8f"] Jan 05 21:37:51 crc kubenswrapper[4754]: E0105 21:37:51.008925 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42f4b8f4-dfaf-400f-ac27-55d9ca363057" containerName="registry-server" Jan 05 21:37:51 crc kubenswrapper[4754]: I0105 21:37:51.008943 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="42f4b8f4-dfaf-400f-ac27-55d9ca363057" containerName="registry-server" Jan 05 21:37:51 crc kubenswrapper[4754]: E0105 21:37:51.009080 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42f4b8f4-dfaf-400f-ac27-55d9ca363057" containerName="extract-content" Jan 05 21:37:51 crc kubenswrapper[4754]: I0105 21:37:51.009087 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="42f4b8f4-dfaf-400f-ac27-55d9ca363057" containerName="extract-content" Jan 05 21:37:51 crc kubenswrapper[4754]: E0105 21:37:51.009123 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c" containerName="registry-server" Jan 05 21:37:51 crc kubenswrapper[4754]: I0105 21:37:51.009136 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c" 
containerName="registry-server" Jan 05 21:37:51 crc kubenswrapper[4754]: E0105 21:37:51.009152 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42f4b8f4-dfaf-400f-ac27-55d9ca363057" containerName="extract-utilities" Jan 05 21:37:51 crc kubenswrapper[4754]: I0105 21:37:51.009157 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="42f4b8f4-dfaf-400f-ac27-55d9ca363057" containerName="extract-utilities" Jan 05 21:37:51 crc kubenswrapper[4754]: E0105 21:37:51.009183 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c" containerName="extract-utilities" Jan 05 21:37:51 crc kubenswrapper[4754]: I0105 21:37:51.009190 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c" containerName="extract-utilities" Jan 05 21:37:51 crc kubenswrapper[4754]: E0105 21:37:51.009200 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c" containerName="extract-content" Jan 05 21:37:51 crc kubenswrapper[4754]: I0105 21:37:51.009206 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c" containerName="extract-content" Jan 05 21:37:51 crc kubenswrapper[4754]: I0105 21:37:51.009652 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="42f4b8f4-dfaf-400f-ac27-55d9ca363057" containerName="registry-server" Jan 05 21:37:51 crc kubenswrapper[4754]: I0105 21:37:51.009676 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6cb77e6-71b6-4b0b-b043-b9eb23ed6e9c" containerName="registry-server" Jan 05 21:37:51 crc kubenswrapper[4754]: I0105 21:37:51.013587 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jzn8f" Jan 05 21:37:51 crc kubenswrapper[4754]: I0105 21:37:51.033717 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jzn8f"] Jan 05 21:37:51 crc kubenswrapper[4754]: I0105 21:37:51.366465 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bb55294-1a47-46dd-be97-d9565d1eb99c-utilities\") pod \"community-operators-jzn8f\" (UID: \"8bb55294-1a47-46dd-be97-d9565d1eb99c\") " pod="openshift-marketplace/community-operators-jzn8f" Jan 05 21:37:51 crc kubenswrapper[4754]: I0105 21:37:51.366989 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bb55294-1a47-46dd-be97-d9565d1eb99c-catalog-content\") pod \"community-operators-jzn8f\" (UID: \"8bb55294-1a47-46dd-be97-d9565d1eb99c\") " pod="openshift-marketplace/community-operators-jzn8f" Jan 05 21:37:51 crc kubenswrapper[4754]: I0105 21:37:51.367570 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrrjq\" (UniqueName: \"kubernetes.io/projected/8bb55294-1a47-46dd-be97-d9565d1eb99c-kube-api-access-rrrjq\") pod \"community-operators-jzn8f\" (UID: \"8bb55294-1a47-46dd-be97-d9565d1eb99c\") " pod="openshift-marketplace/community-operators-jzn8f" Jan 05 21:37:51 crc kubenswrapper[4754]: I0105 21:37:51.469369 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bb55294-1a47-46dd-be97-d9565d1eb99c-utilities\") pod \"community-operators-jzn8f\" (UID: \"8bb55294-1a47-46dd-be97-d9565d1eb99c\") " pod="openshift-marketplace/community-operators-jzn8f" Jan 05 21:37:51 crc kubenswrapper[4754]: I0105 21:37:51.469407 4754 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bb55294-1a47-46dd-be97-d9565d1eb99c-catalog-content\") pod \"community-operators-jzn8f\" (UID: \"8bb55294-1a47-46dd-be97-d9565d1eb99c\") " pod="openshift-marketplace/community-operators-jzn8f" Jan 05 21:37:51 crc kubenswrapper[4754]: I0105 21:37:51.469745 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrrjq\" (UniqueName: \"kubernetes.io/projected/8bb55294-1a47-46dd-be97-d9565d1eb99c-kube-api-access-rrrjq\") pod \"community-operators-jzn8f\" (UID: \"8bb55294-1a47-46dd-be97-d9565d1eb99c\") " pod="openshift-marketplace/community-operators-jzn8f" Jan 05 21:37:51 crc kubenswrapper[4754]: I0105 21:37:51.470519 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bb55294-1a47-46dd-be97-d9565d1eb99c-utilities\") pod \"community-operators-jzn8f\" (UID: \"8bb55294-1a47-46dd-be97-d9565d1eb99c\") " pod="openshift-marketplace/community-operators-jzn8f" Jan 05 21:37:51 crc kubenswrapper[4754]: I0105 21:37:51.470519 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bb55294-1a47-46dd-be97-d9565d1eb99c-catalog-content\") pod \"community-operators-jzn8f\" (UID: \"8bb55294-1a47-46dd-be97-d9565d1eb99c\") " pod="openshift-marketplace/community-operators-jzn8f" Jan 05 21:37:51 crc kubenswrapper[4754]: I0105 21:37:51.494701 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrrjq\" (UniqueName: \"kubernetes.io/projected/8bb55294-1a47-46dd-be97-d9565d1eb99c-kube-api-access-rrrjq\") pod \"community-operators-jzn8f\" (UID: \"8bb55294-1a47-46dd-be97-d9565d1eb99c\") " pod="openshift-marketplace/community-operators-jzn8f" Jan 05 21:37:51 crc kubenswrapper[4754]: I0105 21:37:51.659118 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jzn8f" Jan 05 21:37:52 crc kubenswrapper[4754]: I0105 21:37:52.305071 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jzn8f"] Jan 05 21:37:52 crc kubenswrapper[4754]: I0105 21:37:52.508022 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jzn8f" event={"ID":"8bb55294-1a47-46dd-be97-d9565d1eb99c","Type":"ContainerStarted","Data":"1bfb8a3cd535df9600ec7a34ac4df056bbec88d71aa6a7613c2e02222ab68315"} Jan 05 21:37:53 crc kubenswrapper[4754]: I0105 21:37:53.518509 4754 generic.go:334] "Generic (PLEG): container finished" podID="8bb55294-1a47-46dd-be97-d9565d1eb99c" containerID="70090ceebb586e323acc2b992534babb90e2609c0a7a8514313f0eda4345f564" exitCode=0 Jan 05 21:37:53 crc kubenswrapper[4754]: I0105 21:37:53.518554 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jzn8f" event={"ID":"8bb55294-1a47-46dd-be97-d9565d1eb99c","Type":"ContainerDied","Data":"70090ceebb586e323acc2b992534babb90e2609c0a7a8514313f0eda4345f564"} Jan 05 21:37:53 crc kubenswrapper[4754]: I0105 21:37:53.525714 4754 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 05 21:37:55 crc kubenswrapper[4754]: I0105 21:37:55.538863 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jzn8f" event={"ID":"8bb55294-1a47-46dd-be97-d9565d1eb99c","Type":"ContainerStarted","Data":"bf87c132b7c567b093f196f42c5094ac8bd300b83cddc6783a5224daa8f24f30"} Jan 05 21:37:56 crc kubenswrapper[4754]: I0105 21:37:56.551797 4754 generic.go:334] "Generic (PLEG): container finished" podID="8bb55294-1a47-46dd-be97-d9565d1eb99c" containerID="bf87c132b7c567b093f196f42c5094ac8bd300b83cddc6783a5224daa8f24f30" exitCode=0 Jan 05 21:37:56 crc kubenswrapper[4754]: I0105 21:37:56.551897 4754 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-jzn8f" event={"ID":"8bb55294-1a47-46dd-be97-d9565d1eb99c","Type":"ContainerDied","Data":"bf87c132b7c567b093f196f42c5094ac8bd300b83cddc6783a5224daa8f24f30"} Jan 05 21:37:58 crc kubenswrapper[4754]: I0105 21:37:58.574683 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jzn8f" event={"ID":"8bb55294-1a47-46dd-be97-d9565d1eb99c","Type":"ContainerStarted","Data":"ed68ee5d3899cafea146a8235bbcb940cd4cb3fcbdf76ad4f37296de75848917"} Jan 05 21:37:58 crc kubenswrapper[4754]: I0105 21:37:58.594080 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jzn8f" podStartSLOduration=4.650475216 podStartE2EDuration="8.59383592s" podCreationTimestamp="2026-01-05 21:37:50 +0000 UTC" firstStartedPulling="2026-01-05 21:37:53.52341612 +0000 UTC m=+5560.232599994" lastFinishedPulling="2026-01-05 21:37:57.466776824 +0000 UTC m=+5564.175960698" observedRunningTime="2026-01-05 21:37:58.590973493 +0000 UTC m=+5565.300157367" watchObservedRunningTime="2026-01-05 21:37:58.59383592 +0000 UTC m=+5565.303019794" Jan 05 21:38:01 crc kubenswrapper[4754]: I0105 21:38:01.659348 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jzn8f" Jan 05 21:38:01 crc kubenswrapper[4754]: I0105 21:38:01.659951 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jzn8f" Jan 05 21:38:01 crc kubenswrapper[4754]: I0105 21:38:01.723704 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jzn8f" Jan 05 21:38:11 crc kubenswrapper[4754]: I0105 21:38:11.718829 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jzn8f" Jan 05 21:38:11 crc kubenswrapper[4754]: I0105 21:38:11.781775 
4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jzn8f"] Jan 05 21:38:11 crc kubenswrapper[4754]: I0105 21:38:11.782010 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jzn8f" podUID="8bb55294-1a47-46dd-be97-d9565d1eb99c" containerName="registry-server" containerID="cri-o://ed68ee5d3899cafea146a8235bbcb940cd4cb3fcbdf76ad4f37296de75848917" gracePeriod=2 Jan 05 21:38:12 crc kubenswrapper[4754]: I0105 21:38:12.395203 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jzn8f" Jan 05 21:38:12 crc kubenswrapper[4754]: I0105 21:38:12.494614 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bb55294-1a47-46dd-be97-d9565d1eb99c-catalog-content\") pod \"8bb55294-1a47-46dd-be97-d9565d1eb99c\" (UID: \"8bb55294-1a47-46dd-be97-d9565d1eb99c\") " Jan 05 21:38:12 crc kubenswrapper[4754]: I0105 21:38:12.494793 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bb55294-1a47-46dd-be97-d9565d1eb99c-utilities\") pod \"8bb55294-1a47-46dd-be97-d9565d1eb99c\" (UID: \"8bb55294-1a47-46dd-be97-d9565d1eb99c\") " Jan 05 21:38:12 crc kubenswrapper[4754]: I0105 21:38:12.496011 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bb55294-1a47-46dd-be97-d9565d1eb99c-utilities" (OuterVolumeSpecName: "utilities") pod "8bb55294-1a47-46dd-be97-d9565d1eb99c" (UID: "8bb55294-1a47-46dd-be97-d9565d1eb99c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:38:12 crc kubenswrapper[4754]: I0105 21:38:12.536381 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bb55294-1a47-46dd-be97-d9565d1eb99c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8bb55294-1a47-46dd-be97-d9565d1eb99c" (UID: "8bb55294-1a47-46dd-be97-d9565d1eb99c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:38:12 crc kubenswrapper[4754]: I0105 21:38:12.596757 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrrjq\" (UniqueName: \"kubernetes.io/projected/8bb55294-1a47-46dd-be97-d9565d1eb99c-kube-api-access-rrrjq\") pod \"8bb55294-1a47-46dd-be97-d9565d1eb99c\" (UID: \"8bb55294-1a47-46dd-be97-d9565d1eb99c\") " Jan 05 21:38:12 crc kubenswrapper[4754]: I0105 21:38:12.598628 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bb55294-1a47-46dd-be97-d9565d1eb99c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 21:38:12 crc kubenswrapper[4754]: I0105 21:38:12.598665 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bb55294-1a47-46dd-be97-d9565d1eb99c-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 21:38:12 crc kubenswrapper[4754]: I0105 21:38:12.604351 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bb55294-1a47-46dd-be97-d9565d1eb99c-kube-api-access-rrrjq" (OuterVolumeSpecName: "kube-api-access-rrrjq") pod "8bb55294-1a47-46dd-be97-d9565d1eb99c" (UID: "8bb55294-1a47-46dd-be97-d9565d1eb99c"). InnerVolumeSpecName "kube-api-access-rrrjq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:38:12 crc kubenswrapper[4754]: I0105 21:38:12.699724 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrrjq\" (UniqueName: \"kubernetes.io/projected/8bb55294-1a47-46dd-be97-d9565d1eb99c-kube-api-access-rrrjq\") on node \"crc\" DevicePath \"\"" Jan 05 21:38:12 crc kubenswrapper[4754]: I0105 21:38:12.748653 4754 generic.go:334] "Generic (PLEG): container finished" podID="8bb55294-1a47-46dd-be97-d9565d1eb99c" containerID="ed68ee5d3899cafea146a8235bbcb940cd4cb3fcbdf76ad4f37296de75848917" exitCode=0 Jan 05 21:38:12 crc kubenswrapper[4754]: I0105 21:38:12.748698 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jzn8f" event={"ID":"8bb55294-1a47-46dd-be97-d9565d1eb99c","Type":"ContainerDied","Data":"ed68ee5d3899cafea146a8235bbcb940cd4cb3fcbdf76ad4f37296de75848917"} Jan 05 21:38:12 crc kubenswrapper[4754]: I0105 21:38:12.748726 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jzn8f" event={"ID":"8bb55294-1a47-46dd-be97-d9565d1eb99c","Type":"ContainerDied","Data":"1bfb8a3cd535df9600ec7a34ac4df056bbec88d71aa6a7613c2e02222ab68315"} Jan 05 21:38:12 crc kubenswrapper[4754]: I0105 21:38:12.748732 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jzn8f" Jan 05 21:38:12 crc kubenswrapper[4754]: I0105 21:38:12.748744 4754 scope.go:117] "RemoveContainer" containerID="ed68ee5d3899cafea146a8235bbcb940cd4cb3fcbdf76ad4f37296de75848917" Jan 05 21:38:12 crc kubenswrapper[4754]: I0105 21:38:12.775340 4754 scope.go:117] "RemoveContainer" containerID="bf87c132b7c567b093f196f42c5094ac8bd300b83cddc6783a5224daa8f24f30" Jan 05 21:38:12 crc kubenswrapper[4754]: I0105 21:38:12.788931 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jzn8f"] Jan 05 21:38:12 crc kubenswrapper[4754]: I0105 21:38:12.799616 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jzn8f"] Jan 05 21:38:12 crc kubenswrapper[4754]: I0105 21:38:12.809873 4754 scope.go:117] "RemoveContainer" containerID="70090ceebb586e323acc2b992534babb90e2609c0a7a8514313f0eda4345f564" Jan 05 21:38:12 crc kubenswrapper[4754]: I0105 21:38:12.865074 4754 scope.go:117] "RemoveContainer" containerID="ed68ee5d3899cafea146a8235bbcb940cd4cb3fcbdf76ad4f37296de75848917" Jan 05 21:38:12 crc kubenswrapper[4754]: E0105 21:38:12.865980 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed68ee5d3899cafea146a8235bbcb940cd4cb3fcbdf76ad4f37296de75848917\": container with ID starting with ed68ee5d3899cafea146a8235bbcb940cd4cb3fcbdf76ad4f37296de75848917 not found: ID does not exist" containerID="ed68ee5d3899cafea146a8235bbcb940cd4cb3fcbdf76ad4f37296de75848917" Jan 05 21:38:12 crc kubenswrapper[4754]: I0105 21:38:12.866193 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed68ee5d3899cafea146a8235bbcb940cd4cb3fcbdf76ad4f37296de75848917"} err="failed to get container status \"ed68ee5d3899cafea146a8235bbcb940cd4cb3fcbdf76ad4f37296de75848917\": rpc error: code = NotFound desc = could not find 
container \"ed68ee5d3899cafea146a8235bbcb940cd4cb3fcbdf76ad4f37296de75848917\": container with ID starting with ed68ee5d3899cafea146a8235bbcb940cd4cb3fcbdf76ad4f37296de75848917 not found: ID does not exist" Jan 05 21:38:12 crc kubenswrapper[4754]: I0105 21:38:12.866228 4754 scope.go:117] "RemoveContainer" containerID="bf87c132b7c567b093f196f42c5094ac8bd300b83cddc6783a5224daa8f24f30" Jan 05 21:38:12 crc kubenswrapper[4754]: E0105 21:38:12.866666 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf87c132b7c567b093f196f42c5094ac8bd300b83cddc6783a5224daa8f24f30\": container with ID starting with bf87c132b7c567b093f196f42c5094ac8bd300b83cddc6783a5224daa8f24f30 not found: ID does not exist" containerID="bf87c132b7c567b093f196f42c5094ac8bd300b83cddc6783a5224daa8f24f30" Jan 05 21:38:12 crc kubenswrapper[4754]: I0105 21:38:12.866709 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf87c132b7c567b093f196f42c5094ac8bd300b83cddc6783a5224daa8f24f30"} err="failed to get container status \"bf87c132b7c567b093f196f42c5094ac8bd300b83cddc6783a5224daa8f24f30\": rpc error: code = NotFound desc = could not find container \"bf87c132b7c567b093f196f42c5094ac8bd300b83cddc6783a5224daa8f24f30\": container with ID starting with bf87c132b7c567b093f196f42c5094ac8bd300b83cddc6783a5224daa8f24f30 not found: ID does not exist" Jan 05 21:38:12 crc kubenswrapper[4754]: I0105 21:38:12.866733 4754 scope.go:117] "RemoveContainer" containerID="70090ceebb586e323acc2b992534babb90e2609c0a7a8514313f0eda4345f564" Jan 05 21:38:12 crc kubenswrapper[4754]: E0105 21:38:12.867025 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70090ceebb586e323acc2b992534babb90e2609c0a7a8514313f0eda4345f564\": container with ID starting with 70090ceebb586e323acc2b992534babb90e2609c0a7a8514313f0eda4345f564 not found: ID does 
not exist" containerID="70090ceebb586e323acc2b992534babb90e2609c0a7a8514313f0eda4345f564" Jan 05 21:38:12 crc kubenswrapper[4754]: I0105 21:38:12.867062 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70090ceebb586e323acc2b992534babb90e2609c0a7a8514313f0eda4345f564"} err="failed to get container status \"70090ceebb586e323acc2b992534babb90e2609c0a7a8514313f0eda4345f564\": rpc error: code = NotFound desc = could not find container \"70090ceebb586e323acc2b992534babb90e2609c0a7a8514313f0eda4345f564\": container with ID starting with 70090ceebb586e323acc2b992534babb90e2609c0a7a8514313f0eda4345f564 not found: ID does not exist" Jan 05 21:38:13 crc kubenswrapper[4754]: I0105 21:38:13.621630 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bb55294-1a47-46dd-be97-d9565d1eb99c" path="/var/lib/kubelet/pods/8bb55294-1a47-46dd-be97-d9565d1eb99c/volumes" Jan 05 21:38:18 crc kubenswrapper[4754]: I0105 21:38:18.109358 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:38:18 crc kubenswrapper[4754]: I0105 21:38:18.109927 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:38:48 crc kubenswrapper[4754]: I0105 21:38:48.110143 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Jan 05 21:38:48 crc kubenswrapper[4754]: I0105 21:38:48.111746 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:38:50 crc kubenswrapper[4754]: I0105 21:38:50.360545 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m2x5k"] Jan 05 21:38:50 crc kubenswrapper[4754]: E0105 21:38:50.361650 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bb55294-1a47-46dd-be97-d9565d1eb99c" containerName="registry-server" Jan 05 21:38:50 crc kubenswrapper[4754]: I0105 21:38:50.361673 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bb55294-1a47-46dd-be97-d9565d1eb99c" containerName="registry-server" Jan 05 21:38:50 crc kubenswrapper[4754]: E0105 21:38:50.361723 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bb55294-1a47-46dd-be97-d9565d1eb99c" containerName="extract-content" Jan 05 21:38:50 crc kubenswrapper[4754]: I0105 21:38:50.361737 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bb55294-1a47-46dd-be97-d9565d1eb99c" containerName="extract-content" Jan 05 21:38:50 crc kubenswrapper[4754]: E0105 21:38:50.361807 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bb55294-1a47-46dd-be97-d9565d1eb99c" containerName="extract-utilities" Jan 05 21:38:50 crc kubenswrapper[4754]: I0105 21:38:50.361821 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bb55294-1a47-46dd-be97-d9565d1eb99c" containerName="extract-utilities" Jan 05 21:38:50 crc kubenswrapper[4754]: I0105 21:38:50.362761 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bb55294-1a47-46dd-be97-d9565d1eb99c" containerName="registry-server" Jan 
05 21:38:50 crc kubenswrapper[4754]: I0105 21:38:50.367199 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m2x5k" Jan 05 21:38:50 crc kubenswrapper[4754]: I0105 21:38:50.383871 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m2x5k"] Jan 05 21:38:50 crc kubenswrapper[4754]: I0105 21:38:50.482196 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba75be77-08d4-45aa-9d94-06b34121179f-catalog-content\") pod \"redhat-marketplace-m2x5k\" (UID: \"ba75be77-08d4-45aa-9d94-06b34121179f\") " pod="openshift-marketplace/redhat-marketplace-m2x5k" Jan 05 21:38:50 crc kubenswrapper[4754]: I0105 21:38:50.482537 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrpgw\" (UniqueName: \"kubernetes.io/projected/ba75be77-08d4-45aa-9d94-06b34121179f-kube-api-access-hrpgw\") pod \"redhat-marketplace-m2x5k\" (UID: \"ba75be77-08d4-45aa-9d94-06b34121179f\") " pod="openshift-marketplace/redhat-marketplace-m2x5k" Jan 05 21:38:50 crc kubenswrapper[4754]: I0105 21:38:50.482781 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba75be77-08d4-45aa-9d94-06b34121179f-utilities\") pod \"redhat-marketplace-m2x5k\" (UID: \"ba75be77-08d4-45aa-9d94-06b34121179f\") " pod="openshift-marketplace/redhat-marketplace-m2x5k" Jan 05 21:38:50 crc kubenswrapper[4754]: I0105 21:38:50.585012 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba75be77-08d4-45aa-9d94-06b34121179f-catalog-content\") pod \"redhat-marketplace-m2x5k\" (UID: \"ba75be77-08d4-45aa-9d94-06b34121179f\") " pod="openshift-marketplace/redhat-marketplace-m2x5k" 
Jan 05 21:38:50 crc kubenswrapper[4754]: I0105 21:38:50.585080 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrpgw\" (UniqueName: \"kubernetes.io/projected/ba75be77-08d4-45aa-9d94-06b34121179f-kube-api-access-hrpgw\") pod \"redhat-marketplace-m2x5k\" (UID: \"ba75be77-08d4-45aa-9d94-06b34121179f\") " pod="openshift-marketplace/redhat-marketplace-m2x5k" Jan 05 21:38:50 crc kubenswrapper[4754]: I0105 21:38:50.585345 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba75be77-08d4-45aa-9d94-06b34121179f-utilities\") pod \"redhat-marketplace-m2x5k\" (UID: \"ba75be77-08d4-45aa-9d94-06b34121179f\") " pod="openshift-marketplace/redhat-marketplace-m2x5k" Jan 05 21:38:50 crc kubenswrapper[4754]: I0105 21:38:50.585785 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba75be77-08d4-45aa-9d94-06b34121179f-utilities\") pod \"redhat-marketplace-m2x5k\" (UID: \"ba75be77-08d4-45aa-9d94-06b34121179f\") " pod="openshift-marketplace/redhat-marketplace-m2x5k" Jan 05 21:38:50 crc kubenswrapper[4754]: I0105 21:38:50.585923 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba75be77-08d4-45aa-9d94-06b34121179f-catalog-content\") pod \"redhat-marketplace-m2x5k\" (UID: \"ba75be77-08d4-45aa-9d94-06b34121179f\") " pod="openshift-marketplace/redhat-marketplace-m2x5k" Jan 05 21:38:51 crc kubenswrapper[4754]: I0105 21:38:51.059366 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrpgw\" (UniqueName: \"kubernetes.io/projected/ba75be77-08d4-45aa-9d94-06b34121179f-kube-api-access-hrpgw\") pod \"redhat-marketplace-m2x5k\" (UID: \"ba75be77-08d4-45aa-9d94-06b34121179f\") " pod="openshift-marketplace/redhat-marketplace-m2x5k" Jan 05 21:38:51 crc kubenswrapper[4754]: I0105 
21:38:51.306406 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m2x5k" Jan 05 21:38:51 crc kubenswrapper[4754]: I0105 21:38:51.785909 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m2x5k"] Jan 05 21:38:51 crc kubenswrapper[4754]: W0105 21:38:51.788109 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba75be77_08d4_45aa_9d94_06b34121179f.slice/crio-9943b5551457bd6d5807246e8cc8adb1f14fb3bc8df8d14b6eb85ade241f4b0a WatchSource:0}: Error finding container 9943b5551457bd6d5807246e8cc8adb1f14fb3bc8df8d14b6eb85ade241f4b0a: Status 404 returned error can't find the container with id 9943b5551457bd6d5807246e8cc8adb1f14fb3bc8df8d14b6eb85ade241f4b0a Jan 05 21:38:52 crc kubenswrapper[4754]: I0105 21:38:52.170535 4754 generic.go:334] "Generic (PLEG): container finished" podID="ba75be77-08d4-45aa-9d94-06b34121179f" containerID="b842cfdc3644d6abd53788475149fd882c5e6a04683b3e695177aa2e12a3a678" exitCode=0 Jan 05 21:38:52 crc kubenswrapper[4754]: I0105 21:38:52.170640 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m2x5k" event={"ID":"ba75be77-08d4-45aa-9d94-06b34121179f","Type":"ContainerDied","Data":"b842cfdc3644d6abd53788475149fd882c5e6a04683b3e695177aa2e12a3a678"} Jan 05 21:38:52 crc kubenswrapper[4754]: I0105 21:38:52.170828 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m2x5k" event={"ID":"ba75be77-08d4-45aa-9d94-06b34121179f","Type":"ContainerStarted","Data":"9943b5551457bd6d5807246e8cc8adb1f14fb3bc8df8d14b6eb85ade241f4b0a"} Jan 05 21:38:54 crc kubenswrapper[4754]: I0105 21:38:54.194858 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m2x5k" 
event={"ID":"ba75be77-08d4-45aa-9d94-06b34121179f","Type":"ContainerStarted","Data":"fd32d4587404c013171231b67652f27a277937bdd8742a1101164b727d1c07ea"} Jan 05 21:38:55 crc kubenswrapper[4754]: I0105 21:38:55.209987 4754 generic.go:334] "Generic (PLEG): container finished" podID="ba75be77-08d4-45aa-9d94-06b34121179f" containerID="fd32d4587404c013171231b67652f27a277937bdd8742a1101164b727d1c07ea" exitCode=0 Jan 05 21:38:55 crc kubenswrapper[4754]: I0105 21:38:55.210058 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m2x5k" event={"ID":"ba75be77-08d4-45aa-9d94-06b34121179f","Type":"ContainerDied","Data":"fd32d4587404c013171231b67652f27a277937bdd8742a1101164b727d1c07ea"} Jan 05 21:38:56 crc kubenswrapper[4754]: I0105 21:38:56.228200 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m2x5k" event={"ID":"ba75be77-08d4-45aa-9d94-06b34121179f","Type":"ContainerStarted","Data":"30e8ae6e88d2ecb0cc2220bf9d1a2e5e5a20cd02442164e64b3a698aee923863"} Jan 05 21:38:56 crc kubenswrapper[4754]: I0105 21:38:56.259400 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m2x5k" podStartSLOduration=2.787165173 podStartE2EDuration="6.259377663s" podCreationTimestamp="2026-01-05 21:38:50 +0000 UTC" firstStartedPulling="2026-01-05 21:38:52.172817811 +0000 UTC m=+5618.882001685" lastFinishedPulling="2026-01-05 21:38:55.645030291 +0000 UTC m=+5622.354214175" observedRunningTime="2026-01-05 21:38:56.247699758 +0000 UTC m=+5622.956883632" watchObservedRunningTime="2026-01-05 21:38:56.259377663 +0000 UTC m=+5622.968561537" Jan 05 21:39:01 crc kubenswrapper[4754]: I0105 21:39:01.307166 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m2x5k" Jan 05 21:39:01 crc kubenswrapper[4754]: I0105 21:39:01.307729 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m2x5k" Jan 05 21:39:01 crc kubenswrapper[4754]: I0105 21:39:01.355042 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m2x5k" Jan 05 21:39:02 crc kubenswrapper[4754]: I0105 21:39:02.358815 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m2x5k" Jan 05 21:39:02 crc kubenswrapper[4754]: I0105 21:39:02.406277 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m2x5k"] Jan 05 21:39:04 crc kubenswrapper[4754]: I0105 21:39:04.310801 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-m2x5k" podUID="ba75be77-08d4-45aa-9d94-06b34121179f" containerName="registry-server" containerID="cri-o://30e8ae6e88d2ecb0cc2220bf9d1a2e5e5a20cd02442164e64b3a698aee923863" gracePeriod=2 Jan 05 21:39:05 crc kubenswrapper[4754]: I0105 21:39:05.327751 4754 generic.go:334] "Generic (PLEG): container finished" podID="ba75be77-08d4-45aa-9d94-06b34121179f" containerID="30e8ae6e88d2ecb0cc2220bf9d1a2e5e5a20cd02442164e64b3a698aee923863" exitCode=0 Jan 05 21:39:05 crc kubenswrapper[4754]: I0105 21:39:05.327830 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m2x5k" event={"ID":"ba75be77-08d4-45aa-9d94-06b34121179f","Type":"ContainerDied","Data":"30e8ae6e88d2ecb0cc2220bf9d1a2e5e5a20cd02442164e64b3a698aee923863"} Jan 05 21:39:05 crc kubenswrapper[4754]: I0105 21:39:05.328544 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m2x5k" event={"ID":"ba75be77-08d4-45aa-9d94-06b34121179f","Type":"ContainerDied","Data":"9943b5551457bd6d5807246e8cc8adb1f14fb3bc8df8d14b6eb85ade241f4b0a"} Jan 05 21:39:05 crc kubenswrapper[4754]: I0105 21:39:05.328571 4754 pod_container_deletor.go:80] "Container 
not found in pod's containers" containerID="9943b5551457bd6d5807246e8cc8adb1f14fb3bc8df8d14b6eb85ade241f4b0a" Jan 05 21:39:05 crc kubenswrapper[4754]: I0105 21:39:05.406468 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m2x5k" Jan 05 21:39:05 crc kubenswrapper[4754]: I0105 21:39:05.520420 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrpgw\" (UniqueName: \"kubernetes.io/projected/ba75be77-08d4-45aa-9d94-06b34121179f-kube-api-access-hrpgw\") pod \"ba75be77-08d4-45aa-9d94-06b34121179f\" (UID: \"ba75be77-08d4-45aa-9d94-06b34121179f\") " Jan 05 21:39:05 crc kubenswrapper[4754]: I0105 21:39:05.520744 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba75be77-08d4-45aa-9d94-06b34121179f-catalog-content\") pod \"ba75be77-08d4-45aa-9d94-06b34121179f\" (UID: \"ba75be77-08d4-45aa-9d94-06b34121179f\") " Jan 05 21:39:05 crc kubenswrapper[4754]: I0105 21:39:05.520868 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba75be77-08d4-45aa-9d94-06b34121179f-utilities\") pod \"ba75be77-08d4-45aa-9d94-06b34121179f\" (UID: \"ba75be77-08d4-45aa-9d94-06b34121179f\") " Jan 05 21:39:05 crc kubenswrapper[4754]: I0105 21:39:05.523725 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba75be77-08d4-45aa-9d94-06b34121179f-utilities" (OuterVolumeSpecName: "utilities") pod "ba75be77-08d4-45aa-9d94-06b34121179f" (UID: "ba75be77-08d4-45aa-9d94-06b34121179f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:39:05 crc kubenswrapper[4754]: I0105 21:39:05.542991 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba75be77-08d4-45aa-9d94-06b34121179f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ba75be77-08d4-45aa-9d94-06b34121179f" (UID: "ba75be77-08d4-45aa-9d94-06b34121179f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:39:05 crc kubenswrapper[4754]: I0105 21:39:05.623757 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba75be77-08d4-45aa-9d94-06b34121179f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 21:39:05 crc kubenswrapper[4754]: I0105 21:39:05.623787 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba75be77-08d4-45aa-9d94-06b34121179f-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 21:39:06 crc kubenswrapper[4754]: I0105 21:39:06.052244 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba75be77-08d4-45aa-9d94-06b34121179f-kube-api-access-hrpgw" (OuterVolumeSpecName: "kube-api-access-hrpgw") pod "ba75be77-08d4-45aa-9d94-06b34121179f" (UID: "ba75be77-08d4-45aa-9d94-06b34121179f"). InnerVolumeSpecName "kube-api-access-hrpgw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:39:06 crc kubenswrapper[4754]: I0105 21:39:06.137185 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrpgw\" (UniqueName: \"kubernetes.io/projected/ba75be77-08d4-45aa-9d94-06b34121179f-kube-api-access-hrpgw\") on node \"crc\" DevicePath \"\"" Jan 05 21:39:06 crc kubenswrapper[4754]: I0105 21:39:06.339947 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m2x5k" Jan 05 21:39:06 crc kubenswrapper[4754]: I0105 21:39:06.375598 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m2x5k"] Jan 05 21:39:06 crc kubenswrapper[4754]: I0105 21:39:06.390851 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-m2x5k"] Jan 05 21:39:07 crc kubenswrapper[4754]: I0105 21:39:07.600068 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba75be77-08d4-45aa-9d94-06b34121179f" path="/var/lib/kubelet/pods/ba75be77-08d4-45aa-9d94-06b34121179f/volumes" Jan 05 21:39:18 crc kubenswrapper[4754]: I0105 21:39:18.109363 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:39:18 crc kubenswrapper[4754]: I0105 21:39:18.109877 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:39:18 crc kubenswrapper[4754]: I0105 21:39:18.109931 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" Jan 05 21:39:18 crc kubenswrapper[4754]: I0105 21:39:18.110959 4754 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c1852f8ccbc2c749039fa17988ca08439ce35c6cee4df91b94564cd0b2182e4c"} pod="openshift-machine-config-operator/machine-config-daemon-pkzls" containerMessage="Container 
machine-config-daemon failed liveness probe, will be restarted" Jan 05 21:39:18 crc kubenswrapper[4754]: I0105 21:39:18.111040 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" containerID="cri-o://c1852f8ccbc2c749039fa17988ca08439ce35c6cee4df91b94564cd0b2182e4c" gracePeriod=600 Jan 05 21:39:18 crc kubenswrapper[4754]: E0105 21:39:18.236779 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:39:18 crc kubenswrapper[4754]: I0105 21:39:18.497863 4754 generic.go:334] "Generic (PLEG): container finished" podID="1ce145f2-f010-4086-963c-23e68ff9e280" containerID="c1852f8ccbc2c749039fa17988ca08439ce35c6cee4df91b94564cd0b2182e4c" exitCode=0 Jan 05 21:39:18 crc kubenswrapper[4754]: I0105 21:39:18.497947 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" event={"ID":"1ce145f2-f010-4086-963c-23e68ff9e280","Type":"ContainerDied","Data":"c1852f8ccbc2c749039fa17988ca08439ce35c6cee4df91b94564cd0b2182e4c"} Jan 05 21:39:18 crc kubenswrapper[4754]: I0105 21:39:18.498011 4754 scope.go:117] "RemoveContainer" containerID="dfed6d366f02b34fc9de6efc5fc33958d5ae0df54ad0b1592d25df209f0e1309" Jan 05 21:39:18 crc kubenswrapper[4754]: I0105 21:39:18.499896 4754 scope.go:117] "RemoveContainer" containerID="c1852f8ccbc2c749039fa17988ca08439ce35c6cee4df91b94564cd0b2182e4c" Jan 05 21:39:18 crc kubenswrapper[4754]: E0105 21:39:18.501134 4754 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:39:29 crc kubenswrapper[4754]: I0105 21:39:29.589207 4754 scope.go:117] "RemoveContainer" containerID="c1852f8ccbc2c749039fa17988ca08439ce35c6cee4df91b94564cd0b2182e4c" Jan 05 21:39:29 crc kubenswrapper[4754]: E0105 21:39:29.590022 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:39:42 crc kubenswrapper[4754]: I0105 21:39:42.589244 4754 scope.go:117] "RemoveContainer" containerID="c1852f8ccbc2c749039fa17988ca08439ce35c6cee4df91b94564cd0b2182e4c" Jan 05 21:39:42 crc kubenswrapper[4754]: E0105 21:39:42.590680 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:39:55 crc kubenswrapper[4754]: I0105 21:39:55.588274 4754 scope.go:117] "RemoveContainer" containerID="c1852f8ccbc2c749039fa17988ca08439ce35c6cee4df91b94564cd0b2182e4c" Jan 05 21:39:55 crc kubenswrapper[4754]: E0105 21:39:55.589274 4754 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:40:07 crc kubenswrapper[4754]: I0105 21:40:07.590178 4754 scope.go:117] "RemoveContainer" containerID="c1852f8ccbc2c749039fa17988ca08439ce35c6cee4df91b94564cd0b2182e4c" Jan 05 21:40:07 crc kubenswrapper[4754]: E0105 21:40:07.591601 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:40:18 crc kubenswrapper[4754]: I0105 21:40:18.589390 4754 scope.go:117] "RemoveContainer" containerID="c1852f8ccbc2c749039fa17988ca08439ce35c6cee4df91b94564cd0b2182e4c" Jan 05 21:40:18 crc kubenswrapper[4754]: E0105 21:40:18.590806 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:40:33 crc kubenswrapper[4754]: I0105 21:40:33.596804 4754 scope.go:117] "RemoveContainer" containerID="c1852f8ccbc2c749039fa17988ca08439ce35c6cee4df91b94564cd0b2182e4c" Jan 05 21:40:33 crc kubenswrapper[4754]: E0105 
21:40:33.597730 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:40:45 crc kubenswrapper[4754]: I0105 21:40:45.588045 4754 scope.go:117] "RemoveContainer" containerID="c1852f8ccbc2c749039fa17988ca08439ce35c6cee4df91b94564cd0b2182e4c" Jan 05 21:40:45 crc kubenswrapper[4754]: E0105 21:40:45.589028 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:40:56 crc kubenswrapper[4754]: I0105 21:40:56.588950 4754 scope.go:117] "RemoveContainer" containerID="c1852f8ccbc2c749039fa17988ca08439ce35c6cee4df91b94564cd0b2182e4c" Jan 05 21:40:56 crc kubenswrapper[4754]: E0105 21:40:56.589743 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:41:08 crc kubenswrapper[4754]: I0105 21:41:08.588245 4754 scope.go:117] "RemoveContainer" containerID="c1852f8ccbc2c749039fa17988ca08439ce35c6cee4df91b94564cd0b2182e4c" Jan 05 21:41:08 crc 
kubenswrapper[4754]: E0105 21:41:08.588998 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:41:21 crc kubenswrapper[4754]: I0105 21:41:21.590431 4754 scope.go:117] "RemoveContainer" containerID="c1852f8ccbc2c749039fa17988ca08439ce35c6cee4df91b94564cd0b2182e4c" Jan 05 21:41:21 crc kubenswrapper[4754]: E0105 21:41:21.591494 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:41:32 crc kubenswrapper[4754]: I0105 21:41:32.588772 4754 scope.go:117] "RemoveContainer" containerID="c1852f8ccbc2c749039fa17988ca08439ce35c6cee4df91b94564cd0b2182e4c" Jan 05 21:41:32 crc kubenswrapper[4754]: E0105 21:41:32.589684 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:41:45 crc kubenswrapper[4754]: I0105 21:41:45.588892 4754 scope.go:117] "RemoveContainer" containerID="c1852f8ccbc2c749039fa17988ca08439ce35c6cee4df91b94564cd0b2182e4c" Jan 
05 21:41:45 crc kubenswrapper[4754]: E0105 21:41:45.589799 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:41:49 crc kubenswrapper[4754]: I0105 21:41:49.051930 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rq6zt"] Jan 05 21:41:49 crc kubenswrapper[4754]: E0105 21:41:49.053072 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba75be77-08d4-45aa-9d94-06b34121179f" containerName="extract-utilities" Jan 05 21:41:49 crc kubenswrapper[4754]: I0105 21:41:49.053092 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba75be77-08d4-45aa-9d94-06b34121179f" containerName="extract-utilities" Jan 05 21:41:49 crc kubenswrapper[4754]: E0105 21:41:49.053126 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba75be77-08d4-45aa-9d94-06b34121179f" containerName="extract-content" Jan 05 21:41:49 crc kubenswrapper[4754]: I0105 21:41:49.053136 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba75be77-08d4-45aa-9d94-06b34121179f" containerName="extract-content" Jan 05 21:41:49 crc kubenswrapper[4754]: E0105 21:41:49.053166 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba75be77-08d4-45aa-9d94-06b34121179f" containerName="registry-server" Jan 05 21:41:49 crc kubenswrapper[4754]: I0105 21:41:49.053174 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba75be77-08d4-45aa-9d94-06b34121179f" containerName="registry-server" Jan 05 21:41:49 crc kubenswrapper[4754]: I0105 21:41:49.053497 4754 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ba75be77-08d4-45aa-9d94-06b34121179f" containerName="registry-server" Jan 05 21:41:49 crc kubenswrapper[4754]: I0105 21:41:49.055591 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rq6zt" Jan 05 21:41:49 crc kubenswrapper[4754]: I0105 21:41:49.064586 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rq6zt"] Jan 05 21:41:49 crc kubenswrapper[4754]: I0105 21:41:49.239062 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkxkx\" (UniqueName: \"kubernetes.io/projected/abe83f88-aca5-4827-add5-5e9732c508cc-kube-api-access-bkxkx\") pod \"certified-operators-rq6zt\" (UID: \"abe83f88-aca5-4827-add5-5e9732c508cc\") " pod="openshift-marketplace/certified-operators-rq6zt" Jan 05 21:41:49 crc kubenswrapper[4754]: I0105 21:41:49.239263 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abe83f88-aca5-4827-add5-5e9732c508cc-catalog-content\") pod \"certified-operators-rq6zt\" (UID: \"abe83f88-aca5-4827-add5-5e9732c508cc\") " pod="openshift-marketplace/certified-operators-rq6zt" Jan 05 21:41:49 crc kubenswrapper[4754]: I0105 21:41:49.239714 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abe83f88-aca5-4827-add5-5e9732c508cc-utilities\") pod \"certified-operators-rq6zt\" (UID: \"abe83f88-aca5-4827-add5-5e9732c508cc\") " pod="openshift-marketplace/certified-operators-rq6zt" Jan 05 21:41:49 crc kubenswrapper[4754]: I0105 21:41:49.342114 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abe83f88-aca5-4827-add5-5e9732c508cc-catalog-content\") pod \"certified-operators-rq6zt\" (UID: 
\"abe83f88-aca5-4827-add5-5e9732c508cc\") " pod="openshift-marketplace/certified-operators-rq6zt" Jan 05 21:41:49 crc kubenswrapper[4754]: I0105 21:41:49.342254 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abe83f88-aca5-4827-add5-5e9732c508cc-utilities\") pod \"certified-operators-rq6zt\" (UID: \"abe83f88-aca5-4827-add5-5e9732c508cc\") " pod="openshift-marketplace/certified-operators-rq6zt" Jan 05 21:41:49 crc kubenswrapper[4754]: I0105 21:41:49.342356 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkxkx\" (UniqueName: \"kubernetes.io/projected/abe83f88-aca5-4827-add5-5e9732c508cc-kube-api-access-bkxkx\") pod \"certified-operators-rq6zt\" (UID: \"abe83f88-aca5-4827-add5-5e9732c508cc\") " pod="openshift-marketplace/certified-operators-rq6zt" Jan 05 21:41:49 crc kubenswrapper[4754]: I0105 21:41:49.343054 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abe83f88-aca5-4827-add5-5e9732c508cc-catalog-content\") pod \"certified-operators-rq6zt\" (UID: \"abe83f88-aca5-4827-add5-5e9732c508cc\") " pod="openshift-marketplace/certified-operators-rq6zt" Jan 05 21:41:49 crc kubenswrapper[4754]: I0105 21:41:49.343450 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abe83f88-aca5-4827-add5-5e9732c508cc-utilities\") pod \"certified-operators-rq6zt\" (UID: \"abe83f88-aca5-4827-add5-5e9732c508cc\") " pod="openshift-marketplace/certified-operators-rq6zt" Jan 05 21:41:49 crc kubenswrapper[4754]: I0105 21:41:49.372952 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkxkx\" (UniqueName: \"kubernetes.io/projected/abe83f88-aca5-4827-add5-5e9732c508cc-kube-api-access-bkxkx\") pod \"certified-operators-rq6zt\" (UID: 
\"abe83f88-aca5-4827-add5-5e9732c508cc\") " pod="openshift-marketplace/certified-operators-rq6zt" Jan 05 21:41:49 crc kubenswrapper[4754]: I0105 21:41:49.381071 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rq6zt" Jan 05 21:41:49 crc kubenswrapper[4754]: I0105 21:41:49.973614 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rq6zt"] Jan 05 21:41:50 crc kubenswrapper[4754]: I0105 21:41:50.472554 4754 generic.go:334] "Generic (PLEG): container finished" podID="abe83f88-aca5-4827-add5-5e9732c508cc" containerID="c705cee3a4f2947168d0f260e944abb1416058fb91b31578404a31dfecd65376" exitCode=0 Jan 05 21:41:50 crc kubenswrapper[4754]: I0105 21:41:50.472833 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rq6zt" event={"ID":"abe83f88-aca5-4827-add5-5e9732c508cc","Type":"ContainerDied","Data":"c705cee3a4f2947168d0f260e944abb1416058fb91b31578404a31dfecd65376"} Jan 05 21:41:50 crc kubenswrapper[4754]: I0105 21:41:50.473273 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rq6zt" event={"ID":"abe83f88-aca5-4827-add5-5e9732c508cc","Type":"ContainerStarted","Data":"a02ebefb57c655fc1b4144d49d4cae1dfbf5308f96bddb31df9d9d49aec0afc3"} Jan 05 21:41:52 crc kubenswrapper[4754]: I0105 21:41:52.497899 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rq6zt" event={"ID":"abe83f88-aca5-4827-add5-5e9732c508cc","Type":"ContainerStarted","Data":"31a4a93f9df4485b0f0e9764b3927e19cc60635bae821656c90dc58b9a4709dd"} Jan 05 21:41:53 crc kubenswrapper[4754]: I0105 21:41:53.927024 4754 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.027404709s: [/var/lib/containers/storage/overlay/3966b286344dc527d8a8d1d7270067409fcad380f915847618a275d3df05a000/diff 
/var/log/pods/openstack_ceilometer-0_68c442e4-0c24-4351-84b7-ccda8b09ea2c/ceilometer-notification-agent/0.log]; will not log again for this container unless duration exceeds 2s Jan 05 21:41:53 crc kubenswrapper[4754]: I0105 21:41:53.977064 4754 generic.go:334] "Generic (PLEG): container finished" podID="abe83f88-aca5-4827-add5-5e9732c508cc" containerID="31a4a93f9df4485b0f0e9764b3927e19cc60635bae821656c90dc58b9a4709dd" exitCode=0 Jan 05 21:41:53 crc kubenswrapper[4754]: I0105 21:41:53.984782 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rq6zt" event={"ID":"abe83f88-aca5-4827-add5-5e9732c508cc","Type":"ContainerDied","Data":"31a4a93f9df4485b0f0e9764b3927e19cc60635bae821656c90dc58b9a4709dd"} Jan 05 21:41:54 crc kubenswrapper[4754]: I0105 21:41:54.991135 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rq6zt" event={"ID":"abe83f88-aca5-4827-add5-5e9732c508cc","Type":"ContainerStarted","Data":"866c5ce58f47b60c27e9d8a0978fdc80046d8463ab6a3f5c86a712ff62d45fa0"} Jan 05 21:41:55 crc kubenswrapper[4754]: I0105 21:41:55.023030 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rq6zt" podStartSLOduration=2.059959835 podStartE2EDuration="6.023006479s" podCreationTimestamp="2026-01-05 21:41:49 +0000 UTC" firstStartedPulling="2026-01-05 21:41:50.476180366 +0000 UTC m=+5797.185364240" lastFinishedPulling="2026-01-05 21:41:54.439227 +0000 UTC m=+5801.148410884" observedRunningTime="2026-01-05 21:41:55.008545469 +0000 UTC m=+5801.717729343" watchObservedRunningTime="2026-01-05 21:41:55.023006479 +0000 UTC m=+5801.732190353" Jan 05 21:41:59 crc kubenswrapper[4754]: I0105 21:41:59.382127 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rq6zt" Jan 05 21:41:59 crc kubenswrapper[4754]: I0105 21:41:59.382716 4754 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rq6zt" Jan 05 21:41:59 crc kubenswrapper[4754]: I0105 21:41:59.454990 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rq6zt" Jan 05 21:41:59 crc kubenswrapper[4754]: I0105 21:41:59.590093 4754 scope.go:117] "RemoveContainer" containerID="c1852f8ccbc2c749039fa17988ca08439ce35c6cee4df91b94564cd0b2182e4c" Jan 05 21:41:59 crc kubenswrapper[4754]: E0105 21:41:59.596196 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:42:00 crc kubenswrapper[4754]: I0105 21:42:00.095742 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rq6zt" Jan 05 21:42:00 crc kubenswrapper[4754]: I0105 21:42:00.144949 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rq6zt"] Jan 05 21:42:02 crc kubenswrapper[4754]: I0105 21:42:02.069539 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rq6zt" podUID="abe83f88-aca5-4827-add5-5e9732c508cc" containerName="registry-server" containerID="cri-o://866c5ce58f47b60c27e9d8a0978fdc80046d8463ab6a3f5c86a712ff62d45fa0" gracePeriod=2 Jan 05 21:42:02 crc kubenswrapper[4754]: I0105 21:42:02.595608 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rq6zt" Jan 05 21:42:02 crc kubenswrapper[4754]: I0105 21:42:02.685674 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkxkx\" (UniqueName: \"kubernetes.io/projected/abe83f88-aca5-4827-add5-5e9732c508cc-kube-api-access-bkxkx\") pod \"abe83f88-aca5-4827-add5-5e9732c508cc\" (UID: \"abe83f88-aca5-4827-add5-5e9732c508cc\") " Jan 05 21:42:02 crc kubenswrapper[4754]: I0105 21:42:02.685858 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abe83f88-aca5-4827-add5-5e9732c508cc-catalog-content\") pod \"abe83f88-aca5-4827-add5-5e9732c508cc\" (UID: \"abe83f88-aca5-4827-add5-5e9732c508cc\") " Jan 05 21:42:02 crc kubenswrapper[4754]: I0105 21:42:02.686029 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abe83f88-aca5-4827-add5-5e9732c508cc-utilities\") pod \"abe83f88-aca5-4827-add5-5e9732c508cc\" (UID: \"abe83f88-aca5-4827-add5-5e9732c508cc\") " Jan 05 21:42:02 crc kubenswrapper[4754]: I0105 21:42:02.688390 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abe83f88-aca5-4827-add5-5e9732c508cc-utilities" (OuterVolumeSpecName: "utilities") pod "abe83f88-aca5-4827-add5-5e9732c508cc" (UID: "abe83f88-aca5-4827-add5-5e9732c508cc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:42:02 crc kubenswrapper[4754]: I0105 21:42:02.696116 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abe83f88-aca5-4827-add5-5e9732c508cc-kube-api-access-bkxkx" (OuterVolumeSpecName: "kube-api-access-bkxkx") pod "abe83f88-aca5-4827-add5-5e9732c508cc" (UID: "abe83f88-aca5-4827-add5-5e9732c508cc"). InnerVolumeSpecName "kube-api-access-bkxkx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:42:02 crc kubenswrapper[4754]: I0105 21:42:02.750629 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abe83f88-aca5-4827-add5-5e9732c508cc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "abe83f88-aca5-4827-add5-5e9732c508cc" (UID: "abe83f88-aca5-4827-add5-5e9732c508cc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:42:02 crc kubenswrapper[4754]: I0105 21:42:02.788731 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abe83f88-aca5-4827-add5-5e9732c508cc-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 21:42:02 crc kubenswrapper[4754]: I0105 21:42:02.788797 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkxkx\" (UniqueName: \"kubernetes.io/projected/abe83f88-aca5-4827-add5-5e9732c508cc-kube-api-access-bkxkx\") on node \"crc\" DevicePath \"\"" Jan 05 21:42:02 crc kubenswrapper[4754]: I0105 21:42:02.788815 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abe83f88-aca5-4827-add5-5e9732c508cc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 21:42:03 crc kubenswrapper[4754]: I0105 21:42:03.088418 4754 generic.go:334] "Generic (PLEG): container finished" podID="abe83f88-aca5-4827-add5-5e9732c508cc" containerID="866c5ce58f47b60c27e9d8a0978fdc80046d8463ab6a3f5c86a712ff62d45fa0" exitCode=0 Jan 05 21:42:03 crc kubenswrapper[4754]: I0105 21:42:03.088507 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rq6zt" event={"ID":"abe83f88-aca5-4827-add5-5e9732c508cc","Type":"ContainerDied","Data":"866c5ce58f47b60c27e9d8a0978fdc80046d8463ab6a3f5c86a712ff62d45fa0"} Jan 05 21:42:03 crc kubenswrapper[4754]: I0105 21:42:03.088539 4754 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-rq6zt" event={"ID":"abe83f88-aca5-4827-add5-5e9732c508cc","Type":"ContainerDied","Data":"a02ebefb57c655fc1b4144d49d4cae1dfbf5308f96bddb31df9d9d49aec0afc3"} Jan 05 21:42:03 crc kubenswrapper[4754]: I0105 21:42:03.088560 4754 scope.go:117] "RemoveContainer" containerID="866c5ce58f47b60c27e9d8a0978fdc80046d8463ab6a3f5c86a712ff62d45fa0" Jan 05 21:42:03 crc kubenswrapper[4754]: I0105 21:42:03.088833 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rq6zt" Jan 05 21:42:03 crc kubenswrapper[4754]: I0105 21:42:03.134772 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rq6zt"] Jan 05 21:42:03 crc kubenswrapper[4754]: I0105 21:42:03.135858 4754 scope.go:117] "RemoveContainer" containerID="31a4a93f9df4485b0f0e9764b3927e19cc60635bae821656c90dc58b9a4709dd" Jan 05 21:42:03 crc kubenswrapper[4754]: I0105 21:42:03.149085 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rq6zt"] Jan 05 21:42:03 crc kubenswrapper[4754]: I0105 21:42:03.157371 4754 scope.go:117] "RemoveContainer" containerID="c705cee3a4f2947168d0f260e944abb1416058fb91b31578404a31dfecd65376" Jan 05 21:42:03 crc kubenswrapper[4754]: I0105 21:42:03.212424 4754 scope.go:117] "RemoveContainer" containerID="866c5ce58f47b60c27e9d8a0978fdc80046d8463ab6a3f5c86a712ff62d45fa0" Jan 05 21:42:03 crc kubenswrapper[4754]: E0105 21:42:03.212832 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"866c5ce58f47b60c27e9d8a0978fdc80046d8463ab6a3f5c86a712ff62d45fa0\": container with ID starting with 866c5ce58f47b60c27e9d8a0978fdc80046d8463ab6a3f5c86a712ff62d45fa0 not found: ID does not exist" containerID="866c5ce58f47b60c27e9d8a0978fdc80046d8463ab6a3f5c86a712ff62d45fa0" Jan 05 21:42:03 crc kubenswrapper[4754]: I0105 
21:42:03.212880 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"866c5ce58f47b60c27e9d8a0978fdc80046d8463ab6a3f5c86a712ff62d45fa0"} err="failed to get container status \"866c5ce58f47b60c27e9d8a0978fdc80046d8463ab6a3f5c86a712ff62d45fa0\": rpc error: code = NotFound desc = could not find container \"866c5ce58f47b60c27e9d8a0978fdc80046d8463ab6a3f5c86a712ff62d45fa0\": container with ID starting with 866c5ce58f47b60c27e9d8a0978fdc80046d8463ab6a3f5c86a712ff62d45fa0 not found: ID does not exist" Jan 05 21:42:03 crc kubenswrapper[4754]: I0105 21:42:03.212911 4754 scope.go:117] "RemoveContainer" containerID="31a4a93f9df4485b0f0e9764b3927e19cc60635bae821656c90dc58b9a4709dd" Jan 05 21:42:03 crc kubenswrapper[4754]: E0105 21:42:03.213213 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31a4a93f9df4485b0f0e9764b3927e19cc60635bae821656c90dc58b9a4709dd\": container with ID starting with 31a4a93f9df4485b0f0e9764b3927e19cc60635bae821656c90dc58b9a4709dd not found: ID does not exist" containerID="31a4a93f9df4485b0f0e9764b3927e19cc60635bae821656c90dc58b9a4709dd" Jan 05 21:42:03 crc kubenswrapper[4754]: I0105 21:42:03.213245 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31a4a93f9df4485b0f0e9764b3927e19cc60635bae821656c90dc58b9a4709dd"} err="failed to get container status \"31a4a93f9df4485b0f0e9764b3927e19cc60635bae821656c90dc58b9a4709dd\": rpc error: code = NotFound desc = could not find container \"31a4a93f9df4485b0f0e9764b3927e19cc60635bae821656c90dc58b9a4709dd\": container with ID starting with 31a4a93f9df4485b0f0e9764b3927e19cc60635bae821656c90dc58b9a4709dd not found: ID does not exist" Jan 05 21:42:03 crc kubenswrapper[4754]: I0105 21:42:03.213266 4754 scope.go:117] "RemoveContainer" containerID="c705cee3a4f2947168d0f260e944abb1416058fb91b31578404a31dfecd65376" Jan 05 21:42:03 crc 
kubenswrapper[4754]: E0105 21:42:03.213595 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c705cee3a4f2947168d0f260e944abb1416058fb91b31578404a31dfecd65376\": container with ID starting with c705cee3a4f2947168d0f260e944abb1416058fb91b31578404a31dfecd65376 not found: ID does not exist" containerID="c705cee3a4f2947168d0f260e944abb1416058fb91b31578404a31dfecd65376" Jan 05 21:42:03 crc kubenswrapper[4754]: I0105 21:42:03.213615 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c705cee3a4f2947168d0f260e944abb1416058fb91b31578404a31dfecd65376"} err="failed to get container status \"c705cee3a4f2947168d0f260e944abb1416058fb91b31578404a31dfecd65376\": rpc error: code = NotFound desc = could not find container \"c705cee3a4f2947168d0f260e944abb1416058fb91b31578404a31dfecd65376\": container with ID starting with c705cee3a4f2947168d0f260e944abb1416058fb91b31578404a31dfecd65376 not found: ID does not exist" Jan 05 21:42:03 crc kubenswrapper[4754]: I0105 21:42:03.601684 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abe83f88-aca5-4827-add5-5e9732c508cc" path="/var/lib/kubelet/pods/abe83f88-aca5-4827-add5-5e9732c508cc/volumes" Jan 05 21:42:13 crc kubenswrapper[4754]: I0105 21:42:13.597600 4754 scope.go:117] "RemoveContainer" containerID="c1852f8ccbc2c749039fa17988ca08439ce35c6cee4df91b94564cd0b2182e4c" Jan 05 21:42:13 crc kubenswrapper[4754]: E0105 21:42:13.598451 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:42:24 crc 
kubenswrapper[4754]: I0105 21:42:24.589908 4754 scope.go:117] "RemoveContainer" containerID="c1852f8ccbc2c749039fa17988ca08439ce35c6cee4df91b94564cd0b2182e4c" Jan 05 21:42:24 crc kubenswrapper[4754]: E0105 21:42:24.592126 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:42:37 crc kubenswrapper[4754]: I0105 21:42:37.588393 4754 scope.go:117] "RemoveContainer" containerID="c1852f8ccbc2c749039fa17988ca08439ce35c6cee4df91b94564cd0b2182e4c" Jan 05 21:42:37 crc kubenswrapper[4754]: E0105 21:42:37.589390 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:42:52 crc kubenswrapper[4754]: I0105 21:42:52.588609 4754 scope.go:117] "RemoveContainer" containerID="c1852f8ccbc2c749039fa17988ca08439ce35c6cee4df91b94564cd0b2182e4c" Jan 05 21:42:52 crc kubenswrapper[4754]: E0105 21:42:52.589396 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 
05 21:43:06 crc kubenswrapper[4754]: I0105 21:43:06.588438 4754 scope.go:117] "RemoveContainer" containerID="c1852f8ccbc2c749039fa17988ca08439ce35c6cee4df91b94564cd0b2182e4c" Jan 05 21:43:06 crc kubenswrapper[4754]: E0105 21:43:06.589190 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:43:21 crc kubenswrapper[4754]: I0105 21:43:21.589186 4754 scope.go:117] "RemoveContainer" containerID="c1852f8ccbc2c749039fa17988ca08439ce35c6cee4df91b94564cd0b2182e4c" Jan 05 21:43:21 crc kubenswrapper[4754]: E0105 21:43:21.590214 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:43:30 crc kubenswrapper[4754]: I0105 21:43:30.383054 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wllmm"] Jan 05 21:43:30 crc kubenswrapper[4754]: E0105 21:43:30.387100 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abe83f88-aca5-4827-add5-5e9732c508cc" containerName="extract-content" Jan 05 21:43:30 crc kubenswrapper[4754]: I0105 21:43:30.387132 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="abe83f88-aca5-4827-add5-5e9732c508cc" containerName="extract-content" Jan 05 21:43:30 crc kubenswrapper[4754]: E0105 21:43:30.387182 4754 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="abe83f88-aca5-4827-add5-5e9732c508cc" containerName="extract-utilities" Jan 05 21:43:30 crc kubenswrapper[4754]: I0105 21:43:30.387191 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="abe83f88-aca5-4827-add5-5e9732c508cc" containerName="extract-utilities" Jan 05 21:43:30 crc kubenswrapper[4754]: E0105 21:43:30.387365 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abe83f88-aca5-4827-add5-5e9732c508cc" containerName="registry-server" Jan 05 21:43:30 crc kubenswrapper[4754]: I0105 21:43:30.387384 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="abe83f88-aca5-4827-add5-5e9732c508cc" containerName="registry-server" Jan 05 21:43:30 crc kubenswrapper[4754]: I0105 21:43:30.387738 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="abe83f88-aca5-4827-add5-5e9732c508cc" containerName="registry-server" Jan 05 21:43:30 crc kubenswrapper[4754]: I0105 21:43:30.391011 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wllmm" Jan 05 21:43:30 crc kubenswrapper[4754]: I0105 21:43:30.412467 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wllmm"] Jan 05 21:43:30 crc kubenswrapper[4754]: I0105 21:43:30.553521 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ea00662-474d-4ca3-a0a0-669e36664962-catalog-content\") pod \"redhat-operators-wllmm\" (UID: \"4ea00662-474d-4ca3-a0a0-669e36664962\") " pod="openshift-marketplace/redhat-operators-wllmm" Jan 05 21:43:30 crc kubenswrapper[4754]: I0105 21:43:30.553579 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ea00662-474d-4ca3-a0a0-669e36664962-utilities\") pod \"redhat-operators-wllmm\" (UID: \"4ea00662-474d-4ca3-a0a0-669e36664962\") " pod="openshift-marketplace/redhat-operators-wllmm" Jan 05 21:43:30 crc kubenswrapper[4754]: I0105 21:43:30.553665 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98wz8\" (UniqueName: \"kubernetes.io/projected/4ea00662-474d-4ca3-a0a0-669e36664962-kube-api-access-98wz8\") pod \"redhat-operators-wllmm\" (UID: \"4ea00662-474d-4ca3-a0a0-669e36664962\") " pod="openshift-marketplace/redhat-operators-wllmm" Jan 05 21:43:30 crc kubenswrapper[4754]: I0105 21:43:30.655503 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98wz8\" (UniqueName: \"kubernetes.io/projected/4ea00662-474d-4ca3-a0a0-669e36664962-kube-api-access-98wz8\") pod \"redhat-operators-wllmm\" (UID: \"4ea00662-474d-4ca3-a0a0-669e36664962\") " pod="openshift-marketplace/redhat-operators-wllmm" Jan 05 21:43:30 crc kubenswrapper[4754]: I0105 21:43:30.655674 4754 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ea00662-474d-4ca3-a0a0-669e36664962-catalog-content\") pod \"redhat-operators-wllmm\" (UID: \"4ea00662-474d-4ca3-a0a0-669e36664962\") " pod="openshift-marketplace/redhat-operators-wllmm" Jan 05 21:43:30 crc kubenswrapper[4754]: I0105 21:43:30.655703 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ea00662-474d-4ca3-a0a0-669e36664962-utilities\") pod \"redhat-operators-wllmm\" (UID: \"4ea00662-474d-4ca3-a0a0-669e36664962\") " pod="openshift-marketplace/redhat-operators-wllmm" Jan 05 21:43:30 crc kubenswrapper[4754]: I0105 21:43:30.656489 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ea00662-474d-4ca3-a0a0-669e36664962-utilities\") pod \"redhat-operators-wllmm\" (UID: \"4ea00662-474d-4ca3-a0a0-669e36664962\") " pod="openshift-marketplace/redhat-operators-wllmm" Jan 05 21:43:30 crc kubenswrapper[4754]: I0105 21:43:30.657005 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ea00662-474d-4ca3-a0a0-669e36664962-catalog-content\") pod \"redhat-operators-wllmm\" (UID: \"4ea00662-474d-4ca3-a0a0-669e36664962\") " pod="openshift-marketplace/redhat-operators-wllmm" Jan 05 21:43:30 crc kubenswrapper[4754]: I0105 21:43:30.682728 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98wz8\" (UniqueName: \"kubernetes.io/projected/4ea00662-474d-4ca3-a0a0-669e36664962-kube-api-access-98wz8\") pod \"redhat-operators-wllmm\" (UID: \"4ea00662-474d-4ca3-a0a0-669e36664962\") " pod="openshift-marketplace/redhat-operators-wllmm" Jan 05 21:43:30 crc kubenswrapper[4754]: I0105 21:43:30.711473 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wllmm" Jan 05 21:43:31 crc kubenswrapper[4754]: W0105 21:43:31.214625 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ea00662_474d_4ca3_a0a0_669e36664962.slice/crio-bde0557de4bdca78ccf1c99d09be00ad1a86c555f9f1627424aa47d5d6c43c04 WatchSource:0}: Error finding container bde0557de4bdca78ccf1c99d09be00ad1a86c555f9f1627424aa47d5d6c43c04: Status 404 returned error can't find the container with id bde0557de4bdca78ccf1c99d09be00ad1a86c555f9f1627424aa47d5d6c43c04 Jan 05 21:43:31 crc kubenswrapper[4754]: I0105 21:43:31.228023 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wllmm"] Jan 05 21:43:32 crc kubenswrapper[4754]: I0105 21:43:32.140698 4754 generic.go:334] "Generic (PLEG): container finished" podID="4ea00662-474d-4ca3-a0a0-669e36664962" containerID="663ef3ad695a5d61ec534dedb96dce74c3c03dc9e8e092247d966cc40457f7db" exitCode=0 Jan 05 21:43:32 crc kubenswrapper[4754]: I0105 21:43:32.140755 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wllmm" event={"ID":"4ea00662-474d-4ca3-a0a0-669e36664962","Type":"ContainerDied","Data":"663ef3ad695a5d61ec534dedb96dce74c3c03dc9e8e092247d966cc40457f7db"} Jan 05 21:43:32 crc kubenswrapper[4754]: I0105 21:43:32.141027 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wllmm" event={"ID":"4ea00662-474d-4ca3-a0a0-669e36664962","Type":"ContainerStarted","Data":"bde0557de4bdca78ccf1c99d09be00ad1a86c555f9f1627424aa47d5d6c43c04"} Jan 05 21:43:32 crc kubenswrapper[4754]: I0105 21:43:32.143086 4754 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 05 21:43:34 crc kubenswrapper[4754]: I0105 21:43:34.174937 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-wllmm" event={"ID":"4ea00662-474d-4ca3-a0a0-669e36664962","Type":"ContainerStarted","Data":"2d8877b2b6d85b5b30dd89329135a5917bd0aa169f1422418700ee4873e4f137"} Jan 05 21:43:36 crc kubenswrapper[4754]: I0105 21:43:36.589113 4754 scope.go:117] "RemoveContainer" containerID="c1852f8ccbc2c749039fa17988ca08439ce35c6cee4df91b94564cd0b2182e4c" Jan 05 21:43:36 crc kubenswrapper[4754]: E0105 21:43:36.589959 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:43:37 crc kubenswrapper[4754]: I0105 21:43:37.209976 4754 generic.go:334] "Generic (PLEG): container finished" podID="4ea00662-474d-4ca3-a0a0-669e36664962" containerID="2d8877b2b6d85b5b30dd89329135a5917bd0aa169f1422418700ee4873e4f137" exitCode=0 Jan 05 21:43:37 crc kubenswrapper[4754]: I0105 21:43:37.210112 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wllmm" event={"ID":"4ea00662-474d-4ca3-a0a0-669e36664962","Type":"ContainerDied","Data":"2d8877b2b6d85b5b30dd89329135a5917bd0aa169f1422418700ee4873e4f137"} Jan 05 21:43:39 crc kubenswrapper[4754]: I0105 21:43:39.233440 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wllmm" event={"ID":"4ea00662-474d-4ca3-a0a0-669e36664962","Type":"ContainerStarted","Data":"566dcd3e3fb6dd86bdcd5dc205377bff45fdbbfcfc5ef56472eeb0faa09c337d"} Jan 05 21:43:39 crc kubenswrapper[4754]: I0105 21:43:39.260204 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wllmm" podStartSLOduration=3.420294844 
podStartE2EDuration="9.260184517s" podCreationTimestamp="2026-01-05 21:43:30 +0000 UTC" firstStartedPulling="2026-01-05 21:43:32.14285023 +0000 UTC m=+5898.852034104" lastFinishedPulling="2026-01-05 21:43:37.982739883 +0000 UTC m=+5904.691923777" observedRunningTime="2026-01-05 21:43:39.251859008 +0000 UTC m=+5905.961042892" watchObservedRunningTime="2026-01-05 21:43:39.260184517 +0000 UTC m=+5905.969368391" Jan 05 21:43:40 crc kubenswrapper[4754]: I0105 21:43:40.712519 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wllmm" Jan 05 21:43:40 crc kubenswrapper[4754]: I0105 21:43:40.712828 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wllmm" Jan 05 21:43:41 crc kubenswrapper[4754]: I0105 21:43:41.758853 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wllmm" podUID="4ea00662-474d-4ca3-a0a0-669e36664962" containerName="registry-server" probeResult="failure" output=< Jan 05 21:43:41 crc kubenswrapper[4754]: timeout: failed to connect service ":50051" within 1s Jan 05 21:43:41 crc kubenswrapper[4754]: > Jan 05 21:43:49 crc kubenswrapper[4754]: I0105 21:43:49.589070 4754 scope.go:117] "RemoveContainer" containerID="c1852f8ccbc2c749039fa17988ca08439ce35c6cee4df91b94564cd0b2182e4c" Jan 05 21:43:49 crc kubenswrapper[4754]: E0105 21:43:49.590811 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:43:50 crc kubenswrapper[4754]: I0105 21:43:50.763033 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/redhat-operators-wllmm" Jan 05 21:43:50 crc kubenswrapper[4754]: I0105 21:43:50.835054 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wllmm" Jan 05 21:43:51 crc kubenswrapper[4754]: I0105 21:43:51.009496 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wllmm"] Jan 05 21:43:52 crc kubenswrapper[4754]: I0105 21:43:52.383923 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wllmm" podUID="4ea00662-474d-4ca3-a0a0-669e36664962" containerName="registry-server" containerID="cri-o://566dcd3e3fb6dd86bdcd5dc205377bff45fdbbfcfc5ef56472eeb0faa09c337d" gracePeriod=2 Jan 05 21:43:52 crc kubenswrapper[4754]: I0105 21:43:52.973580 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wllmm" Jan 05 21:43:53 crc kubenswrapper[4754]: I0105 21:43:53.046666 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ea00662-474d-4ca3-a0a0-669e36664962-catalog-content\") pod \"4ea00662-474d-4ca3-a0a0-669e36664962\" (UID: \"4ea00662-474d-4ca3-a0a0-669e36664962\") " Jan 05 21:43:53 crc kubenswrapper[4754]: I0105 21:43:53.047244 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98wz8\" (UniqueName: \"kubernetes.io/projected/4ea00662-474d-4ca3-a0a0-669e36664962-kube-api-access-98wz8\") pod \"4ea00662-474d-4ca3-a0a0-669e36664962\" (UID: \"4ea00662-474d-4ca3-a0a0-669e36664962\") " Jan 05 21:43:53 crc kubenswrapper[4754]: I0105 21:43:53.047338 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ea00662-474d-4ca3-a0a0-669e36664962-utilities\") pod 
\"4ea00662-474d-4ca3-a0a0-669e36664962\" (UID: \"4ea00662-474d-4ca3-a0a0-669e36664962\") " Jan 05 21:43:53 crc kubenswrapper[4754]: I0105 21:43:53.047948 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ea00662-474d-4ca3-a0a0-669e36664962-utilities" (OuterVolumeSpecName: "utilities") pod "4ea00662-474d-4ca3-a0a0-669e36664962" (UID: "4ea00662-474d-4ca3-a0a0-669e36664962"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:43:53 crc kubenswrapper[4754]: I0105 21:43:53.048097 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ea00662-474d-4ca3-a0a0-669e36664962-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 21:43:53 crc kubenswrapper[4754]: I0105 21:43:53.053636 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ea00662-474d-4ca3-a0a0-669e36664962-kube-api-access-98wz8" (OuterVolumeSpecName: "kube-api-access-98wz8") pod "4ea00662-474d-4ca3-a0a0-669e36664962" (UID: "4ea00662-474d-4ca3-a0a0-669e36664962"). InnerVolumeSpecName "kube-api-access-98wz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:43:53 crc kubenswrapper[4754]: I0105 21:43:53.150519 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98wz8\" (UniqueName: \"kubernetes.io/projected/4ea00662-474d-4ca3-a0a0-669e36664962-kube-api-access-98wz8\") on node \"crc\" DevicePath \"\"" Jan 05 21:43:53 crc kubenswrapper[4754]: I0105 21:43:53.166106 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ea00662-474d-4ca3-a0a0-669e36664962-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ea00662-474d-4ca3-a0a0-669e36664962" (UID: "4ea00662-474d-4ca3-a0a0-669e36664962"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:43:53 crc kubenswrapper[4754]: I0105 21:43:53.251657 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ea00662-474d-4ca3-a0a0-669e36664962-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 21:43:53 crc kubenswrapper[4754]: I0105 21:43:53.402263 4754 generic.go:334] "Generic (PLEG): container finished" podID="4ea00662-474d-4ca3-a0a0-669e36664962" containerID="566dcd3e3fb6dd86bdcd5dc205377bff45fdbbfcfc5ef56472eeb0faa09c337d" exitCode=0 Jan 05 21:43:53 crc kubenswrapper[4754]: I0105 21:43:53.402328 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wllmm" event={"ID":"4ea00662-474d-4ca3-a0a0-669e36664962","Type":"ContainerDied","Data":"566dcd3e3fb6dd86bdcd5dc205377bff45fdbbfcfc5ef56472eeb0faa09c337d"} Jan 05 21:43:53 crc kubenswrapper[4754]: I0105 21:43:53.402357 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wllmm" event={"ID":"4ea00662-474d-4ca3-a0a0-669e36664962","Type":"ContainerDied","Data":"bde0557de4bdca78ccf1c99d09be00ad1a86c555f9f1627424aa47d5d6c43c04"} Jan 05 21:43:53 crc kubenswrapper[4754]: I0105 21:43:53.402375 4754 scope.go:117] "RemoveContainer" containerID="566dcd3e3fb6dd86bdcd5dc205377bff45fdbbfcfc5ef56472eeb0faa09c337d" Jan 05 21:43:53 crc kubenswrapper[4754]: I0105 21:43:53.402411 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wllmm" Jan 05 21:43:53 crc kubenswrapper[4754]: I0105 21:43:53.434193 4754 scope.go:117] "RemoveContainer" containerID="2d8877b2b6d85b5b30dd89329135a5917bd0aa169f1422418700ee4873e4f137" Jan 05 21:43:53 crc kubenswrapper[4754]: I0105 21:43:53.452656 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wllmm"] Jan 05 21:43:53 crc kubenswrapper[4754]: I0105 21:43:53.471160 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wllmm"] Jan 05 21:43:53 crc kubenswrapper[4754]: I0105 21:43:53.477557 4754 scope.go:117] "RemoveContainer" containerID="663ef3ad695a5d61ec534dedb96dce74c3c03dc9e8e092247d966cc40457f7db" Jan 05 21:43:53 crc kubenswrapper[4754]: I0105 21:43:53.541643 4754 scope.go:117] "RemoveContainer" containerID="566dcd3e3fb6dd86bdcd5dc205377bff45fdbbfcfc5ef56472eeb0faa09c337d" Jan 05 21:43:53 crc kubenswrapper[4754]: E0105 21:43:53.542351 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"566dcd3e3fb6dd86bdcd5dc205377bff45fdbbfcfc5ef56472eeb0faa09c337d\": container with ID starting with 566dcd3e3fb6dd86bdcd5dc205377bff45fdbbfcfc5ef56472eeb0faa09c337d not found: ID does not exist" containerID="566dcd3e3fb6dd86bdcd5dc205377bff45fdbbfcfc5ef56472eeb0faa09c337d" Jan 05 21:43:53 crc kubenswrapper[4754]: I0105 21:43:53.542400 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"566dcd3e3fb6dd86bdcd5dc205377bff45fdbbfcfc5ef56472eeb0faa09c337d"} err="failed to get container status \"566dcd3e3fb6dd86bdcd5dc205377bff45fdbbfcfc5ef56472eeb0faa09c337d\": rpc error: code = NotFound desc = could not find container \"566dcd3e3fb6dd86bdcd5dc205377bff45fdbbfcfc5ef56472eeb0faa09c337d\": container with ID starting with 566dcd3e3fb6dd86bdcd5dc205377bff45fdbbfcfc5ef56472eeb0faa09c337d not found: ID does 
not exist" Jan 05 21:43:53 crc kubenswrapper[4754]: I0105 21:43:53.542431 4754 scope.go:117] "RemoveContainer" containerID="2d8877b2b6d85b5b30dd89329135a5917bd0aa169f1422418700ee4873e4f137" Jan 05 21:43:53 crc kubenswrapper[4754]: E0105 21:43:53.542867 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d8877b2b6d85b5b30dd89329135a5917bd0aa169f1422418700ee4873e4f137\": container with ID starting with 2d8877b2b6d85b5b30dd89329135a5917bd0aa169f1422418700ee4873e4f137 not found: ID does not exist" containerID="2d8877b2b6d85b5b30dd89329135a5917bd0aa169f1422418700ee4873e4f137" Jan 05 21:43:53 crc kubenswrapper[4754]: I0105 21:43:53.542902 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d8877b2b6d85b5b30dd89329135a5917bd0aa169f1422418700ee4873e4f137"} err="failed to get container status \"2d8877b2b6d85b5b30dd89329135a5917bd0aa169f1422418700ee4873e4f137\": rpc error: code = NotFound desc = could not find container \"2d8877b2b6d85b5b30dd89329135a5917bd0aa169f1422418700ee4873e4f137\": container with ID starting with 2d8877b2b6d85b5b30dd89329135a5917bd0aa169f1422418700ee4873e4f137 not found: ID does not exist" Jan 05 21:43:53 crc kubenswrapper[4754]: I0105 21:43:53.542921 4754 scope.go:117] "RemoveContainer" containerID="663ef3ad695a5d61ec534dedb96dce74c3c03dc9e8e092247d966cc40457f7db" Jan 05 21:43:53 crc kubenswrapper[4754]: E0105 21:43:53.543330 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"663ef3ad695a5d61ec534dedb96dce74c3c03dc9e8e092247d966cc40457f7db\": container with ID starting with 663ef3ad695a5d61ec534dedb96dce74c3c03dc9e8e092247d966cc40457f7db not found: ID does not exist" containerID="663ef3ad695a5d61ec534dedb96dce74c3c03dc9e8e092247d966cc40457f7db" Jan 05 21:43:53 crc kubenswrapper[4754]: I0105 21:43:53.543471 4754 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"663ef3ad695a5d61ec534dedb96dce74c3c03dc9e8e092247d966cc40457f7db"} err="failed to get container status \"663ef3ad695a5d61ec534dedb96dce74c3c03dc9e8e092247d966cc40457f7db\": rpc error: code = NotFound desc = could not find container \"663ef3ad695a5d61ec534dedb96dce74c3c03dc9e8e092247d966cc40457f7db\": container with ID starting with 663ef3ad695a5d61ec534dedb96dce74c3c03dc9e8e092247d966cc40457f7db not found: ID does not exist" Jan 05 21:43:53 crc kubenswrapper[4754]: I0105 21:43:53.606318 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ea00662-474d-4ca3-a0a0-669e36664962" path="/var/lib/kubelet/pods/4ea00662-474d-4ca3-a0a0-669e36664962/volumes" Jan 05 21:44:04 crc kubenswrapper[4754]: I0105 21:44:04.589805 4754 scope.go:117] "RemoveContainer" containerID="c1852f8ccbc2c749039fa17988ca08439ce35c6cee4df91b94564cd0b2182e4c" Jan 05 21:44:04 crc kubenswrapper[4754]: E0105 21:44:04.592091 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:44:16 crc kubenswrapper[4754]: I0105 21:44:16.589693 4754 scope.go:117] "RemoveContainer" containerID="c1852f8ccbc2c749039fa17988ca08439ce35c6cee4df91b94564cd0b2182e4c" Jan 05 21:44:16 crc kubenswrapper[4754]: E0105 21:44:16.590655 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:44:28 crc kubenswrapper[4754]: I0105 21:44:28.588991 4754 scope.go:117] "RemoveContainer" containerID="c1852f8ccbc2c749039fa17988ca08439ce35c6cee4df91b94564cd0b2182e4c" Jan 05 21:44:29 crc kubenswrapper[4754]: I0105 21:44:29.884961 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" event={"ID":"1ce145f2-f010-4086-963c-23e68ff9e280","Type":"ContainerStarted","Data":"1e96f75a75ab896798bd0ce34ed78c0e5c67565c086a5db1ff1ef9be65ab6d44"} Jan 05 21:45:00 crc kubenswrapper[4754]: I0105 21:45:00.238316 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460825-dlkt9"] Jan 05 21:45:00 crc kubenswrapper[4754]: E0105 21:45:00.239619 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ea00662-474d-4ca3-a0a0-669e36664962" containerName="extract-content" Jan 05 21:45:00 crc kubenswrapper[4754]: I0105 21:45:00.239635 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ea00662-474d-4ca3-a0a0-669e36664962" containerName="extract-content" Jan 05 21:45:00 crc kubenswrapper[4754]: E0105 21:45:00.239678 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ea00662-474d-4ca3-a0a0-669e36664962" containerName="extract-utilities" Jan 05 21:45:00 crc kubenswrapper[4754]: I0105 21:45:00.239686 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ea00662-474d-4ca3-a0a0-669e36664962" containerName="extract-utilities" Jan 05 21:45:00 crc kubenswrapper[4754]: E0105 21:45:00.239699 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ea00662-474d-4ca3-a0a0-669e36664962" containerName="registry-server" Jan 05 21:45:00 crc kubenswrapper[4754]: I0105 21:45:00.239705 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ea00662-474d-4ca3-a0a0-669e36664962" 
containerName="registry-server" Jan 05 21:45:00 crc kubenswrapper[4754]: I0105 21:45:00.239939 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ea00662-474d-4ca3-a0a0-669e36664962" containerName="registry-server" Jan 05 21:45:00 crc kubenswrapper[4754]: I0105 21:45:00.240978 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460825-dlkt9" Jan 05 21:45:00 crc kubenswrapper[4754]: I0105 21:45:00.246309 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 05 21:45:00 crc kubenswrapper[4754]: I0105 21:45:00.246307 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 05 21:45:00 crc kubenswrapper[4754]: I0105 21:45:00.266030 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460825-dlkt9"] Jan 05 21:45:00 crc kubenswrapper[4754]: I0105 21:45:00.383674 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a6a591e-6548-4222-8238-2e40d37167bd-secret-volume\") pod \"collect-profiles-29460825-dlkt9\" (UID: \"0a6a591e-6548-4222-8238-2e40d37167bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460825-dlkt9" Jan 05 21:45:00 crc kubenswrapper[4754]: I0105 21:45:00.383828 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69psc\" (UniqueName: \"kubernetes.io/projected/0a6a591e-6548-4222-8238-2e40d37167bd-kube-api-access-69psc\") pod \"collect-profiles-29460825-dlkt9\" (UID: \"0a6a591e-6548-4222-8238-2e40d37167bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460825-dlkt9" Jan 05 21:45:00 crc kubenswrapper[4754]: I0105 21:45:00.384070 
4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a6a591e-6548-4222-8238-2e40d37167bd-config-volume\") pod \"collect-profiles-29460825-dlkt9\" (UID: \"0a6a591e-6548-4222-8238-2e40d37167bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460825-dlkt9" Jan 05 21:45:00 crc kubenswrapper[4754]: I0105 21:45:00.487128 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a6a591e-6548-4222-8238-2e40d37167bd-secret-volume\") pod \"collect-profiles-29460825-dlkt9\" (UID: \"0a6a591e-6548-4222-8238-2e40d37167bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460825-dlkt9" Jan 05 21:45:00 crc kubenswrapper[4754]: I0105 21:45:00.487212 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69psc\" (UniqueName: \"kubernetes.io/projected/0a6a591e-6548-4222-8238-2e40d37167bd-kube-api-access-69psc\") pod \"collect-profiles-29460825-dlkt9\" (UID: \"0a6a591e-6548-4222-8238-2e40d37167bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460825-dlkt9" Jan 05 21:45:00 crc kubenswrapper[4754]: I0105 21:45:00.487283 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a6a591e-6548-4222-8238-2e40d37167bd-config-volume\") pod \"collect-profiles-29460825-dlkt9\" (UID: \"0a6a591e-6548-4222-8238-2e40d37167bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460825-dlkt9" Jan 05 21:45:00 crc kubenswrapper[4754]: I0105 21:45:00.488236 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a6a591e-6548-4222-8238-2e40d37167bd-config-volume\") pod \"collect-profiles-29460825-dlkt9\" (UID: \"0a6a591e-6548-4222-8238-2e40d37167bd\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29460825-dlkt9" Jan 05 21:45:00 crc kubenswrapper[4754]: I0105 21:45:00.494446 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a6a591e-6548-4222-8238-2e40d37167bd-secret-volume\") pod \"collect-profiles-29460825-dlkt9\" (UID: \"0a6a591e-6548-4222-8238-2e40d37167bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460825-dlkt9" Jan 05 21:45:00 crc kubenswrapper[4754]: I0105 21:45:00.504933 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69psc\" (UniqueName: \"kubernetes.io/projected/0a6a591e-6548-4222-8238-2e40d37167bd-kube-api-access-69psc\") pod \"collect-profiles-29460825-dlkt9\" (UID: \"0a6a591e-6548-4222-8238-2e40d37167bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460825-dlkt9" Jan 05 21:45:00 crc kubenswrapper[4754]: I0105 21:45:00.582771 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460825-dlkt9" Jan 05 21:45:01 crc kubenswrapper[4754]: I0105 21:45:01.076311 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460825-dlkt9"] Jan 05 21:45:01 crc kubenswrapper[4754]: I0105 21:45:01.279725 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460825-dlkt9" event={"ID":"0a6a591e-6548-4222-8238-2e40d37167bd","Type":"ContainerStarted","Data":"3fb291bc7e6a1cf6276fa952327a663d18ccc007a7610e0a8f23378f51a77dc0"} Jan 05 21:45:01 crc kubenswrapper[4754]: I0105 21:45:01.280052 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460825-dlkt9" event={"ID":"0a6a591e-6548-4222-8238-2e40d37167bd","Type":"ContainerStarted","Data":"2ccc435dad87bc909e6361132aed6518dc53550139823235cff656d01202f60b"} Jan 05 21:45:01 crc kubenswrapper[4754]: I0105 21:45:01.294977 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29460825-dlkt9" podStartSLOduration=1.294954218 podStartE2EDuration="1.294954218s" podCreationTimestamp="2026-01-05 21:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:45:01.292972686 +0000 UTC m=+5988.002156600" watchObservedRunningTime="2026-01-05 21:45:01.294954218 +0000 UTC m=+5988.004138092" Jan 05 21:45:02 crc kubenswrapper[4754]: I0105 21:45:02.291612 4754 generic.go:334] "Generic (PLEG): container finished" podID="0a6a591e-6548-4222-8238-2e40d37167bd" containerID="3fb291bc7e6a1cf6276fa952327a663d18ccc007a7610e0a8f23378f51a77dc0" exitCode=0 Jan 05 21:45:02 crc kubenswrapper[4754]: I0105 21:45:02.291673 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29460825-dlkt9" event={"ID":"0a6a591e-6548-4222-8238-2e40d37167bd","Type":"ContainerDied","Data":"3fb291bc7e6a1cf6276fa952327a663d18ccc007a7610e0a8f23378f51a77dc0"} Jan 05 21:45:03 crc kubenswrapper[4754]: I0105 21:45:03.710577 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460825-dlkt9" Jan 05 21:45:03 crc kubenswrapper[4754]: I0105 21:45:03.768401 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69psc\" (UniqueName: \"kubernetes.io/projected/0a6a591e-6548-4222-8238-2e40d37167bd-kube-api-access-69psc\") pod \"0a6a591e-6548-4222-8238-2e40d37167bd\" (UID: \"0a6a591e-6548-4222-8238-2e40d37167bd\") " Jan 05 21:45:03 crc kubenswrapper[4754]: I0105 21:45:03.768666 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a6a591e-6548-4222-8238-2e40d37167bd-secret-volume\") pod \"0a6a591e-6548-4222-8238-2e40d37167bd\" (UID: \"0a6a591e-6548-4222-8238-2e40d37167bd\") " Jan 05 21:45:03 crc kubenswrapper[4754]: I0105 21:45:03.768768 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a6a591e-6548-4222-8238-2e40d37167bd-config-volume\") pod \"0a6a591e-6548-4222-8238-2e40d37167bd\" (UID: \"0a6a591e-6548-4222-8238-2e40d37167bd\") " Jan 05 21:45:03 crc kubenswrapper[4754]: I0105 21:45:03.770404 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a6a591e-6548-4222-8238-2e40d37167bd-config-volume" (OuterVolumeSpecName: "config-volume") pod "0a6a591e-6548-4222-8238-2e40d37167bd" (UID: "0a6a591e-6548-4222-8238-2e40d37167bd"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:45:03 crc kubenswrapper[4754]: I0105 21:45:03.777042 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a6a591e-6548-4222-8238-2e40d37167bd-kube-api-access-69psc" (OuterVolumeSpecName: "kube-api-access-69psc") pod "0a6a591e-6548-4222-8238-2e40d37167bd" (UID: "0a6a591e-6548-4222-8238-2e40d37167bd"). InnerVolumeSpecName "kube-api-access-69psc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:45:03 crc kubenswrapper[4754]: I0105 21:45:03.778644 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a6a591e-6548-4222-8238-2e40d37167bd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0a6a591e-6548-4222-8238-2e40d37167bd" (UID: "0a6a591e-6548-4222-8238-2e40d37167bd"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:45:03 crc kubenswrapper[4754]: I0105 21:45:03.871805 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69psc\" (UniqueName: \"kubernetes.io/projected/0a6a591e-6548-4222-8238-2e40d37167bd-kube-api-access-69psc\") on node \"crc\" DevicePath \"\"" Jan 05 21:45:03 crc kubenswrapper[4754]: I0105 21:45:03.871843 4754 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a6a591e-6548-4222-8238-2e40d37167bd-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 05 21:45:03 crc kubenswrapper[4754]: I0105 21:45:03.871857 4754 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a6a591e-6548-4222-8238-2e40d37167bd-config-volume\") on node \"crc\" DevicePath \"\"" Jan 05 21:45:04 crc kubenswrapper[4754]: I0105 21:45:04.320373 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460825-dlkt9" 
event={"ID":"0a6a591e-6548-4222-8238-2e40d37167bd","Type":"ContainerDied","Data":"2ccc435dad87bc909e6361132aed6518dc53550139823235cff656d01202f60b"} Jan 05 21:45:04 crc kubenswrapper[4754]: I0105 21:45:04.320691 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ccc435dad87bc909e6361132aed6518dc53550139823235cff656d01202f60b" Jan 05 21:45:04 crc kubenswrapper[4754]: I0105 21:45:04.320776 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460825-dlkt9" Jan 05 21:45:04 crc kubenswrapper[4754]: I0105 21:45:04.380108 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460780-b68fm"] Jan 05 21:45:04 crc kubenswrapper[4754]: I0105 21:45:04.392024 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460780-b68fm"] Jan 05 21:45:05 crc kubenswrapper[4754]: I0105 21:45:05.603212 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce3a9164-7ae2-4f53-82a3-d2792d25b5d0" path="/var/lib/kubelet/pods/ce3a9164-7ae2-4f53-82a3-d2792d25b5d0/volumes" Jan 05 21:45:13 crc kubenswrapper[4754]: I0105 21:45:13.294629 4754 scope.go:117] "RemoveContainer" containerID="5d656e64bc8d9068117d865479039cbe4a70e78b2122398fe4dc98b5d2c0d93b" Jan 05 21:45:13 crc kubenswrapper[4754]: I0105 21:45:13.329761 4754 scope.go:117] "RemoveContainer" containerID="fd32d4587404c013171231b67652f27a277937bdd8742a1101164b727d1c07ea" Jan 05 21:45:13 crc kubenswrapper[4754]: I0105 21:45:13.414748 4754 scope.go:117] "RemoveContainer" containerID="b842cfdc3644d6abd53788475149fd882c5e6a04683b3e695177aa2e12a3a678" Jan 05 21:45:13 crc kubenswrapper[4754]: I0105 21:45:13.477341 4754 scope.go:117] "RemoveContainer" containerID="30e8ae6e88d2ecb0cc2220bf9d1a2e5e5a20cd02442164e64b3a698aee923863" Jan 05 21:46:48 crc kubenswrapper[4754]: I0105 
21:46:48.110042 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:46:48 crc kubenswrapper[4754]: I0105 21:46:48.111009 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:47:18 crc kubenswrapper[4754]: I0105 21:47:18.109288 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:47:18 crc kubenswrapper[4754]: I0105 21:47:18.110118 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:47:48 crc kubenswrapper[4754]: I0105 21:47:48.108819 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:47:48 crc kubenswrapper[4754]: I0105 21:47:48.109347 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" 
podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:47:48 crc kubenswrapper[4754]: I0105 21:47:48.109400 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" Jan 05 21:47:48 crc kubenswrapper[4754]: I0105 21:47:48.110303 4754 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1e96f75a75ab896798bd0ce34ed78c0e5c67565c086a5db1ff1ef9be65ab6d44"} pod="openshift-machine-config-operator/machine-config-daemon-pkzls" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 05 21:47:48 crc kubenswrapper[4754]: I0105 21:47:48.110361 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" containerID="cri-o://1e96f75a75ab896798bd0ce34ed78c0e5c67565c086a5db1ff1ef9be65ab6d44" gracePeriod=600 Jan 05 21:47:48 crc kubenswrapper[4754]: E0105 21:47:48.152214 4754 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ce145f2_f010_4086_963c_23e68ff9e280.slice/crio-1e96f75a75ab896798bd0ce34ed78c0e5c67565c086a5db1ff1ef9be65ab6d44.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ce145f2_f010_4086_963c_23e68ff9e280.slice/crio-conmon-1e96f75a75ab896798bd0ce34ed78c0e5c67565c086a5db1ff1ef9be65ab6d44.scope\": RecentStats: unable to find data in memory cache]" Jan 05 21:47:48 crc kubenswrapper[4754]: E0105 21:47:48.152303 4754 cadvisor_stats_provider.go:516] 
"Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ce145f2_f010_4086_963c_23e68ff9e280.slice/crio-conmon-1e96f75a75ab896798bd0ce34ed78c0e5c67565c086a5db1ff1ef9be65ab6d44.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ce145f2_f010_4086_963c_23e68ff9e280.slice/crio-1e96f75a75ab896798bd0ce34ed78c0e5c67565c086a5db1ff1ef9be65ab6d44.scope\": RecentStats: unable to find data in memory cache]" Jan 05 21:47:48 crc kubenswrapper[4754]: I0105 21:47:48.448107 4754 generic.go:334] "Generic (PLEG): container finished" podID="1ce145f2-f010-4086-963c-23e68ff9e280" containerID="1e96f75a75ab896798bd0ce34ed78c0e5c67565c086a5db1ff1ef9be65ab6d44" exitCode=0 Jan 05 21:47:48 crc kubenswrapper[4754]: I0105 21:47:48.448175 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" event={"ID":"1ce145f2-f010-4086-963c-23e68ff9e280","Type":"ContainerDied","Data":"1e96f75a75ab896798bd0ce34ed78c0e5c67565c086a5db1ff1ef9be65ab6d44"} Jan 05 21:47:48 crc kubenswrapper[4754]: I0105 21:47:48.448722 4754 scope.go:117] "RemoveContainer" containerID="c1852f8ccbc2c749039fa17988ca08439ce35c6cee4df91b94564cd0b2182e4c" Jan 05 21:47:49 crc kubenswrapper[4754]: I0105 21:47:49.466943 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" event={"ID":"1ce145f2-f010-4086-963c-23e68ff9e280","Type":"ContainerStarted","Data":"169d4061b649bc99aa659a822f718adfbc994f64468c16f7e58148dd700ac0f6"} Jan 05 21:47:57 crc kubenswrapper[4754]: I0105 21:47:57.968259 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8jf2j"] Jan 05 21:47:57 crc kubenswrapper[4754]: E0105 21:47:57.969869 4754 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0a6a591e-6548-4222-8238-2e40d37167bd" containerName="collect-profiles" Jan 05 21:47:57 crc kubenswrapper[4754]: I0105 21:47:57.969887 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a6a591e-6548-4222-8238-2e40d37167bd" containerName="collect-profiles" Jan 05 21:47:57 crc kubenswrapper[4754]: I0105 21:47:57.970143 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a6a591e-6548-4222-8238-2e40d37167bd" containerName="collect-profiles" Jan 05 21:47:57 crc kubenswrapper[4754]: I0105 21:47:57.971856 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jf2j" Jan 05 21:47:57 crc kubenswrapper[4754]: I0105 21:47:57.981582 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8jf2j"] Jan 05 21:47:58 crc kubenswrapper[4754]: I0105 21:47:58.099819 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/592a0876-abeb-4b35-a08f-ca5e3e3d8399-utilities\") pod \"community-operators-8jf2j\" (UID: \"592a0876-abeb-4b35-a08f-ca5e3e3d8399\") " pod="openshift-marketplace/community-operators-8jf2j" Jan 05 21:47:58 crc kubenswrapper[4754]: I0105 21:47:58.100249 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqhvh\" (UniqueName: \"kubernetes.io/projected/592a0876-abeb-4b35-a08f-ca5e3e3d8399-kube-api-access-vqhvh\") pod \"community-operators-8jf2j\" (UID: \"592a0876-abeb-4b35-a08f-ca5e3e3d8399\") " pod="openshift-marketplace/community-operators-8jf2j" Jan 05 21:47:58 crc kubenswrapper[4754]: I0105 21:47:58.100327 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/592a0876-abeb-4b35-a08f-ca5e3e3d8399-catalog-content\") pod \"community-operators-8jf2j\" (UID: 
\"592a0876-abeb-4b35-a08f-ca5e3e3d8399\") " pod="openshift-marketplace/community-operators-8jf2j" Jan 05 21:47:58 crc kubenswrapper[4754]: I0105 21:47:58.203035 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqhvh\" (UniqueName: \"kubernetes.io/projected/592a0876-abeb-4b35-a08f-ca5e3e3d8399-kube-api-access-vqhvh\") pod \"community-operators-8jf2j\" (UID: \"592a0876-abeb-4b35-a08f-ca5e3e3d8399\") " pod="openshift-marketplace/community-operators-8jf2j" Jan 05 21:47:58 crc kubenswrapper[4754]: I0105 21:47:58.203090 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/592a0876-abeb-4b35-a08f-ca5e3e3d8399-catalog-content\") pod \"community-operators-8jf2j\" (UID: \"592a0876-abeb-4b35-a08f-ca5e3e3d8399\") " pod="openshift-marketplace/community-operators-8jf2j" Jan 05 21:47:58 crc kubenswrapper[4754]: I0105 21:47:58.203696 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/592a0876-abeb-4b35-a08f-ca5e3e3d8399-catalog-content\") pod \"community-operators-8jf2j\" (UID: \"592a0876-abeb-4b35-a08f-ca5e3e3d8399\") " pod="openshift-marketplace/community-operators-8jf2j" Jan 05 21:47:58 crc kubenswrapper[4754]: I0105 21:47:58.203865 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/592a0876-abeb-4b35-a08f-ca5e3e3d8399-utilities\") pod \"community-operators-8jf2j\" (UID: \"592a0876-abeb-4b35-a08f-ca5e3e3d8399\") " pod="openshift-marketplace/community-operators-8jf2j" Jan 05 21:47:58 crc kubenswrapper[4754]: I0105 21:47:58.204217 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/592a0876-abeb-4b35-a08f-ca5e3e3d8399-utilities\") pod \"community-operators-8jf2j\" (UID: \"592a0876-abeb-4b35-a08f-ca5e3e3d8399\") 
" pod="openshift-marketplace/community-operators-8jf2j" Jan 05 21:47:58 crc kubenswrapper[4754]: I0105 21:47:58.224102 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqhvh\" (UniqueName: \"kubernetes.io/projected/592a0876-abeb-4b35-a08f-ca5e3e3d8399-kube-api-access-vqhvh\") pod \"community-operators-8jf2j\" (UID: \"592a0876-abeb-4b35-a08f-ca5e3e3d8399\") " pod="openshift-marketplace/community-operators-8jf2j" Jan 05 21:47:58 crc kubenswrapper[4754]: I0105 21:47:58.307510 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jf2j" Jan 05 21:47:58 crc kubenswrapper[4754]: I0105 21:47:58.861887 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8jf2j"] Jan 05 21:47:59 crc kubenswrapper[4754]: I0105 21:47:59.602157 4754 generic.go:334] "Generic (PLEG): container finished" podID="592a0876-abeb-4b35-a08f-ca5e3e3d8399" containerID="d23daea0c2651beb0a7ab7ec779e8fa3cd22e73442b35806aa1d798ed05157ad" exitCode=0 Jan 05 21:47:59 crc kubenswrapper[4754]: I0105 21:47:59.605714 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jf2j" event={"ID":"592a0876-abeb-4b35-a08f-ca5e3e3d8399","Type":"ContainerDied","Data":"d23daea0c2651beb0a7ab7ec779e8fa3cd22e73442b35806aa1d798ed05157ad"} Jan 05 21:47:59 crc kubenswrapper[4754]: I0105 21:47:59.605760 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jf2j" event={"ID":"592a0876-abeb-4b35-a08f-ca5e3e3d8399","Type":"ContainerStarted","Data":"e7dd4df770fa892cf6fd57dd1072fea1a440a22e383ee4d6584d3fa75c2f4d45"} Jan 05 21:48:00 crc kubenswrapper[4754]: I0105 21:48:00.617657 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jf2j" 
event={"ID":"592a0876-abeb-4b35-a08f-ca5e3e3d8399","Type":"ContainerStarted","Data":"32504a24d54805eabcdc108bec0eb5e6fba19c29fa0c2c3d20844fa452168fea"} Jan 05 21:48:01 crc kubenswrapper[4754]: I0105 21:48:01.628990 4754 generic.go:334] "Generic (PLEG): container finished" podID="592a0876-abeb-4b35-a08f-ca5e3e3d8399" containerID="32504a24d54805eabcdc108bec0eb5e6fba19c29fa0c2c3d20844fa452168fea" exitCode=0 Jan 05 21:48:01 crc kubenswrapper[4754]: I0105 21:48:01.629070 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jf2j" event={"ID":"592a0876-abeb-4b35-a08f-ca5e3e3d8399","Type":"ContainerDied","Data":"32504a24d54805eabcdc108bec0eb5e6fba19c29fa0c2c3d20844fa452168fea"} Jan 05 21:48:02 crc kubenswrapper[4754]: I0105 21:48:02.645060 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jf2j" event={"ID":"592a0876-abeb-4b35-a08f-ca5e3e3d8399","Type":"ContainerStarted","Data":"ba523f140ea2f0ec8795223a281aed6f327b009ffe938d93839e426d51d06ffa"} Jan 05 21:48:02 crc kubenswrapper[4754]: I0105 21:48:02.678187 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8jf2j" podStartSLOduration=3.032012783 podStartE2EDuration="5.678164024s" podCreationTimestamp="2026-01-05 21:47:57 +0000 UTC" firstStartedPulling="2026-01-05 21:47:59.608674767 +0000 UTC m=+6166.317858641" lastFinishedPulling="2026-01-05 21:48:02.254825998 +0000 UTC m=+6168.964009882" observedRunningTime="2026-01-05 21:48:02.670534033 +0000 UTC m=+6169.379717917" watchObservedRunningTime="2026-01-05 21:48:02.678164024 +0000 UTC m=+6169.387347898" Jan 05 21:48:08 crc kubenswrapper[4754]: I0105 21:48:08.308811 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8jf2j" Jan 05 21:48:08 crc kubenswrapper[4754]: I0105 21:48:08.309637 4754 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/community-operators-8jf2j" Jan 05 21:48:08 crc kubenswrapper[4754]: I0105 21:48:08.392845 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8jf2j" Jan 05 21:48:08 crc kubenswrapper[4754]: I0105 21:48:08.773412 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8jf2j" Jan 05 21:48:11 crc kubenswrapper[4754]: I0105 21:48:11.949674 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8jf2j"] Jan 05 21:48:11 crc kubenswrapper[4754]: I0105 21:48:11.950264 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8jf2j" podUID="592a0876-abeb-4b35-a08f-ca5e3e3d8399" containerName="registry-server" containerID="cri-o://ba523f140ea2f0ec8795223a281aed6f327b009ffe938d93839e426d51d06ffa" gracePeriod=2 Jan 05 21:48:12 crc kubenswrapper[4754]: I0105 21:48:12.573021 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jf2j" Jan 05 21:48:12 crc kubenswrapper[4754]: I0105 21:48:12.765271 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/592a0876-abeb-4b35-a08f-ca5e3e3d8399-utilities\") pod \"592a0876-abeb-4b35-a08f-ca5e3e3d8399\" (UID: \"592a0876-abeb-4b35-a08f-ca5e3e3d8399\") " Jan 05 21:48:12 crc kubenswrapper[4754]: I0105 21:48:12.765345 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqhvh\" (UniqueName: \"kubernetes.io/projected/592a0876-abeb-4b35-a08f-ca5e3e3d8399-kube-api-access-vqhvh\") pod \"592a0876-abeb-4b35-a08f-ca5e3e3d8399\" (UID: \"592a0876-abeb-4b35-a08f-ca5e3e3d8399\") " Jan 05 21:48:12 crc kubenswrapper[4754]: I0105 21:48:12.765723 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/592a0876-abeb-4b35-a08f-ca5e3e3d8399-catalog-content\") pod \"592a0876-abeb-4b35-a08f-ca5e3e3d8399\" (UID: \"592a0876-abeb-4b35-a08f-ca5e3e3d8399\") " Jan 05 21:48:12 crc kubenswrapper[4754]: I0105 21:48:12.766419 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/592a0876-abeb-4b35-a08f-ca5e3e3d8399-utilities" (OuterVolumeSpecName: "utilities") pod "592a0876-abeb-4b35-a08f-ca5e3e3d8399" (UID: "592a0876-abeb-4b35-a08f-ca5e3e3d8399"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:48:12 crc kubenswrapper[4754]: I0105 21:48:12.769403 4754 generic.go:334] "Generic (PLEG): container finished" podID="592a0876-abeb-4b35-a08f-ca5e3e3d8399" containerID="ba523f140ea2f0ec8795223a281aed6f327b009ffe938d93839e426d51d06ffa" exitCode=0 Jan 05 21:48:12 crc kubenswrapper[4754]: I0105 21:48:12.769448 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jf2j" event={"ID":"592a0876-abeb-4b35-a08f-ca5e3e3d8399","Type":"ContainerDied","Data":"ba523f140ea2f0ec8795223a281aed6f327b009ffe938d93839e426d51d06ffa"} Jan 05 21:48:12 crc kubenswrapper[4754]: I0105 21:48:12.769479 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jf2j" event={"ID":"592a0876-abeb-4b35-a08f-ca5e3e3d8399","Type":"ContainerDied","Data":"e7dd4df770fa892cf6fd57dd1072fea1a440a22e383ee4d6584d3fa75c2f4d45"} Jan 05 21:48:12 crc kubenswrapper[4754]: I0105 21:48:12.769499 4754 scope.go:117] "RemoveContainer" containerID="ba523f140ea2f0ec8795223a281aed6f327b009ffe938d93839e426d51d06ffa" Jan 05 21:48:12 crc kubenswrapper[4754]: I0105 21:48:12.769665 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jf2j" Jan 05 21:48:12 crc kubenswrapper[4754]: I0105 21:48:12.772133 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/592a0876-abeb-4b35-a08f-ca5e3e3d8399-kube-api-access-vqhvh" (OuterVolumeSpecName: "kube-api-access-vqhvh") pod "592a0876-abeb-4b35-a08f-ca5e3e3d8399" (UID: "592a0876-abeb-4b35-a08f-ca5e3e3d8399"). InnerVolumeSpecName "kube-api-access-vqhvh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:48:12 crc kubenswrapper[4754]: I0105 21:48:12.824470 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/592a0876-abeb-4b35-a08f-ca5e3e3d8399-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "592a0876-abeb-4b35-a08f-ca5e3e3d8399" (UID: "592a0876-abeb-4b35-a08f-ca5e3e3d8399"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:48:12 crc kubenswrapper[4754]: I0105 21:48:12.865288 4754 scope.go:117] "RemoveContainer" containerID="32504a24d54805eabcdc108bec0eb5e6fba19c29fa0c2c3d20844fa452168fea" Jan 05 21:48:12 crc kubenswrapper[4754]: I0105 21:48:12.869046 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/592a0876-abeb-4b35-a08f-ca5e3e3d8399-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 21:48:12 crc kubenswrapper[4754]: I0105 21:48:12.869089 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqhvh\" (UniqueName: \"kubernetes.io/projected/592a0876-abeb-4b35-a08f-ca5e3e3d8399-kube-api-access-vqhvh\") on node \"crc\" DevicePath \"\"" Jan 05 21:48:12 crc kubenswrapper[4754]: I0105 21:48:12.869103 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/592a0876-abeb-4b35-a08f-ca5e3e3d8399-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 21:48:12 crc kubenswrapper[4754]: I0105 21:48:12.892428 4754 scope.go:117] "RemoveContainer" containerID="d23daea0c2651beb0a7ab7ec779e8fa3cd22e73442b35806aa1d798ed05157ad" Jan 05 21:48:12 crc kubenswrapper[4754]: I0105 21:48:12.947480 4754 scope.go:117] "RemoveContainer" containerID="ba523f140ea2f0ec8795223a281aed6f327b009ffe938d93839e426d51d06ffa" Jan 05 21:48:12 crc kubenswrapper[4754]: E0105 21:48:12.948066 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"ba523f140ea2f0ec8795223a281aed6f327b009ffe938d93839e426d51d06ffa\": container with ID starting with ba523f140ea2f0ec8795223a281aed6f327b009ffe938d93839e426d51d06ffa not found: ID does not exist" containerID="ba523f140ea2f0ec8795223a281aed6f327b009ffe938d93839e426d51d06ffa" Jan 05 21:48:12 crc kubenswrapper[4754]: I0105 21:48:12.948113 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba523f140ea2f0ec8795223a281aed6f327b009ffe938d93839e426d51d06ffa"} err="failed to get container status \"ba523f140ea2f0ec8795223a281aed6f327b009ffe938d93839e426d51d06ffa\": rpc error: code = NotFound desc = could not find container \"ba523f140ea2f0ec8795223a281aed6f327b009ffe938d93839e426d51d06ffa\": container with ID starting with ba523f140ea2f0ec8795223a281aed6f327b009ffe938d93839e426d51d06ffa not found: ID does not exist" Jan 05 21:48:12 crc kubenswrapper[4754]: I0105 21:48:12.948140 4754 scope.go:117] "RemoveContainer" containerID="32504a24d54805eabcdc108bec0eb5e6fba19c29fa0c2c3d20844fa452168fea" Jan 05 21:48:12 crc kubenswrapper[4754]: E0105 21:48:12.948816 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32504a24d54805eabcdc108bec0eb5e6fba19c29fa0c2c3d20844fa452168fea\": container with ID starting with 32504a24d54805eabcdc108bec0eb5e6fba19c29fa0c2c3d20844fa452168fea not found: ID does not exist" containerID="32504a24d54805eabcdc108bec0eb5e6fba19c29fa0c2c3d20844fa452168fea" Jan 05 21:48:12 crc kubenswrapper[4754]: I0105 21:48:12.948866 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32504a24d54805eabcdc108bec0eb5e6fba19c29fa0c2c3d20844fa452168fea"} err="failed to get container status \"32504a24d54805eabcdc108bec0eb5e6fba19c29fa0c2c3d20844fa452168fea\": rpc error: code = NotFound desc = could not find container 
\"32504a24d54805eabcdc108bec0eb5e6fba19c29fa0c2c3d20844fa452168fea\": container with ID starting with 32504a24d54805eabcdc108bec0eb5e6fba19c29fa0c2c3d20844fa452168fea not found: ID does not exist" Jan 05 21:48:12 crc kubenswrapper[4754]: I0105 21:48:12.948901 4754 scope.go:117] "RemoveContainer" containerID="d23daea0c2651beb0a7ab7ec779e8fa3cd22e73442b35806aa1d798ed05157ad" Jan 05 21:48:12 crc kubenswrapper[4754]: E0105 21:48:12.949306 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d23daea0c2651beb0a7ab7ec779e8fa3cd22e73442b35806aa1d798ed05157ad\": container with ID starting with d23daea0c2651beb0a7ab7ec779e8fa3cd22e73442b35806aa1d798ed05157ad not found: ID does not exist" containerID="d23daea0c2651beb0a7ab7ec779e8fa3cd22e73442b35806aa1d798ed05157ad" Jan 05 21:48:12 crc kubenswrapper[4754]: I0105 21:48:12.949331 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d23daea0c2651beb0a7ab7ec779e8fa3cd22e73442b35806aa1d798ed05157ad"} err="failed to get container status \"d23daea0c2651beb0a7ab7ec779e8fa3cd22e73442b35806aa1d798ed05157ad\": rpc error: code = NotFound desc = could not find container \"d23daea0c2651beb0a7ab7ec779e8fa3cd22e73442b35806aa1d798ed05157ad\": container with ID starting with d23daea0c2651beb0a7ab7ec779e8fa3cd22e73442b35806aa1d798ed05157ad not found: ID does not exist" Jan 05 21:48:13 crc kubenswrapper[4754]: I0105 21:48:13.159974 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8jf2j"] Jan 05 21:48:13 crc kubenswrapper[4754]: I0105 21:48:13.169607 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8jf2j"] Jan 05 21:48:13 crc kubenswrapper[4754]: I0105 21:48:13.616230 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="592a0876-abeb-4b35-a08f-ca5e3e3d8399" 
path="/var/lib/kubelet/pods/592a0876-abeb-4b35-a08f-ca5e3e3d8399/volumes" Jan 05 21:48:42 crc kubenswrapper[4754]: I0105 21:48:42.147772 4754 generic.go:334] "Generic (PLEG): container finished" podID="bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee" containerID="7e2e5bc60b493750348ec1645e185ddcdae265a2a4ec9bef78123d591aba0048" exitCode=1 Jan 05 21:48:42 crc kubenswrapper[4754]: I0105 21:48:42.147866 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee","Type":"ContainerDied","Data":"7e2e5bc60b493750348ec1645e185ddcdae265a2a4ec9bef78123d591aba0048"} Jan 05 21:48:43 crc kubenswrapper[4754]: I0105 21:48:43.598918 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 05 21:48:43 crc kubenswrapper[4754]: I0105 21:48:43.706438 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee-test-operator-ephemeral-temporary\") pod \"bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee\" (UID: \"bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee\") " Jan 05 21:48:43 crc kubenswrapper[4754]: I0105 21:48:43.706563 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee\" (UID: \"bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee\") " Jan 05 21:48:43 crc kubenswrapper[4754]: I0105 21:48:43.706604 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee-ca-certs\") pod \"bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee\" (UID: \"bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee\") " Jan 05 21:48:43 crc kubenswrapper[4754]: I0105 21:48:43.706647 4754 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee-openstack-config-secret\") pod \"bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee\" (UID: \"bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee\") " Jan 05 21:48:43 crc kubenswrapper[4754]: I0105 21:48:43.706750 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee-ssh-key\") pod \"bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee\" (UID: \"bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee\") " Jan 05 21:48:43 crc kubenswrapper[4754]: I0105 21:48:43.706808 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee-test-operator-ephemeral-workdir\") pod \"bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee\" (UID: \"bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee\") " Jan 05 21:48:43 crc kubenswrapper[4754]: I0105 21:48:43.706882 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee-config-data\") pod \"bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee\" (UID: \"bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee\") " Jan 05 21:48:43 crc kubenswrapper[4754]: I0105 21:48:43.706930 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lmq8\" (UniqueName: \"kubernetes.io/projected/bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee-kube-api-access-8lmq8\") pod \"bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee\" (UID: \"bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee\") " Jan 05 21:48:43 crc kubenswrapper[4754]: I0105 21:48:43.706968 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee-openstack-config\") pod 
\"bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee\" (UID: \"bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee\") " Jan 05 21:48:43 crc kubenswrapper[4754]: I0105 21:48:43.708908 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee" (UID: "bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:48:43 crc kubenswrapper[4754]: I0105 21:48:43.710243 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee-config-data" (OuterVolumeSpecName: "config-data") pod "bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee" (UID: "bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:48:43 crc kubenswrapper[4754]: I0105 21:48:43.717772 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee" (UID: "bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:48:43 crc kubenswrapper[4754]: I0105 21:48:43.717988 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "test-operator-logs") pod "bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee" (UID: "bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee"). InnerVolumeSpecName "local-storage02-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 05 21:48:43 crc kubenswrapper[4754]: I0105 21:48:43.741370 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee-kube-api-access-8lmq8" (OuterVolumeSpecName: "kube-api-access-8lmq8") pod "bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee" (UID: "bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee"). InnerVolumeSpecName "kube-api-access-8lmq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:48:43 crc kubenswrapper[4754]: I0105 21:48:43.746744 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee" (UID: "bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:48:43 crc kubenswrapper[4754]: I0105 21:48:43.750279 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee" (UID: "bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:48:43 crc kubenswrapper[4754]: I0105 21:48:43.758212 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee" (UID: "bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:48:43 crc kubenswrapper[4754]: I0105 21:48:43.782986 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee" (UID: "bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:48:43 crc kubenswrapper[4754]: I0105 21:48:43.810151 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 21:48:43 crc kubenswrapper[4754]: I0105 21:48:43.810181 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lmq8\" (UniqueName: \"kubernetes.io/projected/bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee-kube-api-access-8lmq8\") on node \"crc\" DevicePath \"\"" Jan 05 21:48:43 crc kubenswrapper[4754]: I0105 21:48:43.810192 4754 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:48:43 crc kubenswrapper[4754]: I0105 21:48:43.810204 4754 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 05 21:48:43 crc kubenswrapper[4754]: I0105 21:48:43.810251 4754 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 05 21:48:43 crc kubenswrapper[4754]: I0105 21:48:43.810262 4754 reconciler_common.go:293] "Volume detached for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/secret/bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 05 21:48:43 crc kubenswrapper[4754]: I0105 21:48:43.810271 4754 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 05 21:48:43 crc kubenswrapper[4754]: I0105 21:48:43.810280 4754 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 05 21:48:43 crc kubenswrapper[4754]: I0105 21:48:43.810300 4754 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 05 21:48:43 crc kubenswrapper[4754]: I0105 21:48:43.832872 4754 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 05 21:48:43 crc kubenswrapper[4754]: I0105 21:48:43.912265 4754 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 05 21:48:44 crc kubenswrapper[4754]: I0105 21:48:44.174957 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee","Type":"ContainerDied","Data":"806b04bd235d28110ecfb905a95d6d82b7fa9edc859ac5ce22d0c353bbf03187"} Jan 05 21:48:44 crc kubenswrapper[4754]: I0105 21:48:44.175005 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="806b04bd235d28110ecfb905a95d6d82b7fa9edc859ac5ce22d0c353bbf03187" Jan 05 21:48:44 crc 
kubenswrapper[4754]: I0105 21:48:44.175071 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 05 21:48:48 crc kubenswrapper[4754]: I0105 21:48:48.006795 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 05 21:48:48 crc kubenswrapper[4754]: E0105 21:48:48.008067 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee" containerName="tempest-tests-tempest-tests-runner" Jan 05 21:48:48 crc kubenswrapper[4754]: I0105 21:48:48.008087 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee" containerName="tempest-tests-tempest-tests-runner" Jan 05 21:48:48 crc kubenswrapper[4754]: E0105 21:48:48.008121 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="592a0876-abeb-4b35-a08f-ca5e3e3d8399" containerName="extract-content" Jan 05 21:48:48 crc kubenswrapper[4754]: I0105 21:48:48.008133 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="592a0876-abeb-4b35-a08f-ca5e3e3d8399" containerName="extract-content" Jan 05 21:48:48 crc kubenswrapper[4754]: E0105 21:48:48.008165 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="592a0876-abeb-4b35-a08f-ca5e3e3d8399" containerName="registry-server" Jan 05 21:48:48 crc kubenswrapper[4754]: I0105 21:48:48.008174 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="592a0876-abeb-4b35-a08f-ca5e3e3d8399" containerName="registry-server" Jan 05 21:48:48 crc kubenswrapper[4754]: E0105 21:48:48.008190 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="592a0876-abeb-4b35-a08f-ca5e3e3d8399" containerName="extract-utilities" Jan 05 21:48:48 crc kubenswrapper[4754]: I0105 21:48:48.008198 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="592a0876-abeb-4b35-a08f-ca5e3e3d8399" containerName="extract-utilities" Jan 05 21:48:48 crc 
kubenswrapper[4754]: I0105 21:48:48.008541 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="592a0876-abeb-4b35-a08f-ca5e3e3d8399" containerName="registry-server" Jan 05 21:48:48 crc kubenswrapper[4754]: I0105 21:48:48.008571 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee" containerName="tempest-tests-tempest-tests-runner" Jan 05 21:48:48 crc kubenswrapper[4754]: I0105 21:48:48.009826 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 05 21:48:48 crc kubenswrapper[4754]: I0105 21:48:48.013234 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-fj5nz" Jan 05 21:48:48 crc kubenswrapper[4754]: I0105 21:48:48.019660 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 05 21:48:48 crc kubenswrapper[4754]: I0105 21:48:48.121226 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdz47\" (UniqueName: \"kubernetes.io/projected/ce1a328c-cae8-402d-87d5-266db4ce9869-kube-api-access-wdz47\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"ce1a328c-cae8-402d-87d5-266db4ce9869\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 05 21:48:48 crc kubenswrapper[4754]: I0105 21:48:48.121538 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"ce1a328c-cae8-402d-87d5-266db4ce9869\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 05 21:48:48 crc kubenswrapper[4754]: I0105 21:48:48.223650 4754 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"ce1a328c-cae8-402d-87d5-266db4ce9869\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 05 21:48:48 crc kubenswrapper[4754]: I0105 21:48:48.223792 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdz47\" (UniqueName: \"kubernetes.io/projected/ce1a328c-cae8-402d-87d5-266db4ce9869-kube-api-access-wdz47\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"ce1a328c-cae8-402d-87d5-266db4ce9869\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 05 21:48:48 crc kubenswrapper[4754]: I0105 21:48:48.224359 4754 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"ce1a328c-cae8-402d-87d5-266db4ce9869\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 05 21:48:48 crc kubenswrapper[4754]: I0105 21:48:48.264913 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdz47\" (UniqueName: \"kubernetes.io/projected/ce1a328c-cae8-402d-87d5-266db4ce9869-kube-api-access-wdz47\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"ce1a328c-cae8-402d-87d5-266db4ce9869\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 05 21:48:48 crc kubenswrapper[4754]: I0105 21:48:48.267045 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"ce1a328c-cae8-402d-87d5-266db4ce9869\") " 
pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 05 21:48:48 crc kubenswrapper[4754]: I0105 21:48:48.339058 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 05 21:48:48 crc kubenswrapper[4754]: I0105 21:48:48.840888 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 05 21:48:48 crc kubenswrapper[4754]: I0105 21:48:48.850336 4754 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 05 21:48:49 crc kubenswrapper[4754]: I0105 21:48:49.245929 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"ce1a328c-cae8-402d-87d5-266db4ce9869","Type":"ContainerStarted","Data":"990862c4b1e9a987077849cc6adf025eaa84efb8e0eac24b30ed4de04b8776c2"} Jan 05 21:48:50 crc kubenswrapper[4754]: I0105 21:48:50.260829 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"ce1a328c-cae8-402d-87d5-266db4ce9869","Type":"ContainerStarted","Data":"84d5ba49b0880c2eb05bcc9dd504cf109fae34adf07fff0aa34427e0e2d6ceb5"} Jan 05 21:48:50 crc kubenswrapper[4754]: I0105 21:48:50.282572 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.225520196 podStartE2EDuration="3.282547921s" podCreationTimestamp="2026-01-05 21:48:47 +0000 UTC" firstStartedPulling="2026-01-05 21:48:48.850138744 +0000 UTC m=+6215.559322618" lastFinishedPulling="2026-01-05 21:48:49.907166469 +0000 UTC m=+6216.616350343" observedRunningTime="2026-01-05 21:48:50.276922303 +0000 UTC m=+6216.986106247" watchObservedRunningTime="2026-01-05 21:48:50.282547921 +0000 UTC m=+6216.991731835" Jan 05 21:48:54 crc kubenswrapper[4754]: 
I0105 21:48:54.259897 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9rrrx"] Jan 05 21:48:54 crc kubenswrapper[4754]: I0105 21:48:54.263534 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9rrrx" Jan 05 21:48:54 crc kubenswrapper[4754]: I0105 21:48:54.300355 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9rrrx"] Jan 05 21:48:54 crc kubenswrapper[4754]: I0105 21:48:54.412054 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89gd8\" (UniqueName: \"kubernetes.io/projected/878fa022-aa87-401e-b630-f9abff1a92bc-kube-api-access-89gd8\") pod \"redhat-marketplace-9rrrx\" (UID: \"878fa022-aa87-401e-b630-f9abff1a92bc\") " pod="openshift-marketplace/redhat-marketplace-9rrrx" Jan 05 21:48:54 crc kubenswrapper[4754]: I0105 21:48:54.412512 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/878fa022-aa87-401e-b630-f9abff1a92bc-utilities\") pod \"redhat-marketplace-9rrrx\" (UID: \"878fa022-aa87-401e-b630-f9abff1a92bc\") " pod="openshift-marketplace/redhat-marketplace-9rrrx" Jan 05 21:48:54 crc kubenswrapper[4754]: I0105 21:48:54.412710 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/878fa022-aa87-401e-b630-f9abff1a92bc-catalog-content\") pod \"redhat-marketplace-9rrrx\" (UID: \"878fa022-aa87-401e-b630-f9abff1a92bc\") " pod="openshift-marketplace/redhat-marketplace-9rrrx" Jan 05 21:48:54 crc kubenswrapper[4754]: I0105 21:48:54.515521 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89gd8\" (UniqueName: 
\"kubernetes.io/projected/878fa022-aa87-401e-b630-f9abff1a92bc-kube-api-access-89gd8\") pod \"redhat-marketplace-9rrrx\" (UID: \"878fa022-aa87-401e-b630-f9abff1a92bc\") " pod="openshift-marketplace/redhat-marketplace-9rrrx" Jan 05 21:48:54 crc kubenswrapper[4754]: I0105 21:48:54.515938 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/878fa022-aa87-401e-b630-f9abff1a92bc-utilities\") pod \"redhat-marketplace-9rrrx\" (UID: \"878fa022-aa87-401e-b630-f9abff1a92bc\") " pod="openshift-marketplace/redhat-marketplace-9rrrx" Jan 05 21:48:54 crc kubenswrapper[4754]: I0105 21:48:54.516012 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/878fa022-aa87-401e-b630-f9abff1a92bc-catalog-content\") pod \"redhat-marketplace-9rrrx\" (UID: \"878fa022-aa87-401e-b630-f9abff1a92bc\") " pod="openshift-marketplace/redhat-marketplace-9rrrx" Jan 05 21:48:54 crc kubenswrapper[4754]: I0105 21:48:54.516517 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/878fa022-aa87-401e-b630-f9abff1a92bc-utilities\") pod \"redhat-marketplace-9rrrx\" (UID: \"878fa022-aa87-401e-b630-f9abff1a92bc\") " pod="openshift-marketplace/redhat-marketplace-9rrrx" Jan 05 21:48:54 crc kubenswrapper[4754]: I0105 21:48:54.516554 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/878fa022-aa87-401e-b630-f9abff1a92bc-catalog-content\") pod \"redhat-marketplace-9rrrx\" (UID: \"878fa022-aa87-401e-b630-f9abff1a92bc\") " pod="openshift-marketplace/redhat-marketplace-9rrrx" Jan 05 21:48:54 crc kubenswrapper[4754]: I0105 21:48:54.537003 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89gd8\" (UniqueName: 
\"kubernetes.io/projected/878fa022-aa87-401e-b630-f9abff1a92bc-kube-api-access-89gd8\") pod \"redhat-marketplace-9rrrx\" (UID: \"878fa022-aa87-401e-b630-f9abff1a92bc\") " pod="openshift-marketplace/redhat-marketplace-9rrrx" Jan 05 21:48:54 crc kubenswrapper[4754]: I0105 21:48:54.596852 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9rrrx" Jan 05 21:48:55 crc kubenswrapper[4754]: I0105 21:48:55.105957 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9rrrx"] Jan 05 21:48:55 crc kubenswrapper[4754]: I0105 21:48:55.348925 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9rrrx" event={"ID":"878fa022-aa87-401e-b630-f9abff1a92bc","Type":"ContainerStarted","Data":"bc291529efbb9669adbe748fdc8b5a2a179c67d1d918aa6b20d3f9976c6459f9"} Jan 05 21:48:55 crc kubenswrapper[4754]: I0105 21:48:55.348972 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9rrrx" event={"ID":"878fa022-aa87-401e-b630-f9abff1a92bc","Type":"ContainerStarted","Data":"b73732e070b8227e1dd2dcd5dcb8c136bcdca43c5b481d3db61d5bf7ca09d31f"} Jan 05 21:48:56 crc kubenswrapper[4754]: I0105 21:48:56.362845 4754 generic.go:334] "Generic (PLEG): container finished" podID="878fa022-aa87-401e-b630-f9abff1a92bc" containerID="bc291529efbb9669adbe748fdc8b5a2a179c67d1d918aa6b20d3f9976c6459f9" exitCode=0 Jan 05 21:48:56 crc kubenswrapper[4754]: I0105 21:48:56.362950 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9rrrx" event={"ID":"878fa022-aa87-401e-b630-f9abff1a92bc","Type":"ContainerDied","Data":"bc291529efbb9669adbe748fdc8b5a2a179c67d1d918aa6b20d3f9976c6459f9"} Jan 05 21:48:57 crc kubenswrapper[4754]: I0105 21:48:57.379097 4754 generic.go:334] "Generic (PLEG): container finished" podID="878fa022-aa87-401e-b630-f9abff1a92bc" 
containerID="cd9534f752bbbae07d725b6747115d7b37d833dd4f4e99c23cf87b907153533c" exitCode=0 Jan 05 21:48:57 crc kubenswrapper[4754]: I0105 21:48:57.379153 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9rrrx" event={"ID":"878fa022-aa87-401e-b630-f9abff1a92bc","Type":"ContainerDied","Data":"cd9534f752bbbae07d725b6747115d7b37d833dd4f4e99c23cf87b907153533c"} Jan 05 21:48:58 crc kubenswrapper[4754]: I0105 21:48:58.399276 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9rrrx" event={"ID":"878fa022-aa87-401e-b630-f9abff1a92bc","Type":"ContainerStarted","Data":"448d5ff6505169dd167207443719d4d8049fe080b056d3269d41305bfef08b31"} Jan 05 21:48:58 crc kubenswrapper[4754]: I0105 21:48:58.431572 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9rrrx" podStartSLOduration=1.8442456329999999 podStartE2EDuration="4.431548865s" podCreationTimestamp="2026-01-05 21:48:54 +0000 UTC" firstStartedPulling="2026-01-05 21:48:55.35116996 +0000 UTC m=+6222.060353834" lastFinishedPulling="2026-01-05 21:48:57.938473192 +0000 UTC m=+6224.647657066" observedRunningTime="2026-01-05 21:48:58.419078697 +0000 UTC m=+6225.128262581" watchObservedRunningTime="2026-01-05 21:48:58.431548865 +0000 UTC m=+6225.140732749" Jan 05 21:49:04 crc kubenswrapper[4754]: I0105 21:49:04.597701 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9rrrx" Jan 05 21:49:04 crc kubenswrapper[4754]: I0105 21:49:04.598278 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9rrrx" Jan 05 21:49:04 crc kubenswrapper[4754]: I0105 21:49:04.664955 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9rrrx" Jan 05 21:49:05 crc kubenswrapper[4754]: I0105 
21:49:05.539169 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9rrrx" Jan 05 21:49:05 crc kubenswrapper[4754]: I0105 21:49:05.619020 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9rrrx"] Jan 05 21:49:07 crc kubenswrapper[4754]: I0105 21:49:07.506545 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9rrrx" podUID="878fa022-aa87-401e-b630-f9abff1a92bc" containerName="registry-server" containerID="cri-o://448d5ff6505169dd167207443719d4d8049fe080b056d3269d41305bfef08b31" gracePeriod=2 Jan 05 21:49:08 crc kubenswrapper[4754]: I0105 21:49:08.520458 4754 generic.go:334] "Generic (PLEG): container finished" podID="878fa022-aa87-401e-b630-f9abff1a92bc" containerID="448d5ff6505169dd167207443719d4d8049fe080b056d3269d41305bfef08b31" exitCode=0 Jan 05 21:49:08 crc kubenswrapper[4754]: I0105 21:49:08.520541 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9rrrx" event={"ID":"878fa022-aa87-401e-b630-f9abff1a92bc","Type":"ContainerDied","Data":"448d5ff6505169dd167207443719d4d8049fe080b056d3269d41305bfef08b31"} Jan 05 21:49:08 crc kubenswrapper[4754]: I0105 21:49:08.520828 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9rrrx" event={"ID":"878fa022-aa87-401e-b630-f9abff1a92bc","Type":"ContainerDied","Data":"b73732e070b8227e1dd2dcd5dcb8c136bcdca43c5b481d3db61d5bf7ca09d31f"} Jan 05 21:49:08 crc kubenswrapper[4754]: I0105 21:49:08.520845 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b73732e070b8227e1dd2dcd5dcb8c136bcdca43c5b481d3db61d5bf7ca09d31f" Jan 05 21:49:08 crc kubenswrapper[4754]: I0105 21:49:08.546671 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9rrrx" Jan 05 21:49:08 crc kubenswrapper[4754]: I0105 21:49:08.690035 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/878fa022-aa87-401e-b630-f9abff1a92bc-catalog-content\") pod \"878fa022-aa87-401e-b630-f9abff1a92bc\" (UID: \"878fa022-aa87-401e-b630-f9abff1a92bc\") " Jan 05 21:49:08 crc kubenswrapper[4754]: I0105 21:49:08.690327 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/878fa022-aa87-401e-b630-f9abff1a92bc-utilities\") pod \"878fa022-aa87-401e-b630-f9abff1a92bc\" (UID: \"878fa022-aa87-401e-b630-f9abff1a92bc\") " Jan 05 21:49:08 crc kubenswrapper[4754]: I0105 21:49:08.690402 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89gd8\" (UniqueName: \"kubernetes.io/projected/878fa022-aa87-401e-b630-f9abff1a92bc-kube-api-access-89gd8\") pod \"878fa022-aa87-401e-b630-f9abff1a92bc\" (UID: \"878fa022-aa87-401e-b630-f9abff1a92bc\") " Jan 05 21:49:08 crc kubenswrapper[4754]: I0105 21:49:08.691552 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/878fa022-aa87-401e-b630-f9abff1a92bc-utilities" (OuterVolumeSpecName: "utilities") pod "878fa022-aa87-401e-b630-f9abff1a92bc" (UID: "878fa022-aa87-401e-b630-f9abff1a92bc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:49:08 crc kubenswrapper[4754]: I0105 21:49:08.697395 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/878fa022-aa87-401e-b630-f9abff1a92bc-kube-api-access-89gd8" (OuterVolumeSpecName: "kube-api-access-89gd8") pod "878fa022-aa87-401e-b630-f9abff1a92bc" (UID: "878fa022-aa87-401e-b630-f9abff1a92bc"). InnerVolumeSpecName "kube-api-access-89gd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:49:08 crc kubenswrapper[4754]: I0105 21:49:08.722651 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/878fa022-aa87-401e-b630-f9abff1a92bc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "878fa022-aa87-401e-b630-f9abff1a92bc" (UID: "878fa022-aa87-401e-b630-f9abff1a92bc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:49:08 crc kubenswrapper[4754]: I0105 21:49:08.793057 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/878fa022-aa87-401e-b630-f9abff1a92bc-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 21:49:08 crc kubenswrapper[4754]: I0105 21:49:08.793374 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89gd8\" (UniqueName: \"kubernetes.io/projected/878fa022-aa87-401e-b630-f9abff1a92bc-kube-api-access-89gd8\") on node \"crc\" DevicePath \"\"" Jan 05 21:49:08 crc kubenswrapper[4754]: I0105 21:49:08.793400 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/878fa022-aa87-401e-b630-f9abff1a92bc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 21:49:09 crc kubenswrapper[4754]: I0105 21:49:09.529016 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9rrrx" Jan 05 21:49:09 crc kubenswrapper[4754]: I0105 21:49:09.564772 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9rrrx"] Jan 05 21:49:09 crc kubenswrapper[4754]: I0105 21:49:09.576972 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9rrrx"] Jan 05 21:49:09 crc kubenswrapper[4754]: I0105 21:49:09.601334 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="878fa022-aa87-401e-b630-f9abff1a92bc" path="/var/lib/kubelet/pods/878fa022-aa87-401e-b630-f9abff1a92bc/volumes" Jan 05 21:49:32 crc kubenswrapper[4754]: I0105 21:49:32.227742 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-zklnc/must-gather-zbf9q"] Jan 05 21:49:32 crc kubenswrapper[4754]: E0105 21:49:32.231503 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="878fa022-aa87-401e-b630-f9abff1a92bc" containerName="registry-server" Jan 05 21:49:32 crc kubenswrapper[4754]: I0105 21:49:32.231527 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="878fa022-aa87-401e-b630-f9abff1a92bc" containerName="registry-server" Jan 05 21:49:32 crc kubenswrapper[4754]: E0105 21:49:32.231559 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="878fa022-aa87-401e-b630-f9abff1a92bc" containerName="extract-utilities" Jan 05 21:49:32 crc kubenswrapper[4754]: I0105 21:49:32.231566 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="878fa022-aa87-401e-b630-f9abff1a92bc" containerName="extract-utilities" Jan 05 21:49:32 crc kubenswrapper[4754]: E0105 21:49:32.231620 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="878fa022-aa87-401e-b630-f9abff1a92bc" containerName="extract-content" Jan 05 21:49:32 crc kubenswrapper[4754]: I0105 21:49:32.231627 4754 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="878fa022-aa87-401e-b630-f9abff1a92bc" containerName="extract-content" Jan 05 21:49:32 crc kubenswrapper[4754]: I0105 21:49:32.231839 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="878fa022-aa87-401e-b630-f9abff1a92bc" containerName="registry-server" Jan 05 21:49:32 crc kubenswrapper[4754]: I0105 21:49:32.233004 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zklnc/must-gather-zbf9q" Jan 05 21:49:32 crc kubenswrapper[4754]: I0105 21:49:32.235807 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-zklnc"/"openshift-service-ca.crt" Jan 05 21:49:32 crc kubenswrapper[4754]: I0105 21:49:32.235904 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-zklnc"/"kube-root-ca.crt" Jan 05 21:49:32 crc kubenswrapper[4754]: I0105 21:49:32.236372 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-zklnc"/"default-dockercfg-8ztdx" Jan 05 21:49:32 crc kubenswrapper[4754]: I0105 21:49:32.242089 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-zklnc/must-gather-zbf9q"] Jan 05 21:49:32 crc kubenswrapper[4754]: I0105 21:49:32.289580 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a10baf0b-ad50-44a7-8a8c-f5efddadb26e-must-gather-output\") pod \"must-gather-zbf9q\" (UID: \"a10baf0b-ad50-44a7-8a8c-f5efddadb26e\") " pod="openshift-must-gather-zklnc/must-gather-zbf9q" Jan 05 21:49:32 crc kubenswrapper[4754]: I0105 21:49:32.289659 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bskbt\" (UniqueName: \"kubernetes.io/projected/a10baf0b-ad50-44a7-8a8c-f5efddadb26e-kube-api-access-bskbt\") pod \"must-gather-zbf9q\" (UID: \"a10baf0b-ad50-44a7-8a8c-f5efddadb26e\") " 
pod="openshift-must-gather-zklnc/must-gather-zbf9q" Jan 05 21:49:32 crc kubenswrapper[4754]: I0105 21:49:32.392388 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a10baf0b-ad50-44a7-8a8c-f5efddadb26e-must-gather-output\") pod \"must-gather-zbf9q\" (UID: \"a10baf0b-ad50-44a7-8a8c-f5efddadb26e\") " pod="openshift-must-gather-zklnc/must-gather-zbf9q" Jan 05 21:49:32 crc kubenswrapper[4754]: I0105 21:49:32.392511 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bskbt\" (UniqueName: \"kubernetes.io/projected/a10baf0b-ad50-44a7-8a8c-f5efddadb26e-kube-api-access-bskbt\") pod \"must-gather-zbf9q\" (UID: \"a10baf0b-ad50-44a7-8a8c-f5efddadb26e\") " pod="openshift-must-gather-zklnc/must-gather-zbf9q" Jan 05 21:49:32 crc kubenswrapper[4754]: I0105 21:49:32.393517 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a10baf0b-ad50-44a7-8a8c-f5efddadb26e-must-gather-output\") pod \"must-gather-zbf9q\" (UID: \"a10baf0b-ad50-44a7-8a8c-f5efddadb26e\") " pod="openshift-must-gather-zklnc/must-gather-zbf9q" Jan 05 21:49:32 crc kubenswrapper[4754]: I0105 21:49:32.414651 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bskbt\" (UniqueName: \"kubernetes.io/projected/a10baf0b-ad50-44a7-8a8c-f5efddadb26e-kube-api-access-bskbt\") pod \"must-gather-zbf9q\" (UID: \"a10baf0b-ad50-44a7-8a8c-f5efddadb26e\") " pod="openshift-must-gather-zklnc/must-gather-zbf9q" Jan 05 21:49:32 crc kubenswrapper[4754]: I0105 21:49:32.551996 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zklnc/must-gather-zbf9q" Jan 05 21:49:33 crc kubenswrapper[4754]: I0105 21:49:33.033179 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-zklnc/must-gather-zbf9q"] Jan 05 21:49:33 crc kubenswrapper[4754]: I0105 21:49:33.835706 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zklnc/must-gather-zbf9q" event={"ID":"a10baf0b-ad50-44a7-8a8c-f5efddadb26e","Type":"ContainerStarted","Data":"604ce6084b3f3ad34a909b1566e5de59245dae77318ca251508d24abe0628639"} Jan 05 21:49:40 crc kubenswrapper[4754]: I0105 21:49:40.924063 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zklnc/must-gather-zbf9q" event={"ID":"a10baf0b-ad50-44a7-8a8c-f5efddadb26e","Type":"ContainerStarted","Data":"2df2485c9bff4641a1fbcc87a1a7ef597ac677cc776f7d85120f7e56deb9c721"} Jan 05 21:49:40 crc kubenswrapper[4754]: I0105 21:49:40.924674 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zklnc/must-gather-zbf9q" event={"ID":"a10baf0b-ad50-44a7-8a8c-f5efddadb26e","Type":"ContainerStarted","Data":"a6f68353b6e39b3e4801268b600ec0dad7272a7aab020f4682fa939e97fe76f9"} Jan 05 21:49:44 crc kubenswrapper[4754]: I0105 21:49:44.798672 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-zklnc/must-gather-zbf9q" podStartSLOduration=5.796762794 podStartE2EDuration="12.798651328s" podCreationTimestamp="2026-01-05 21:49:32 +0000 UTC" firstStartedPulling="2026-01-05 21:49:33.034040116 +0000 UTC m=+6259.743223990" lastFinishedPulling="2026-01-05 21:49:40.03592865 +0000 UTC m=+6266.745112524" observedRunningTime="2026-01-05 21:49:40.950087671 +0000 UTC m=+6267.659271545" watchObservedRunningTime="2026-01-05 21:49:44.798651328 +0000 UTC m=+6271.507835202" Jan 05 21:49:44 crc kubenswrapper[4754]: I0105 21:49:44.812742 4754 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-must-gather-zklnc/crc-debug-69hkt"] Jan 05 21:49:44 crc kubenswrapper[4754]: I0105 21:49:44.814615 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zklnc/crc-debug-69hkt" Jan 05 21:49:44 crc kubenswrapper[4754]: I0105 21:49:44.902384 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmfn4\" (UniqueName: \"kubernetes.io/projected/bb9dbe64-a0bb-4f6f-9fbb-6eec9a25c17e-kube-api-access-tmfn4\") pod \"crc-debug-69hkt\" (UID: \"bb9dbe64-a0bb-4f6f-9fbb-6eec9a25c17e\") " pod="openshift-must-gather-zklnc/crc-debug-69hkt" Jan 05 21:49:44 crc kubenswrapper[4754]: I0105 21:49:44.902454 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bb9dbe64-a0bb-4f6f-9fbb-6eec9a25c17e-host\") pod \"crc-debug-69hkt\" (UID: \"bb9dbe64-a0bb-4f6f-9fbb-6eec9a25c17e\") " pod="openshift-must-gather-zklnc/crc-debug-69hkt" Jan 05 21:49:45 crc kubenswrapper[4754]: I0105 21:49:45.004776 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmfn4\" (UniqueName: \"kubernetes.io/projected/bb9dbe64-a0bb-4f6f-9fbb-6eec9a25c17e-kube-api-access-tmfn4\") pod \"crc-debug-69hkt\" (UID: \"bb9dbe64-a0bb-4f6f-9fbb-6eec9a25c17e\") " pod="openshift-must-gather-zklnc/crc-debug-69hkt" Jan 05 21:49:45 crc kubenswrapper[4754]: I0105 21:49:45.005135 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bb9dbe64-a0bb-4f6f-9fbb-6eec9a25c17e-host\") pod \"crc-debug-69hkt\" (UID: \"bb9dbe64-a0bb-4f6f-9fbb-6eec9a25c17e\") " pod="openshift-must-gather-zklnc/crc-debug-69hkt" Jan 05 21:49:45 crc kubenswrapper[4754]: I0105 21:49:45.005496 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/bb9dbe64-a0bb-4f6f-9fbb-6eec9a25c17e-host\") pod \"crc-debug-69hkt\" (UID: \"bb9dbe64-a0bb-4f6f-9fbb-6eec9a25c17e\") " pod="openshift-must-gather-zklnc/crc-debug-69hkt" Jan 05 21:49:45 crc kubenswrapper[4754]: I0105 21:49:45.031368 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmfn4\" (UniqueName: \"kubernetes.io/projected/bb9dbe64-a0bb-4f6f-9fbb-6eec9a25c17e-kube-api-access-tmfn4\") pod \"crc-debug-69hkt\" (UID: \"bb9dbe64-a0bb-4f6f-9fbb-6eec9a25c17e\") " pod="openshift-must-gather-zklnc/crc-debug-69hkt" Jan 05 21:49:45 crc kubenswrapper[4754]: I0105 21:49:45.137261 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zklnc/crc-debug-69hkt" Jan 05 21:49:45 crc kubenswrapper[4754]: W0105 21:49:45.179450 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb9dbe64_a0bb_4f6f_9fbb_6eec9a25c17e.slice/crio-47555b355b319b08d93f68ba8574a01cb2f8780244409bc26984b3341290a2a5 WatchSource:0}: Error finding container 47555b355b319b08d93f68ba8574a01cb2f8780244409bc26984b3341290a2a5: Status 404 returned error can't find the container with id 47555b355b319b08d93f68ba8574a01cb2f8780244409bc26984b3341290a2a5 Jan 05 21:49:45 crc kubenswrapper[4754]: I0105 21:49:45.993109 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zklnc/crc-debug-69hkt" event={"ID":"bb9dbe64-a0bb-4f6f-9fbb-6eec9a25c17e","Type":"ContainerStarted","Data":"47555b355b319b08d93f68ba8574a01cb2f8780244409bc26984b3341290a2a5"} Jan 05 21:49:48 crc kubenswrapper[4754]: I0105 21:49:48.108767 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 
21:49:48 crc kubenswrapper[4754]: I0105 21:49:48.109023 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:49:57 crc kubenswrapper[4754]: I0105 21:49:57.116716 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zklnc/crc-debug-69hkt" event={"ID":"bb9dbe64-a0bb-4f6f-9fbb-6eec9a25c17e","Type":"ContainerStarted","Data":"87adc1c0e683eb91842260f1c3c4aa6a313498401408b8305b6ecdba5106295f"} Jan 05 21:49:57 crc kubenswrapper[4754]: I0105 21:49:57.136545 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-zklnc/crc-debug-69hkt" podStartSLOduration=1.695104642 podStartE2EDuration="13.136526927s" podCreationTimestamp="2026-01-05 21:49:44 +0000 UTC" firstStartedPulling="2026-01-05 21:49:45.182257978 +0000 UTC m=+6271.891441852" lastFinishedPulling="2026-01-05 21:49:56.623680263 +0000 UTC m=+6283.332864137" observedRunningTime="2026-01-05 21:49:57.128789803 +0000 UTC m=+6283.837973677" watchObservedRunningTime="2026-01-05 21:49:57.136526927 +0000 UTC m=+6283.845710801" Jan 05 21:50:18 crc kubenswrapper[4754]: I0105 21:50:18.109756 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:50:18 crc kubenswrapper[4754]: I0105 21:50:18.110315 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:50:46 crc kubenswrapper[4754]: I0105 21:50:46.724922 4754 generic.go:334] "Generic (PLEG): container finished" podID="bb9dbe64-a0bb-4f6f-9fbb-6eec9a25c17e" containerID="87adc1c0e683eb91842260f1c3c4aa6a313498401408b8305b6ecdba5106295f" exitCode=0 Jan 05 21:50:46 crc kubenswrapper[4754]: I0105 21:50:46.725007 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zklnc/crc-debug-69hkt" event={"ID":"bb9dbe64-a0bb-4f6f-9fbb-6eec9a25c17e","Type":"ContainerDied","Data":"87adc1c0e683eb91842260f1c3c4aa6a313498401408b8305b6ecdba5106295f"} Jan 05 21:50:47 crc kubenswrapper[4754]: I0105 21:50:47.859986 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zklnc/crc-debug-69hkt" Jan 05 21:50:47 crc kubenswrapper[4754]: I0105 21:50:47.934689 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-zklnc/crc-debug-69hkt"] Jan 05 21:50:47 crc kubenswrapper[4754]: I0105 21:50:47.949307 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-zklnc/crc-debug-69hkt"] Jan 05 21:50:47 crc kubenswrapper[4754]: I0105 21:50:47.980432 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bb9dbe64-a0bb-4f6f-9fbb-6eec9a25c17e-host\") pod \"bb9dbe64-a0bb-4f6f-9fbb-6eec9a25c17e\" (UID: \"bb9dbe64-a0bb-4f6f-9fbb-6eec9a25c17e\") " Jan 05 21:50:47 crc kubenswrapper[4754]: I0105 21:50:47.980715 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmfn4\" (UniqueName: \"kubernetes.io/projected/bb9dbe64-a0bb-4f6f-9fbb-6eec9a25c17e-kube-api-access-tmfn4\") pod \"bb9dbe64-a0bb-4f6f-9fbb-6eec9a25c17e\" (UID: \"bb9dbe64-a0bb-4f6f-9fbb-6eec9a25c17e\") " Jan 05 21:50:47 crc kubenswrapper[4754]: I0105 21:50:47.981802 4754 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb9dbe64-a0bb-4f6f-9fbb-6eec9a25c17e-host" (OuterVolumeSpecName: "host") pod "bb9dbe64-a0bb-4f6f-9fbb-6eec9a25c17e" (UID: "bb9dbe64-a0bb-4f6f-9fbb-6eec9a25c17e"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 21:50:47 crc kubenswrapper[4754]: I0105 21:50:47.993133 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb9dbe64-a0bb-4f6f-9fbb-6eec9a25c17e-kube-api-access-tmfn4" (OuterVolumeSpecName: "kube-api-access-tmfn4") pod "bb9dbe64-a0bb-4f6f-9fbb-6eec9a25c17e" (UID: "bb9dbe64-a0bb-4f6f-9fbb-6eec9a25c17e"). InnerVolumeSpecName "kube-api-access-tmfn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:50:48 crc kubenswrapper[4754]: I0105 21:50:48.083019 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmfn4\" (UniqueName: \"kubernetes.io/projected/bb9dbe64-a0bb-4f6f-9fbb-6eec9a25c17e-kube-api-access-tmfn4\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:48 crc kubenswrapper[4754]: I0105 21:50:48.083060 4754 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bb9dbe64-a0bb-4f6f-9fbb-6eec9a25c17e-host\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:48 crc kubenswrapper[4754]: I0105 21:50:48.108839 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:50:48 crc kubenswrapper[4754]: I0105 21:50:48.108902 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:50:48 crc kubenswrapper[4754]: I0105 21:50:48.108949 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" Jan 05 21:50:48 crc kubenswrapper[4754]: I0105 21:50:48.109605 4754 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"169d4061b649bc99aa659a822f718adfbc994f64468c16f7e58148dd700ac0f6"} pod="openshift-machine-config-operator/machine-config-daemon-pkzls" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 05 21:50:48 crc kubenswrapper[4754]: I0105 21:50:48.109697 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" containerID="cri-o://169d4061b649bc99aa659a822f718adfbc994f64468c16f7e58148dd700ac0f6" gracePeriod=600 Jan 05 21:50:48 crc kubenswrapper[4754]: E0105 21:50:48.244626 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:50:48 crc kubenswrapper[4754]: E0105 21:50:48.253900 4754 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ce145f2_f010_4086_963c_23e68ff9e280.slice/crio-conmon-169d4061b649bc99aa659a822f718adfbc994f64468c16f7e58148dd700ac0f6.scope\": RecentStats: 
unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ce145f2_f010_4086_963c_23e68ff9e280.slice/crio-169d4061b649bc99aa659a822f718adfbc994f64468c16f7e58148dd700ac0f6.scope\": RecentStats: unable to find data in memory cache]" Jan 05 21:50:48 crc kubenswrapper[4754]: E0105 21:50:48.254038 4754 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ce145f2_f010_4086_963c_23e68ff9e280.slice/crio-169d4061b649bc99aa659a822f718adfbc994f64468c16f7e58148dd700ac0f6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ce145f2_f010_4086_963c_23e68ff9e280.slice/crio-conmon-169d4061b649bc99aa659a822f718adfbc994f64468c16f7e58148dd700ac0f6.scope\": RecentStats: unable to find data in memory cache]" Jan 05 21:50:48 crc kubenswrapper[4754]: I0105 21:50:48.755691 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47555b355b319b08d93f68ba8574a01cb2f8780244409bc26984b3341290a2a5" Jan 05 21:50:48 crc kubenswrapper[4754]: I0105 21:50:48.755712 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zklnc/crc-debug-69hkt" Jan 05 21:50:48 crc kubenswrapper[4754]: I0105 21:50:48.769089 4754 generic.go:334] "Generic (PLEG): container finished" podID="1ce145f2-f010-4086-963c-23e68ff9e280" containerID="169d4061b649bc99aa659a822f718adfbc994f64468c16f7e58148dd700ac0f6" exitCode=0 Jan 05 21:50:48 crc kubenswrapper[4754]: I0105 21:50:48.769188 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" event={"ID":"1ce145f2-f010-4086-963c-23e68ff9e280","Type":"ContainerDied","Data":"169d4061b649bc99aa659a822f718adfbc994f64468c16f7e58148dd700ac0f6"} Jan 05 21:50:48 crc kubenswrapper[4754]: I0105 21:50:48.769240 4754 scope.go:117] "RemoveContainer" containerID="1e96f75a75ab896798bd0ce34ed78c0e5c67565c086a5db1ff1ef9be65ab6d44" Jan 05 21:50:48 crc kubenswrapper[4754]: I0105 21:50:48.770138 4754 scope.go:117] "RemoveContainer" containerID="169d4061b649bc99aa659a822f718adfbc994f64468c16f7e58148dd700ac0f6" Jan 05 21:50:48 crc kubenswrapper[4754]: E0105 21:50:48.770526 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:50:49 crc kubenswrapper[4754]: I0105 21:50:49.213151 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-zklnc/crc-debug-lr5h6"] Jan 05 21:50:49 crc kubenswrapper[4754]: E0105 21:50:49.213802 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb9dbe64-a0bb-4f6f-9fbb-6eec9a25c17e" containerName="container-00" Jan 05 21:50:49 crc kubenswrapper[4754]: I0105 21:50:49.213818 4754 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="bb9dbe64-a0bb-4f6f-9fbb-6eec9a25c17e" containerName="container-00" Jan 05 21:50:49 crc kubenswrapper[4754]: I0105 21:50:49.214111 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb9dbe64-a0bb-4f6f-9fbb-6eec9a25c17e" containerName="container-00" Jan 05 21:50:49 crc kubenswrapper[4754]: I0105 21:50:49.215129 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zklnc/crc-debug-lr5h6" Jan 05 21:50:49 crc kubenswrapper[4754]: I0105 21:50:49.308219 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhqqb\" (UniqueName: \"kubernetes.io/projected/f47d9990-a0be-4ec8-99f6-25cf34d1c347-kube-api-access-nhqqb\") pod \"crc-debug-lr5h6\" (UID: \"f47d9990-a0be-4ec8-99f6-25cf34d1c347\") " pod="openshift-must-gather-zklnc/crc-debug-lr5h6" Jan 05 21:50:49 crc kubenswrapper[4754]: I0105 21:50:49.310596 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f47d9990-a0be-4ec8-99f6-25cf34d1c347-host\") pod \"crc-debug-lr5h6\" (UID: \"f47d9990-a0be-4ec8-99f6-25cf34d1c347\") " pod="openshift-must-gather-zklnc/crc-debug-lr5h6" Jan 05 21:50:49 crc kubenswrapper[4754]: I0105 21:50:49.413604 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f47d9990-a0be-4ec8-99f6-25cf34d1c347-host\") pod \"crc-debug-lr5h6\" (UID: \"f47d9990-a0be-4ec8-99f6-25cf34d1c347\") " pod="openshift-must-gather-zklnc/crc-debug-lr5h6" Jan 05 21:50:49 crc kubenswrapper[4754]: I0105 21:50:49.413778 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f47d9990-a0be-4ec8-99f6-25cf34d1c347-host\") pod \"crc-debug-lr5h6\" (UID: \"f47d9990-a0be-4ec8-99f6-25cf34d1c347\") " pod="openshift-must-gather-zklnc/crc-debug-lr5h6" Jan 05 21:50:49 crc 
kubenswrapper[4754]: I0105 21:50:49.414179 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhqqb\" (UniqueName: \"kubernetes.io/projected/f47d9990-a0be-4ec8-99f6-25cf34d1c347-kube-api-access-nhqqb\") pod \"crc-debug-lr5h6\" (UID: \"f47d9990-a0be-4ec8-99f6-25cf34d1c347\") " pod="openshift-must-gather-zklnc/crc-debug-lr5h6" Jan 05 21:50:49 crc kubenswrapper[4754]: I0105 21:50:49.433275 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhqqb\" (UniqueName: \"kubernetes.io/projected/f47d9990-a0be-4ec8-99f6-25cf34d1c347-kube-api-access-nhqqb\") pod \"crc-debug-lr5h6\" (UID: \"f47d9990-a0be-4ec8-99f6-25cf34d1c347\") " pod="openshift-must-gather-zklnc/crc-debug-lr5h6" Jan 05 21:50:49 crc kubenswrapper[4754]: I0105 21:50:49.531639 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zklnc/crc-debug-lr5h6" Jan 05 21:50:49 crc kubenswrapper[4754]: I0105 21:50:49.604078 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb9dbe64-a0bb-4f6f-9fbb-6eec9a25c17e" path="/var/lib/kubelet/pods/bb9dbe64-a0bb-4f6f-9fbb-6eec9a25c17e/volumes" Jan 05 21:50:49 crc kubenswrapper[4754]: I0105 21:50:49.783148 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zklnc/crc-debug-lr5h6" event={"ID":"f47d9990-a0be-4ec8-99f6-25cf34d1c347","Type":"ContainerStarted","Data":"1c0efd559f55153a0b8be3f46a47972aebdbd052923a7d229922a77364ef6547"} Jan 05 21:50:50 crc kubenswrapper[4754]: I0105 21:50:50.798970 4754 generic.go:334] "Generic (PLEG): container finished" podID="f47d9990-a0be-4ec8-99f6-25cf34d1c347" containerID="8b46fccf1c324559a3f26204cf769b6872c3c6dfe6edbab88b673b1d3d9c3eda" exitCode=0 Jan 05 21:50:50 crc kubenswrapper[4754]: I0105 21:50:50.799085 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zklnc/crc-debug-lr5h6" 
event={"ID":"f47d9990-a0be-4ec8-99f6-25cf34d1c347","Type":"ContainerDied","Data":"8b46fccf1c324559a3f26204cf769b6872c3c6dfe6edbab88b673b1d3d9c3eda"} Jan 05 21:50:51 crc kubenswrapper[4754]: I0105 21:50:51.939709 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zklnc/crc-debug-lr5h6" Jan 05 21:50:52 crc kubenswrapper[4754]: I0105 21:50:52.076454 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhqqb\" (UniqueName: \"kubernetes.io/projected/f47d9990-a0be-4ec8-99f6-25cf34d1c347-kube-api-access-nhqqb\") pod \"f47d9990-a0be-4ec8-99f6-25cf34d1c347\" (UID: \"f47d9990-a0be-4ec8-99f6-25cf34d1c347\") " Jan 05 21:50:52 crc kubenswrapper[4754]: I0105 21:50:52.076821 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f47d9990-a0be-4ec8-99f6-25cf34d1c347-host\") pod \"f47d9990-a0be-4ec8-99f6-25cf34d1c347\" (UID: \"f47d9990-a0be-4ec8-99f6-25cf34d1c347\") " Jan 05 21:50:52 crc kubenswrapper[4754]: I0105 21:50:52.076950 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f47d9990-a0be-4ec8-99f6-25cf34d1c347-host" (OuterVolumeSpecName: "host") pod "f47d9990-a0be-4ec8-99f6-25cf34d1c347" (UID: "f47d9990-a0be-4ec8-99f6-25cf34d1c347"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 21:50:52 crc kubenswrapper[4754]: I0105 21:50:52.077582 4754 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f47d9990-a0be-4ec8-99f6-25cf34d1c347-host\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:52 crc kubenswrapper[4754]: I0105 21:50:52.093608 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f47d9990-a0be-4ec8-99f6-25cf34d1c347-kube-api-access-nhqqb" (OuterVolumeSpecName: "kube-api-access-nhqqb") pod "f47d9990-a0be-4ec8-99f6-25cf34d1c347" (UID: "f47d9990-a0be-4ec8-99f6-25cf34d1c347"). InnerVolumeSpecName "kube-api-access-nhqqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:50:52 crc kubenswrapper[4754]: I0105 21:50:52.179606 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhqqb\" (UniqueName: \"kubernetes.io/projected/f47d9990-a0be-4ec8-99f6-25cf34d1c347-kube-api-access-nhqqb\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:52 crc kubenswrapper[4754]: I0105 21:50:52.824619 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zklnc/crc-debug-lr5h6" event={"ID":"f47d9990-a0be-4ec8-99f6-25cf34d1c347","Type":"ContainerDied","Data":"1c0efd559f55153a0b8be3f46a47972aebdbd052923a7d229922a77364ef6547"} Jan 05 21:50:52 crc kubenswrapper[4754]: I0105 21:50:52.824964 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c0efd559f55153a0b8be3f46a47972aebdbd052923a7d229922a77364ef6547" Jan 05 21:50:52 crc kubenswrapper[4754]: I0105 21:50:52.824672 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zklnc/crc-debug-lr5h6" Jan 05 21:50:53 crc kubenswrapper[4754]: I0105 21:50:53.200494 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-zklnc/crc-debug-lr5h6"] Jan 05 21:50:53 crc kubenswrapper[4754]: I0105 21:50:53.211313 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-zklnc/crc-debug-lr5h6"] Jan 05 21:50:53 crc kubenswrapper[4754]: I0105 21:50:53.619099 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f47d9990-a0be-4ec8-99f6-25cf34d1c347" path="/var/lib/kubelet/pods/f47d9990-a0be-4ec8-99f6-25cf34d1c347/volumes" Jan 05 21:50:54 crc kubenswrapper[4754]: I0105 21:50:54.383263 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-zklnc/crc-debug-t4tfb"] Jan 05 21:50:54 crc kubenswrapper[4754]: E0105 21:50:54.383952 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f47d9990-a0be-4ec8-99f6-25cf34d1c347" containerName="container-00" Jan 05 21:50:54 crc kubenswrapper[4754]: I0105 21:50:54.384316 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="f47d9990-a0be-4ec8-99f6-25cf34d1c347" containerName="container-00" Jan 05 21:50:54 crc kubenswrapper[4754]: I0105 21:50:54.384621 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="f47d9990-a0be-4ec8-99f6-25cf34d1c347" containerName="container-00" Jan 05 21:50:54 crc kubenswrapper[4754]: I0105 21:50:54.385740 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zklnc/crc-debug-t4tfb" Jan 05 21:50:54 crc kubenswrapper[4754]: I0105 21:50:54.450987 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx2nl\" (UniqueName: \"kubernetes.io/projected/acc85475-e970-4731-aa8f-4b9d735f06cc-kube-api-access-zx2nl\") pod \"crc-debug-t4tfb\" (UID: \"acc85475-e970-4731-aa8f-4b9d735f06cc\") " pod="openshift-must-gather-zklnc/crc-debug-t4tfb" Jan 05 21:50:54 crc kubenswrapper[4754]: I0105 21:50:54.451240 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/acc85475-e970-4731-aa8f-4b9d735f06cc-host\") pod \"crc-debug-t4tfb\" (UID: \"acc85475-e970-4731-aa8f-4b9d735f06cc\") " pod="openshift-must-gather-zklnc/crc-debug-t4tfb" Jan 05 21:50:54 crc kubenswrapper[4754]: I0105 21:50:54.553501 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/acc85475-e970-4731-aa8f-4b9d735f06cc-host\") pod \"crc-debug-t4tfb\" (UID: \"acc85475-e970-4731-aa8f-4b9d735f06cc\") " pod="openshift-must-gather-zklnc/crc-debug-t4tfb" Jan 05 21:50:54 crc kubenswrapper[4754]: I0105 21:50:54.553593 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zx2nl\" (UniqueName: \"kubernetes.io/projected/acc85475-e970-4731-aa8f-4b9d735f06cc-kube-api-access-zx2nl\") pod \"crc-debug-t4tfb\" (UID: \"acc85475-e970-4731-aa8f-4b9d735f06cc\") " pod="openshift-must-gather-zklnc/crc-debug-t4tfb" Jan 05 21:50:54 crc kubenswrapper[4754]: I0105 21:50:54.553645 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/acc85475-e970-4731-aa8f-4b9d735f06cc-host\") pod \"crc-debug-t4tfb\" (UID: \"acc85475-e970-4731-aa8f-4b9d735f06cc\") " pod="openshift-must-gather-zklnc/crc-debug-t4tfb" Jan 05 21:50:54 crc 
kubenswrapper[4754]: I0105 21:50:54.577966 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zx2nl\" (UniqueName: \"kubernetes.io/projected/acc85475-e970-4731-aa8f-4b9d735f06cc-kube-api-access-zx2nl\") pod \"crc-debug-t4tfb\" (UID: \"acc85475-e970-4731-aa8f-4b9d735f06cc\") " pod="openshift-must-gather-zklnc/crc-debug-t4tfb" Jan 05 21:50:54 crc kubenswrapper[4754]: I0105 21:50:54.709011 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zklnc/crc-debug-t4tfb" Jan 05 21:50:54 crc kubenswrapper[4754]: W0105 21:50:54.740261 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podacc85475_e970_4731_aa8f_4b9d735f06cc.slice/crio-f2dbce0df8f6b3b981bd3415168ad04642de9be69a4d172c5dc13d8f95e82e25 WatchSource:0}: Error finding container f2dbce0df8f6b3b981bd3415168ad04642de9be69a4d172c5dc13d8f95e82e25: Status 404 returned error can't find the container with id f2dbce0df8f6b3b981bd3415168ad04642de9be69a4d172c5dc13d8f95e82e25 Jan 05 21:50:54 crc kubenswrapper[4754]: I0105 21:50:54.868109 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zklnc/crc-debug-t4tfb" event={"ID":"acc85475-e970-4731-aa8f-4b9d735f06cc","Type":"ContainerStarted","Data":"f2dbce0df8f6b3b981bd3415168ad04642de9be69a4d172c5dc13d8f95e82e25"} Jan 05 21:50:55 crc kubenswrapper[4754]: I0105 21:50:55.880175 4754 generic.go:334] "Generic (PLEG): container finished" podID="acc85475-e970-4731-aa8f-4b9d735f06cc" containerID="0b5d083b3f533fae61afe55e3333b37f883b107274a8d6f7557e23350ab88bf0" exitCode=0 Jan 05 21:50:55 crc kubenswrapper[4754]: I0105 21:50:55.880244 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zklnc/crc-debug-t4tfb" event={"ID":"acc85475-e970-4731-aa8f-4b9d735f06cc","Type":"ContainerDied","Data":"0b5d083b3f533fae61afe55e3333b37f883b107274a8d6f7557e23350ab88bf0"} Jan 05 
21:50:55 crc kubenswrapper[4754]: I0105 21:50:55.927601 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-zklnc/crc-debug-t4tfb"] Jan 05 21:50:55 crc kubenswrapper[4754]: I0105 21:50:55.940611 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-zklnc/crc-debug-t4tfb"] Jan 05 21:50:57 crc kubenswrapper[4754]: I0105 21:50:57.053677 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zklnc/crc-debug-t4tfb" Jan 05 21:50:57 crc kubenswrapper[4754]: I0105 21:50:57.225338 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zx2nl\" (UniqueName: \"kubernetes.io/projected/acc85475-e970-4731-aa8f-4b9d735f06cc-kube-api-access-zx2nl\") pod \"acc85475-e970-4731-aa8f-4b9d735f06cc\" (UID: \"acc85475-e970-4731-aa8f-4b9d735f06cc\") " Jan 05 21:50:57 crc kubenswrapper[4754]: I0105 21:50:57.225811 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/acc85475-e970-4731-aa8f-4b9d735f06cc-host\") pod \"acc85475-e970-4731-aa8f-4b9d735f06cc\" (UID: \"acc85475-e970-4731-aa8f-4b9d735f06cc\") " Jan 05 21:50:57 crc kubenswrapper[4754]: I0105 21:50:57.225926 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acc85475-e970-4731-aa8f-4b9d735f06cc-host" (OuterVolumeSpecName: "host") pod "acc85475-e970-4731-aa8f-4b9d735f06cc" (UID: "acc85475-e970-4731-aa8f-4b9d735f06cc"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 21:50:57 crc kubenswrapper[4754]: I0105 21:50:57.226544 4754 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/acc85475-e970-4731-aa8f-4b9d735f06cc-host\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:57 crc kubenswrapper[4754]: I0105 21:50:57.230533 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acc85475-e970-4731-aa8f-4b9d735f06cc-kube-api-access-zx2nl" (OuterVolumeSpecName: "kube-api-access-zx2nl") pod "acc85475-e970-4731-aa8f-4b9d735f06cc" (UID: "acc85475-e970-4731-aa8f-4b9d735f06cc"). InnerVolumeSpecName "kube-api-access-zx2nl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:50:57 crc kubenswrapper[4754]: I0105 21:50:57.329047 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zx2nl\" (UniqueName: \"kubernetes.io/projected/acc85475-e970-4731-aa8f-4b9d735f06cc-kube-api-access-zx2nl\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:57 crc kubenswrapper[4754]: I0105 21:50:57.601459 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acc85475-e970-4731-aa8f-4b9d735f06cc" path="/var/lib/kubelet/pods/acc85475-e970-4731-aa8f-4b9d735f06cc/volumes" Jan 05 21:50:57 crc kubenswrapper[4754]: I0105 21:50:57.939813 4754 scope.go:117] "RemoveContainer" containerID="0b5d083b3f533fae61afe55e3333b37f883b107274a8d6f7557e23350ab88bf0" Jan 05 21:50:57 crc kubenswrapper[4754]: I0105 21:50:57.940103 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zklnc/crc-debug-t4tfb" Jan 05 21:51:00 crc kubenswrapper[4754]: I0105 21:51:00.588957 4754 scope.go:117] "RemoveContainer" containerID="169d4061b649bc99aa659a822f718adfbc994f64468c16f7e58148dd700ac0f6" Jan 05 21:51:00 crc kubenswrapper[4754]: E0105 21:51:00.590483 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:51:12 crc kubenswrapper[4754]: I0105 21:51:12.589694 4754 scope.go:117] "RemoveContainer" containerID="169d4061b649bc99aa659a822f718adfbc994f64468c16f7e58148dd700ac0f6" Jan 05 21:51:12 crc kubenswrapper[4754]: E0105 21:51:12.590486 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:51:22 crc kubenswrapper[4754]: I0105 21:51:22.999351 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_959170c4-f107-47da-baf8-7e3e49084424/aodh-api/0.log" Jan 05 21:51:23 crc kubenswrapper[4754]: I0105 21:51:23.156343 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_959170c4-f107-47da-baf8-7e3e49084424/aodh-evaluator/0.log" Jan 05 21:51:23 crc kubenswrapper[4754]: I0105 21:51:23.418112 4754 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_aodh-0_959170c4-f107-47da-baf8-7e3e49084424/aodh-notifier/0.log" Jan 05 21:51:23 crc kubenswrapper[4754]: I0105 21:51:23.457889 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_959170c4-f107-47da-baf8-7e3e49084424/aodh-listener/0.log" Jan 05 21:51:23 crc kubenswrapper[4754]: I0105 21:51:23.558926 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7cbd467bc4-dt25q_2516e3b9-fbb3-4341-ba30-837bc79225aa/barbican-api/0.log" Jan 05 21:51:23 crc kubenswrapper[4754]: I0105 21:51:23.669688 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7cbd467bc4-dt25q_2516e3b9-fbb3-4341-ba30-837bc79225aa/barbican-api-log/0.log" Jan 05 21:51:23 crc kubenswrapper[4754]: I0105 21:51:23.859086 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-9ddd6f8d6-rrvh5_39808c45-50a7-4712-92b4-8e962f2672d1/barbican-keystone-listener/0.log" Jan 05 21:51:23 crc kubenswrapper[4754]: I0105 21:51:23.901489 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-9ddd6f8d6-rrvh5_39808c45-50a7-4712-92b4-8e962f2672d1/barbican-keystone-listener-log/0.log" Jan 05 21:51:24 crc kubenswrapper[4754]: I0105 21:51:24.006077 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-665fb988d9-9wrmb_2f3094bb-a30e-433f-b167-9a753260191a/barbican-worker/0.log" Jan 05 21:51:24 crc kubenswrapper[4754]: I0105 21:51:24.078713 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-665fb988d9-9wrmb_2f3094bb-a30e-433f-b167-9a753260191a/barbican-worker-log/0.log" Jan 05 21:51:24 crc kubenswrapper[4754]: I0105 21:51:24.205676 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-4rrjw_8c0bfa36-b48b-4bba-b358-22c0a1001f5b/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" 
Jan 05 21:51:24 crc kubenswrapper[4754]: I0105 21:51:24.338640 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_68c442e4-0c24-4351-84b7-ccda8b09ea2c/ceilometer-central-agent/1.log" Jan 05 21:51:24 crc kubenswrapper[4754]: I0105 21:51:24.475520 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_68c442e4-0c24-4351-84b7-ccda8b09ea2c/ceilometer-notification-agent/0.log" Jan 05 21:51:24 crc kubenswrapper[4754]: I0105 21:51:24.477927 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_68c442e4-0c24-4351-84b7-ccda8b09ea2c/ceilometer-central-agent/0.log" Jan 05 21:51:24 crc kubenswrapper[4754]: I0105 21:51:24.539073 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_68c442e4-0c24-4351-84b7-ccda8b09ea2c/proxy-httpd/0.log" Jan 05 21:51:24 crc kubenswrapper[4754]: I0105 21:51:24.588019 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_68c442e4-0c24-4351-84b7-ccda8b09ea2c/sg-core/0.log" Jan 05 21:51:24 crc kubenswrapper[4754]: I0105 21:51:24.588187 4754 scope.go:117] "RemoveContainer" containerID="169d4061b649bc99aa659a822f718adfbc994f64468c16f7e58148dd700ac0f6" Jan 05 21:51:24 crc kubenswrapper[4754]: E0105 21:51:24.588576 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:51:24 crc kubenswrapper[4754]: I0105 21:51:24.733191 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_34d9b453-e1eb-49a6-883e-690d792a9922/cinder-api/0.log" Jan 05 21:51:24 crc kubenswrapper[4754]: I0105 
21:51:24.768055 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_34d9b453-e1eb-49a6-883e-690d792a9922/cinder-api-log/0.log" Jan 05 21:51:24 crc kubenswrapper[4754]: I0105 21:51:24.924965 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_2d71b2d3-db78-4e52-af70-e5108d39502b/cinder-scheduler/1.log" Jan 05 21:51:24 crc kubenswrapper[4754]: I0105 21:51:24.969672 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_2d71b2d3-db78-4e52-af70-e5108d39502b/cinder-scheduler/0.log" Jan 05 21:51:25 crc kubenswrapper[4754]: I0105 21:51:25.049667 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_2d71b2d3-db78-4e52-af70-e5108d39502b/probe/0.log" Jan 05 21:51:25 crc kubenswrapper[4754]: I0105 21:51:25.202957 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-ngbz6_3647d334-c168-474a-aabd-b8d4f6461466/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 05 21:51:25 crc kubenswrapper[4754]: I0105 21:51:25.283882 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-927hg_127f9233-b76e-47f9-bf65-be5a29ed4c79/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 05 21:51:25 crc kubenswrapper[4754]: I0105 21:51:25.420201 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5596c69fcc-q9vv9_76d33f28-9ba4-443d-b136-bd3458c56d95/init/0.log" Jan 05 21:51:25 crc kubenswrapper[4754]: I0105 21:51:25.663184 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-ld4k7_74141d35-f30c-469c-a926-7a5274ca536d/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 05 21:51:25 crc kubenswrapper[4754]: I0105 21:51:25.714438 4754 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-5596c69fcc-q9vv9_76d33f28-9ba4-443d-b136-bd3458c56d95/init/0.log" Jan 05 21:51:25 crc kubenswrapper[4754]: I0105 21:51:25.787170 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5596c69fcc-q9vv9_76d33f28-9ba4-443d-b136-bd3458c56d95/dnsmasq-dns/0.log" Jan 05 21:51:26 crc kubenswrapper[4754]: I0105 21:51:26.000124 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_ff2c492b-de99-47c6-8cad-42a1427039f1/glance-httpd/0.log" Jan 05 21:51:26 crc kubenswrapper[4754]: I0105 21:51:26.024444 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_ff2c492b-de99-47c6-8cad-42a1427039f1/glance-log/0.log" Jan 05 21:51:26 crc kubenswrapper[4754]: I0105 21:51:26.226465 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_651e1d87-a791-4aab-92b8-68aae7da2a91/glance-httpd/0.log" Jan 05 21:51:26 crc kubenswrapper[4754]: I0105 21:51:26.306571 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_651e1d87-a791-4aab-92b8-68aae7da2a91/glance-log/0.log" Jan 05 21:51:27 crc kubenswrapper[4754]: I0105 21:51:27.113717 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-7c9b6fddd9-fmxb7_9b7b8c52-175b-4a45-8c1a-4dc90f73be4e/heat-engine/0.log" Jan 05 21:51:27 crc kubenswrapper[4754]: I0105 21:51:27.181556 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-757fc694bb-c2pvk_2a77aee6-6114-4660-88f7-d4e86ea88421/heat-api/0.log" Jan 05 21:51:27 crc kubenswrapper[4754]: I0105 21:51:27.189627 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-stjtn_b2149289-5801-4a15-a8b8-4aedc5dd32ed/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 05 21:51:27 crc kubenswrapper[4754]: I0105 
21:51:27.307521 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-59c56bf-mxnq6_ab7a9fe0-e239-4353-8cc8-90199777fb33/heat-cfnapi/0.log" Jan 05 21:51:27 crc kubenswrapper[4754]: I0105 21:51:27.692353 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-zhgpk_97b90149-7943-405c-ae5c-039a890b61a7/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 05 21:51:27 crc kubenswrapper[4754]: I0105 21:51:27.786560 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29460781-ln2tm_7478b4e9-bbb7-4f8b-b0eb-6ebbb518ee69/keystone-cron/0.log" Jan 05 21:51:27 crc kubenswrapper[4754]: I0105 21:51:27.977071 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_ffdd4590-0498-4083-996d-75035d8fba10/kube-state-metrics/0.log" Jan 05 21:51:28 crc kubenswrapper[4754]: I0105 21:51:28.270031 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-j7ddw_a4f1c57e-be1c-4790-a806-2e23b4324280/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 05 21:51:28 crc kubenswrapper[4754]: I0105 21:51:28.345459 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-kpmfj_09367f79-31eb-4f9f-a630-fb1dbbfd4e39/logging-edpm-deployment-openstack-edpm-ipam/0.log" Jan 05 21:51:28 crc kubenswrapper[4754]: I0105 21:51:28.459341 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-67844b756c-gvtvn_b8d0d587-cbf6-4c67-9d08-297250f6c5e5/keystone-api/0.log" Jan 05 21:51:28 crc kubenswrapper[4754]: I0105 21:51:28.604603 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_8ca22987-f7b1-429f-bc45-9eac9f58a85d/mysqld-exporter/0.log" Jan 05 21:51:28 crc kubenswrapper[4754]: I0105 21:51:28.949768 4754 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-77dc856cfc-xjgrw_6903533f-d965-4b1f-81b1-630dda816dbc/neutron-httpd/0.log" Jan 05 21:51:29 crc kubenswrapper[4754]: I0105 21:51:29.008338 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvt4b_429c47f2-c321-450e-a0ce-2fc72a26f9e3/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 05 21:51:29 crc kubenswrapper[4754]: I0105 21:51:29.012881 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-77dc856cfc-xjgrw_6903533f-d965-4b1f-81b1-630dda816dbc/neutron-api/0.log" Jan 05 21:51:29 crc kubenswrapper[4754]: I0105 21:51:29.647382 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_0c8756e1-7250-4b46-9386-f4fb516eed62/nova-cell0-conductor-conductor/0.log" Jan 05 21:51:29 crc kubenswrapper[4754]: I0105 21:51:29.943213 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_82b1f54b-5485-44ad-9d23-ba1243ea1281/nova-api-log/0.log" Jan 05 21:51:29 crc kubenswrapper[4754]: I0105 21:51:29.958799 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_6f5143f1-98d6-499b-88e3-4d256176784f/nova-cell1-conductor-conductor/0.log" Jan 05 21:51:30 crc kubenswrapper[4754]: I0105 21:51:30.197369 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_a8a5dad7-9efb-44e3-b042-c0ba996ee955/nova-cell1-novncproxy-novncproxy/0.log" Jan 05 21:51:30 crc kubenswrapper[4754]: I0105 21:51:30.219219 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_82b1f54b-5485-44ad-9d23-ba1243ea1281/nova-api-api/0.log" Jan 05 21:51:30 crc kubenswrapper[4754]: I0105 21:51:30.285330 4754 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-cbc6l_ed63fc99-0c8f-47ce-8d5d-71b98edf3a25/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 05 21:51:30 crc kubenswrapper[4754]: I0105 21:51:30.554410 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_9ea36e99-8156-4df4-ac09-0e12ab02f0a4/nova-metadata-log/0.log" Jan 05 21:51:30 crc kubenswrapper[4754]: I0105 21:51:30.792039 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_81aaed05-ed65-4414-bf4f-7e5e4cf9966a/mysql-bootstrap/0.log" Jan 05 21:51:31 crc kubenswrapper[4754]: I0105 21:51:31.023541 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_b342b4dd-f705-4e66-a97c-231b627cb420/nova-scheduler-scheduler/0.log" Jan 05 21:51:31 crc kubenswrapper[4754]: I0105 21:51:31.025658 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_81aaed05-ed65-4414-bf4f-7e5e4cf9966a/mysql-bootstrap/0.log" Jan 05 21:51:31 crc kubenswrapper[4754]: I0105 21:51:31.054828 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_81aaed05-ed65-4414-bf4f-7e5e4cf9966a/galera/1.log" Jan 05 21:51:31 crc kubenswrapper[4754]: I0105 21:51:31.273954 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_81aaed05-ed65-4414-bf4f-7e5e4cf9966a/galera/0.log" Jan 05 21:51:31 crc kubenswrapper[4754]: I0105 21:51:31.672509 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9af784f4-79c9-4422-bc62-a2c49c9bb7cc/mysql-bootstrap/0.log" Jan 05 21:51:31 crc kubenswrapper[4754]: I0105 21:51:31.837518 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9af784f4-79c9-4422-bc62-a2c49c9bb7cc/mysql-bootstrap/0.log" Jan 05 21:51:31 crc kubenswrapper[4754]: I0105 21:51:31.901131 4754 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_openstack-galera-0_9af784f4-79c9-4422-bc62-a2c49c9bb7cc/galera/0.log" Jan 05 21:51:31 crc kubenswrapper[4754]: I0105 21:51:31.903111 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9af784f4-79c9-4422-bc62-a2c49c9bb7cc/galera/1.log" Jan 05 21:51:32 crc kubenswrapper[4754]: I0105 21:51:32.123951 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_91975736-d23e-479b-bd43-68b9b1b3e450/openstackclient/0.log" Jan 05 21:51:32 crc kubenswrapper[4754]: I0105 21:51:32.278413 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-dmwqp_d1de9457-1e7b-4c70-99f7-1214589d91d9/openstack-network-exporter/0.log" Jan 05 21:51:32 crc kubenswrapper[4754]: I0105 21:51:32.427617 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-psnm7_c33325d3-6a5d-4d13-b2c6-ae62a01904df/ovsdb-server-init/0.log" Jan 05 21:51:32 crc kubenswrapper[4754]: I0105 21:51:32.661823 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-psnm7_c33325d3-6a5d-4d13-b2c6-ae62a01904df/ovs-vswitchd/0.log" Jan 05 21:51:32 crc kubenswrapper[4754]: I0105 21:51:32.682002 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-psnm7_c33325d3-6a5d-4d13-b2c6-ae62a01904df/ovsdb-server-init/0.log" Jan 05 21:51:32 crc kubenswrapper[4754]: I0105 21:51:32.685406 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-psnm7_c33325d3-6a5d-4d13-b2c6-ae62a01904df/ovsdb-server/0.log" Jan 05 21:51:32 crc kubenswrapper[4754]: I0105 21:51:32.867911 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_9ea36e99-8156-4df4-ac09-0e12ab02f0a4/nova-metadata-metadata/0.log" Jan 05 21:51:32 crc kubenswrapper[4754]: I0105 21:51:32.925632 4754 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-pbz4n_5212dab3-1e5c-48b6-a710-f3551ab2ceaf/ovn-controller/0.log" Jan 05 21:51:33 crc kubenswrapper[4754]: I0105 21:51:33.144046 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-cpvn7_aec60a10-f518-45a1-8b13-375943e8ff65/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 05 21:51:33 crc kubenswrapper[4754]: I0105 21:51:33.189495 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_bd134f60-e97c-487b-be9e-c356c7478c21/ovn-northd/1.log" Jan 05 21:51:33 crc kubenswrapper[4754]: I0105 21:51:33.204030 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_bd134f60-e97c-487b-be9e-c356c7478c21/openstack-network-exporter/0.log" Jan 05 21:51:33 crc kubenswrapper[4754]: I0105 21:51:33.377641 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_bd134f60-e97c-487b-be9e-c356c7478c21/ovn-northd/0.log" Jan 05 21:51:33 crc kubenswrapper[4754]: I0105 21:51:33.441589 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_43b2550d-6f62-42ff-8b14-20d95a9a4652/ovsdbserver-nb/0.log" Jan 05 21:51:33 crc kubenswrapper[4754]: I0105 21:51:33.463728 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_43b2550d-6f62-42ff-8b14-20d95a9a4652/openstack-network-exporter/0.log" Jan 05 21:51:33 crc kubenswrapper[4754]: I0105 21:51:33.666227 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f4f6ded3-ca17-4343-a4ee-15df3c64d1c0/openstack-network-exporter/0.log" Jan 05 21:51:33 crc kubenswrapper[4754]: I0105 21:51:33.728385 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f4f6ded3-ca17-4343-a4ee-15df3c64d1c0/ovsdbserver-sb/0.log" Jan 05 21:51:34 crc kubenswrapper[4754]: I0105 21:51:34.003929 4754 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_prometheus-metric-storage-0_652366fc-9032-455e-9e13-b71fd3ff76e3/init-config-reloader/0.log" Jan 05 21:51:34 crc kubenswrapper[4754]: I0105 21:51:34.064025 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7ffdc87456-jd4rc_19c37996-a1c0-45b2-9ca0-0af72f831909/placement-log/0.log" Jan 05 21:51:34 crc kubenswrapper[4754]: I0105 21:51:34.137430 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7ffdc87456-jd4rc_19c37996-a1c0-45b2-9ca0-0af72f831909/placement-api/0.log" Jan 05 21:51:34 crc kubenswrapper[4754]: I0105 21:51:34.164477 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_652366fc-9032-455e-9e13-b71fd3ff76e3/init-config-reloader/0.log" Jan 05 21:51:34 crc kubenswrapper[4754]: I0105 21:51:34.224671 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_652366fc-9032-455e-9e13-b71fd3ff76e3/config-reloader/0.log" Jan 05 21:51:34 crc kubenswrapper[4754]: I0105 21:51:34.305465 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_652366fc-9032-455e-9e13-b71fd3ff76e3/prometheus/0.log" Jan 05 21:51:34 crc kubenswrapper[4754]: I0105 21:51:34.341267 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_652366fc-9032-455e-9e13-b71fd3ff76e3/thanos-sidecar/0.log" Jan 05 21:51:34 crc kubenswrapper[4754]: I0105 21:51:34.473697 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_ad41aacf-9d0a-42e2-b3cf-de51001540e2/setup-container/0.log" Jan 05 21:51:34 crc kubenswrapper[4754]: I0105 21:51:34.642531 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_ad41aacf-9d0a-42e2-b3cf-de51001540e2/setup-container/0.log" Jan 05 21:51:34 crc kubenswrapper[4754]: I0105 21:51:34.660321 4754 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_ad41aacf-9d0a-42e2-b3cf-de51001540e2/rabbitmq/0.log" Jan 05 21:51:34 crc kubenswrapper[4754]: I0105 21:51:34.692336 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_1d607fcf-186e-407f-b95a-5a6b9ecad255/setup-container/0.log" Jan 05 21:51:34 crc kubenswrapper[4754]: I0105 21:51:34.918377 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_1d607fcf-186e-407f-b95a-5a6b9ecad255/setup-container/0.log" Jan 05 21:51:35 crc kubenswrapper[4754]: I0105 21:51:35.034066 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_1d607fcf-186e-407f-b95a-5a6b9ecad255/rabbitmq/0.log" Jan 05 21:51:35 crc kubenswrapper[4754]: I0105 21:51:35.074615 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_4fb3d0a2-68b4-4224-8eeb-c9113f079684/setup-container/0.log" Jan 05 21:51:35 crc kubenswrapper[4754]: I0105 21:51:35.312608 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_4fb3d0a2-68b4-4224-8eeb-c9113f079684/setup-container/0.log" Jan 05 21:51:35 crc kubenswrapper[4754]: I0105 21:51:35.357401 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_cca29317-4bf0-40f2-aeff-1cbb68fb9cd2/setup-container/0.log" Jan 05 21:51:35 crc kubenswrapper[4754]: I0105 21:51:35.381179 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_4fb3d0a2-68b4-4224-8eeb-c9113f079684/rabbitmq/0.log" Jan 05 21:51:35 crc kubenswrapper[4754]: I0105 21:51:35.551519 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_cca29317-4bf0-40f2-aeff-1cbb68fb9cd2/setup-container/0.log" Jan 05 21:51:35 crc kubenswrapper[4754]: I0105 21:51:35.615341 4754 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-zwtrj_fe969a28-4f43-4936-9859-b73513ec8a50/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 05 21:51:35 crc kubenswrapper[4754]: I0105 21:51:35.636694 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_cca29317-4bf0-40f2-aeff-1cbb68fb9cd2/rabbitmq/0.log" Jan 05 21:51:36 crc kubenswrapper[4754]: I0105 21:51:36.038271 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-xq9xn_f49e079e-8128-4d5c-843d-54c6d12df620/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 05 21:51:36 crc kubenswrapper[4754]: I0105 21:51:36.129703 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-fmncl_fffef548-67ef-4afd-a644-9aaad154e735/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 05 21:51:36 crc kubenswrapper[4754]: I0105 21:51:36.310848 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-k6k5r_f16a69e1-0859-46f4-97f9-a825b629df09/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 05 21:51:36 crc kubenswrapper[4754]: I0105 21:51:36.394382 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-vpqzr_1b846782-fabe-45fc-94d9-f2d0f5a2c008/ssh-known-hosts-edpm-deployment/0.log" Jan 05 21:51:36 crc kubenswrapper[4754]: I0105 21:51:36.680608 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6ff666bbf9-t252x_2ce48436-7086-4501-9b9d-952b965fb028/proxy-server/0.log" Jan 05 21:51:36 crc kubenswrapper[4754]: I0105 21:51:36.774105 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-l252x_2dedf888-afb9-42e6-80fa-3135a67787db/swift-ring-rebalance/0.log" Jan 05 21:51:36 crc kubenswrapper[4754]: I0105 21:51:36.873979 4754 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_swift-storage-0_64fe8f1d-0c69-4fc8-aac8-c17660c2fed5/account-auditor/0.log" Jan 05 21:51:36 crc kubenswrapper[4754]: I0105 21:51:36.881130 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6ff666bbf9-t252x_2ce48436-7086-4501-9b9d-952b965fb028/proxy-httpd/0.log" Jan 05 21:51:37 crc kubenswrapper[4754]: I0105 21:51:37.032684 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_64fe8f1d-0c69-4fc8-aac8-c17660c2fed5/account-reaper/0.log" Jan 05 21:51:37 crc kubenswrapper[4754]: I0105 21:51:37.104413 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_64fe8f1d-0c69-4fc8-aac8-c17660c2fed5/account-server/0.log" Jan 05 21:51:37 crc kubenswrapper[4754]: I0105 21:51:37.176742 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_64fe8f1d-0c69-4fc8-aac8-c17660c2fed5/account-replicator/0.log" Jan 05 21:51:37 crc kubenswrapper[4754]: I0105 21:51:37.202393 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_64fe8f1d-0c69-4fc8-aac8-c17660c2fed5/container-auditor/0.log" Jan 05 21:51:37 crc kubenswrapper[4754]: I0105 21:51:37.321144 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_64fe8f1d-0c69-4fc8-aac8-c17660c2fed5/container-replicator/0.log" Jan 05 21:51:37 crc kubenswrapper[4754]: I0105 21:51:37.345751 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_64fe8f1d-0c69-4fc8-aac8-c17660c2fed5/container-server/0.log" Jan 05 21:51:37 crc kubenswrapper[4754]: I0105 21:51:37.444075 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_64fe8f1d-0c69-4fc8-aac8-c17660c2fed5/object-auditor/0.log" Jan 05 21:51:37 crc kubenswrapper[4754]: I0105 21:51:37.444344 4754 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_64fe8f1d-0c69-4fc8-aac8-c17660c2fed5/container-updater/0.log" Jan 05 21:51:37 crc kubenswrapper[4754]: I0105 21:51:37.566708 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_64fe8f1d-0c69-4fc8-aac8-c17660c2fed5/object-expirer/0.log" Jan 05 21:51:37 crc kubenswrapper[4754]: I0105 21:51:37.621564 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_64fe8f1d-0c69-4fc8-aac8-c17660c2fed5/object-replicator/0.log" Jan 05 21:51:37 crc kubenswrapper[4754]: I0105 21:51:37.645753 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_d0a4f96f-678d-42ec-8302-6a27a4477941/memcached/0.log" Jan 05 21:51:37 crc kubenswrapper[4754]: I0105 21:51:37.673072 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_64fe8f1d-0c69-4fc8-aac8-c17660c2fed5/object-server/0.log" Jan 05 21:51:37 crc kubenswrapper[4754]: I0105 21:51:37.684489 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_64fe8f1d-0c69-4fc8-aac8-c17660c2fed5/object-updater/0.log" Jan 05 21:51:37 crc kubenswrapper[4754]: I0105 21:51:37.751072 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_64fe8f1d-0c69-4fc8-aac8-c17660c2fed5/rsync/0.log" Jan 05 21:51:37 crc kubenswrapper[4754]: I0105 21:51:37.800652 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_64fe8f1d-0c69-4fc8-aac8-c17660c2fed5/swift-recon-cron/0.log" Jan 05 21:51:37 crc kubenswrapper[4754]: I0105 21:51:37.886763 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-qtv47_a91d7133-4aa5-4931-9311-d3d7ccdd3322/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 05 21:51:38 crc kubenswrapper[4754]: I0105 21:51:38.008775 4754 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-bnq8j_e76d36bb-8a03-46b6-82b1-1bcbdcbbda25/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log" Jan 05 21:51:38 crc kubenswrapper[4754]: I0105 21:51:38.202121 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_ce1a328c-cae8-402d-87d5-266db4ce9869/test-operator-logs-container/0.log" Jan 05 21:51:38 crc kubenswrapper[4754]: I0105 21:51:38.286569 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-5qmpw_c70de0fb-71a9-4970-a485-a3a1f3c18868/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 05 21:51:38 crc kubenswrapper[4754]: I0105 21:51:38.884315 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_bc0b8e6e-ce93-4afc-96ec-500a8dbce0ee/tempest-tests-tempest-tests-runner/0.log" Jan 05 21:51:39 crc kubenswrapper[4754]: I0105 21:51:39.589168 4754 scope.go:117] "RemoveContainer" containerID="169d4061b649bc99aa659a822f718adfbc994f64468c16f7e58148dd700ac0f6" Jan 05 21:51:39 crc kubenswrapper[4754]: E0105 21:51:39.589690 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:51:52 crc kubenswrapper[4754]: I0105 21:51:52.589229 4754 scope.go:117] "RemoveContainer" containerID="169d4061b649bc99aa659a822f718adfbc994f64468c16f7e58148dd700ac0f6" Jan 05 21:51:52 crc kubenswrapper[4754]: E0105 21:51:52.590075 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:52:02 crc kubenswrapper[4754]: I0105 21:52:02.619420 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-f6f74d6db-8t5gl_83a7e6b7-db24-4f2f-988d-ed13a27a06af/manager/0.log" Jan 05 21:52:02 crc kubenswrapper[4754]: I0105 21:52:02.760044 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cc3da999499718a40e2a8716609c4cea3f8ba07e0fdecd043d9caa917cffxs7_a3717de5-c997-42e5-85ad-ed9de666f879/util/0.log" Jan 05 21:52:02 crc kubenswrapper[4754]: I0105 21:52:02.987470 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cc3da999499718a40e2a8716609c4cea3f8ba07e0fdecd043d9caa917cffxs7_a3717de5-c997-42e5-85ad-ed9de666f879/util/0.log" Jan 05 21:52:02 crc kubenswrapper[4754]: I0105 21:52:02.991424 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cc3da999499718a40e2a8716609c4cea3f8ba07e0fdecd043d9caa917cffxs7_a3717de5-c997-42e5-85ad-ed9de666f879/pull/0.log" Jan 05 21:52:03 crc kubenswrapper[4754]: I0105 21:52:03.019193 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cc3da999499718a40e2a8716609c4cea3f8ba07e0fdecd043d9caa917cffxs7_a3717de5-c997-42e5-85ad-ed9de666f879/pull/0.log" Jan 05 21:52:03 crc kubenswrapper[4754]: I0105 21:52:03.157173 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cc3da999499718a40e2a8716609c4cea3f8ba07e0fdecd043d9caa917cffxs7_a3717de5-c997-42e5-85ad-ed9de666f879/pull/0.log" Jan 05 21:52:03 crc kubenswrapper[4754]: I0105 21:52:03.189401 4754 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_cc3da999499718a40e2a8716609c4cea3f8ba07e0fdecd043d9caa917cffxs7_a3717de5-c997-42e5-85ad-ed9de666f879/util/0.log" Jan 05 21:52:03 crc kubenswrapper[4754]: I0105 21:52:03.234897 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cc3da999499718a40e2a8716609c4cea3f8ba07e0fdecd043d9caa917cffxs7_a3717de5-c997-42e5-85ad-ed9de666f879/extract/0.log" Jan 05 21:52:03 crc kubenswrapper[4754]: I0105 21:52:03.393651 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-78979fc445-h2wmj_2aeeabff-cc4c-49b1-a895-c21ae9d43e3d/manager/0.log" Jan 05 21:52:03 crc kubenswrapper[4754]: I0105 21:52:03.479161 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-66f8b87655-dqks7_f289c3c4-ad02-4022-ac22-239133f6c1ca/manager/0.log" Jan 05 21:52:03 crc kubenswrapper[4754]: I0105 21:52:03.590669 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7b549fc966-lrhk6_92928a21-7bbe-44b9-9d2b-2fcce8d0dd1f/manager/1.log" Jan 05 21:52:03 crc kubenswrapper[4754]: I0105 21:52:03.793106 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7b549fc966-lrhk6_92928a21-7bbe-44b9-9d2b-2fcce8d0dd1f/manager/0.log" Jan 05 21:52:03 crc kubenswrapper[4754]: I0105 21:52:03.825517 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-658dd65b86-7b945_f1a3a024-3293-4e7b-b1cd-c93c914c190e/manager/0.log" Jan 05 21:52:03 crc kubenswrapper[4754]: I0105 21:52:03.948530 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-7f5ddd8d7b-6lf69_1f664632-a6e1-491d-b0cf-be1717a6d28b/manager/0.log" Jan 05 21:52:04 crc kubenswrapper[4754]: I0105 21:52:04.017421 4754 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6d99759cf-2g75d_736d23ce-6bc0-439b-b1ff-86aad6363c2a/manager/1.log" Jan 05 21:52:04 crc kubenswrapper[4754]: I0105 21:52:04.217020 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-f99f54bc8-brnwf_dd93e799-6591-41d5-988a-18cc6d8c836d/manager/1.log" Jan 05 21:52:04 crc kubenswrapper[4754]: I0105 21:52:04.249805 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-f99f54bc8-brnwf_dd93e799-6591-41d5-988a-18cc6d8c836d/manager/0.log" Jan 05 21:52:04 crc kubenswrapper[4754]: I0105 21:52:04.376656 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6d99759cf-2g75d_736d23ce-6bc0-439b-b1ff-86aad6363c2a/manager/0.log" Jan 05 21:52:04 crc kubenswrapper[4754]: I0105 21:52:04.485880 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-568985c78-st6w9_91877573-8199-4055-988f-96bd6469af4f/manager/0.log" Jan 05 21:52:04 crc kubenswrapper[4754]: I0105 21:52:04.558751 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-598945d5b8-hdh6j_83cc207a-0725-4775-b2f7-93c71985ba1e/manager/0.log" Jan 05 21:52:04 crc kubenswrapper[4754]: I0105 21:52:04.604575 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-598945d5b8-hdh6j_83cc207a-0725-4775-b2f7-93c71985ba1e/manager/1.log" Jan 05 21:52:04 crc kubenswrapper[4754]: I0105 21:52:04.674771 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-7b88bfc995-gj5xk_8df02427-4d10-41bb-9798-82cf7b8bca3e/manager/1.log" Jan 05 21:52:04 crc kubenswrapper[4754]: I0105 21:52:04.761247 
4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-7b88bfc995-gj5xk_8df02427-4d10-41bb-9798-82cf7b8bca3e/manager/0.log" Jan 05 21:52:04 crc kubenswrapper[4754]: I0105 21:52:04.929705 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7cd87b778f-swt4w_0cd346d8-d14a-404e-b2fa-16fc917e6886/manager/0.log" Jan 05 21:52:04 crc kubenswrapper[4754]: I0105 21:52:04.979032 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5fbbf8b6cc-h56wh_6d71c5c9-f75a-475f-880c-d234d43ad7d9/manager/1.log" Jan 05 21:52:05 crc kubenswrapper[4754]: I0105 21:52:05.083751 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5fbbf8b6cc-h56wh_6d71c5c9-f75a-475f-880c-d234d43ad7d9/manager/0.log" Jan 05 21:52:05 crc kubenswrapper[4754]: I0105 21:52:05.142548 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-68c649d9d-69j2s_983e4f4a-fe90-4460-ad97-b6955a888933/manager/1.log" Jan 05 21:52:05 crc kubenswrapper[4754]: I0105 21:52:05.201753 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-68c649d9d-69j2s_983e4f4a-fe90-4460-ad97-b6955a888933/manager/0.log" Jan 05 21:52:05 crc kubenswrapper[4754]: I0105 21:52:05.259156 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-78948ddfd72b9pp_fed06176-d7ad-4373-84df-204b6fdbf5cf/manager/1.log" Jan 05 21:52:05 crc kubenswrapper[4754]: I0105 21:52:05.325985 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-78948ddfd72b9pp_fed06176-d7ad-4373-84df-204b6fdbf5cf/manager/0.log" Jan 05 21:52:05 crc 
kubenswrapper[4754]: I0105 21:52:05.540274 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-74f9c55c9c-f9rnv_a078215d-9fb5-413f-b542-ca5b3c6fb296/manager/1.log" Jan 05 21:52:05 crc kubenswrapper[4754]: I0105 21:52:05.748384 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-7bb596d4b9-gbtbf_6836e11d-3e01-4752-ba84-0ba74829283f/operator/1.log" Jan 05 21:52:05 crc kubenswrapper[4754]: I0105 21:52:05.931385 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-7bb596d4b9-gbtbf_6836e11d-3e01-4752-ba84-0ba74829283f/operator/0.log" Jan 05 21:52:05 crc kubenswrapper[4754]: I0105 21:52:05.971756 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-zv7sc_3f2e87f2-d218-4699-81bb-6156676884d3/registry-server/0.log" Jan 05 21:52:06 crc kubenswrapper[4754]: I0105 21:52:06.311184 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-9b6f8f78c-jcntk_82f028d6-51a7-461a-ae7d-cd2da5f47afb/manager/1.log" Jan 05 21:52:06 crc kubenswrapper[4754]: I0105 21:52:06.318921 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-bf6d4f946-jngdn_db5f9ab8-2422-439c-a857-23f918cfa919/manager/0.log" Jan 05 21:52:06 crc kubenswrapper[4754]: I0105 21:52:06.533429 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-vfsjd_29d3a96c-7dee-4a63-945c-3fef7cdcc7e7/operator/1.log" Jan 05 21:52:06 crc kubenswrapper[4754]: I0105 21:52:06.542102 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-9b6f8f78c-jcntk_82f028d6-51a7-461a-ae7d-cd2da5f47afb/manager/0.log" Jan 05 
21:52:06 crc kubenswrapper[4754]: I0105 21:52:06.623714 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-vfsjd_29d3a96c-7dee-4a63-945c-3fef7cdcc7e7/operator/0.log" Jan 05 21:52:06 crc kubenswrapper[4754]: I0105 21:52:06.804170 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-bb586bbf4-hmwzd_a6907240-bf3c-4f9a-b7c8-4fbdf3174c8a/manager/0.log" Jan 05 21:52:07 crc kubenswrapper[4754]: I0105 21:52:07.056166 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-74f9c55c9c-f9rnv_a078215d-9fb5-413f-b542-ca5b3c6fb296/manager/0.log" Jan 05 21:52:07 crc kubenswrapper[4754]: I0105 21:52:07.101081 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-6c866cfdcb-944qj_4d09717a-7822-46ae-8192-62aa7305304b/manager/1.log" Jan 05 21:52:07 crc kubenswrapper[4754]: I0105 21:52:07.149039 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-6c866cfdcb-944qj_4d09717a-7822-46ae-8192-62aa7305304b/manager/0.log" Jan 05 21:52:07 crc kubenswrapper[4754]: I0105 21:52:07.223966 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-688488f44f-62gsk_4b33baa5-64bb-4df7-ac22-925d718f9d60/manager/0.log" Jan 05 21:52:07 crc kubenswrapper[4754]: I0105 21:52:07.285806 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-9dbdf6486-s4n44_77f4456d-e6a6-466a-a74c-5276e4951784/manager/1.log" Jan 05 21:52:07 crc kubenswrapper[4754]: I0105 21:52:07.310690 4754 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-9dbdf6486-s4n44_77f4456d-e6a6-466a-a74c-5276e4951784/manager/0.log" Jan 05 21:52:07 crc kubenswrapper[4754]: I0105 21:52:07.588456 4754 scope.go:117] "RemoveContainer" containerID="169d4061b649bc99aa659a822f718adfbc994f64468c16f7e58148dd700ac0f6" Jan 05 21:52:07 crc kubenswrapper[4754]: E0105 21:52:07.588992 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:52:18 crc kubenswrapper[4754]: I0105 21:52:18.003769 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vtbs8"] Jan 05 21:52:18 crc kubenswrapper[4754]: E0105 21:52:18.005037 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acc85475-e970-4731-aa8f-4b9d735f06cc" containerName="container-00" Jan 05 21:52:18 crc kubenswrapper[4754]: I0105 21:52:18.005178 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="acc85475-e970-4731-aa8f-4b9d735f06cc" containerName="container-00" Jan 05 21:52:18 crc kubenswrapper[4754]: I0105 21:52:18.005815 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="acc85475-e970-4731-aa8f-4b9d735f06cc" containerName="container-00" Jan 05 21:52:18 crc kubenswrapper[4754]: I0105 21:52:18.009282 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vtbs8" Jan 05 21:52:18 crc kubenswrapper[4754]: I0105 21:52:18.016335 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vtbs8"] Jan 05 21:52:18 crc kubenswrapper[4754]: I0105 21:52:18.047990 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a0018fe-7cb6-4c1e-a6f4-079c2b81b066-utilities\") pod \"certified-operators-vtbs8\" (UID: \"8a0018fe-7cb6-4c1e-a6f4-079c2b81b066\") " pod="openshift-marketplace/certified-operators-vtbs8" Jan 05 21:52:18 crc kubenswrapper[4754]: I0105 21:52:18.048054 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9z68\" (UniqueName: \"kubernetes.io/projected/8a0018fe-7cb6-4c1e-a6f4-079c2b81b066-kube-api-access-r9z68\") pod \"certified-operators-vtbs8\" (UID: \"8a0018fe-7cb6-4c1e-a6f4-079c2b81b066\") " pod="openshift-marketplace/certified-operators-vtbs8" Jan 05 21:52:18 crc kubenswrapper[4754]: I0105 21:52:18.048125 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a0018fe-7cb6-4c1e-a6f4-079c2b81b066-catalog-content\") pod \"certified-operators-vtbs8\" (UID: \"8a0018fe-7cb6-4c1e-a6f4-079c2b81b066\") " pod="openshift-marketplace/certified-operators-vtbs8" Jan 05 21:52:18 crc kubenswrapper[4754]: I0105 21:52:18.150950 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a0018fe-7cb6-4c1e-a6f4-079c2b81b066-utilities\") pod \"certified-operators-vtbs8\" (UID: \"8a0018fe-7cb6-4c1e-a6f4-079c2b81b066\") " pod="openshift-marketplace/certified-operators-vtbs8" Jan 05 21:52:18 crc kubenswrapper[4754]: I0105 21:52:18.151013 4754 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-r9z68\" (UniqueName: \"kubernetes.io/projected/8a0018fe-7cb6-4c1e-a6f4-079c2b81b066-kube-api-access-r9z68\") pod \"certified-operators-vtbs8\" (UID: \"8a0018fe-7cb6-4c1e-a6f4-079c2b81b066\") " pod="openshift-marketplace/certified-operators-vtbs8" Jan 05 21:52:18 crc kubenswrapper[4754]: I0105 21:52:18.151064 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a0018fe-7cb6-4c1e-a6f4-079c2b81b066-catalog-content\") pod \"certified-operators-vtbs8\" (UID: \"8a0018fe-7cb6-4c1e-a6f4-079c2b81b066\") " pod="openshift-marketplace/certified-operators-vtbs8" Jan 05 21:52:18 crc kubenswrapper[4754]: I0105 21:52:18.151736 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a0018fe-7cb6-4c1e-a6f4-079c2b81b066-utilities\") pod \"certified-operators-vtbs8\" (UID: \"8a0018fe-7cb6-4c1e-a6f4-079c2b81b066\") " pod="openshift-marketplace/certified-operators-vtbs8" Jan 05 21:52:18 crc kubenswrapper[4754]: I0105 21:52:18.151985 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a0018fe-7cb6-4c1e-a6f4-079c2b81b066-catalog-content\") pod \"certified-operators-vtbs8\" (UID: \"8a0018fe-7cb6-4c1e-a6f4-079c2b81b066\") " pod="openshift-marketplace/certified-operators-vtbs8" Jan 05 21:52:18 crc kubenswrapper[4754]: I0105 21:52:18.177241 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9z68\" (UniqueName: \"kubernetes.io/projected/8a0018fe-7cb6-4c1e-a6f4-079c2b81b066-kube-api-access-r9z68\") pod \"certified-operators-vtbs8\" (UID: \"8a0018fe-7cb6-4c1e-a6f4-079c2b81b066\") " pod="openshift-marketplace/certified-operators-vtbs8" Jan 05 21:52:18 crc kubenswrapper[4754]: I0105 21:52:18.332014 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vtbs8" Jan 05 21:52:18 crc kubenswrapper[4754]: I0105 21:52:18.862710 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vtbs8"] Jan 05 21:52:18 crc kubenswrapper[4754]: I0105 21:52:18.893105 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtbs8" event={"ID":"8a0018fe-7cb6-4c1e-a6f4-079c2b81b066","Type":"ContainerStarted","Data":"32fcc1982a05c50e48abcde790eac618722f479c2362b7261e1d4259de9d4dce"} Jan 05 21:52:19 crc kubenswrapper[4754]: I0105 21:52:19.907028 4754 generic.go:334] "Generic (PLEG): container finished" podID="8a0018fe-7cb6-4c1e-a6f4-079c2b81b066" containerID="bf9ea8727c46d9b53d94388881157d24668fde23332772eaecce8f2326809426" exitCode=0 Jan 05 21:52:19 crc kubenswrapper[4754]: I0105 21:52:19.907128 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtbs8" event={"ID":"8a0018fe-7cb6-4c1e-a6f4-079c2b81b066","Type":"ContainerDied","Data":"bf9ea8727c46d9b53d94388881157d24668fde23332772eaecce8f2326809426"} Jan 05 21:52:21 crc kubenswrapper[4754]: I0105 21:52:21.934955 4754 generic.go:334] "Generic (PLEG): container finished" podID="8a0018fe-7cb6-4c1e-a6f4-079c2b81b066" containerID="49c442a6fee3d7ce484126157d9a1515f73bd8c468e5d92b9fbc0f2a0ff109d2" exitCode=0 Jan 05 21:52:21 crc kubenswrapper[4754]: I0105 21:52:21.935080 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtbs8" event={"ID":"8a0018fe-7cb6-4c1e-a6f4-079c2b81b066","Type":"ContainerDied","Data":"49c442a6fee3d7ce484126157d9a1515f73bd8c468e5d92b9fbc0f2a0ff109d2"} Jan 05 21:52:22 crc kubenswrapper[4754]: I0105 21:52:22.589110 4754 scope.go:117] "RemoveContainer" containerID="169d4061b649bc99aa659a822f718adfbc994f64468c16f7e58148dd700ac0f6" Jan 05 21:52:22 crc kubenswrapper[4754]: E0105 21:52:22.589764 4754 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:52:22 crc kubenswrapper[4754]: I0105 21:52:22.950656 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtbs8" event={"ID":"8a0018fe-7cb6-4c1e-a6f4-079c2b81b066","Type":"ContainerStarted","Data":"a961f8d7aa30644fe83ecd52fd7ec5e6dad7c164e4ee0f1f6a737788fa952391"} Jan 05 21:52:22 crc kubenswrapper[4754]: I0105 21:52:22.975008 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vtbs8" podStartSLOduration=3.289098482 podStartE2EDuration="5.974982759s" podCreationTimestamp="2026-01-05 21:52:17 +0000 UTC" firstStartedPulling="2026-01-05 21:52:19.910136127 +0000 UTC m=+6426.619320001" lastFinishedPulling="2026-01-05 21:52:22.596020404 +0000 UTC m=+6429.305204278" observedRunningTime="2026-01-05 21:52:22.97172167 +0000 UTC m=+6429.680905554" watchObservedRunningTime="2026-01-05 21:52:22.974982759 +0000 UTC m=+6429.684166673" Jan 05 21:52:26 crc kubenswrapper[4754]: I0105 21:52:26.288412 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-v7gsb_3ff8a3aa-2a57-46f2-b0a9-bdc7d494e6e2/control-plane-machine-set-operator/0.log" Jan 05 21:52:26 crc kubenswrapper[4754]: I0105 21:52:26.827556 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-xnn4d_e06065f7-b8b3-4c3e-820c-f0051f3a6f6d/machine-api-operator/0.log" Jan 05 21:52:26 crc kubenswrapper[4754]: I0105 21:52:26.841062 4754 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-xnn4d_e06065f7-b8b3-4c3e-820c-f0051f3a6f6d/kube-rbac-proxy/0.log" Jan 05 21:52:28 crc kubenswrapper[4754]: I0105 21:52:28.332749 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vtbs8" Jan 05 21:52:28 crc kubenswrapper[4754]: I0105 21:52:28.333428 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vtbs8" Jan 05 21:52:28 crc kubenswrapper[4754]: I0105 21:52:28.391046 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vtbs8" Jan 05 21:52:29 crc kubenswrapper[4754]: I0105 21:52:29.089910 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vtbs8" Jan 05 21:52:29 crc kubenswrapper[4754]: I0105 21:52:29.143059 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vtbs8"] Jan 05 21:52:31 crc kubenswrapper[4754]: I0105 21:52:31.052152 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vtbs8" podUID="8a0018fe-7cb6-4c1e-a6f4-079c2b81b066" containerName="registry-server" containerID="cri-o://a961f8d7aa30644fe83ecd52fd7ec5e6dad7c164e4ee0f1f6a737788fa952391" gracePeriod=2 Jan 05 21:52:31 crc kubenswrapper[4754]: I0105 21:52:31.686932 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vtbs8" Jan 05 21:52:31 crc kubenswrapper[4754]: I0105 21:52:31.795285 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a0018fe-7cb6-4c1e-a6f4-079c2b81b066-catalog-content\") pod \"8a0018fe-7cb6-4c1e-a6f4-079c2b81b066\" (UID: \"8a0018fe-7cb6-4c1e-a6f4-079c2b81b066\") " Jan 05 21:52:31 crc kubenswrapper[4754]: I0105 21:52:31.795421 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a0018fe-7cb6-4c1e-a6f4-079c2b81b066-utilities\") pod \"8a0018fe-7cb6-4c1e-a6f4-079c2b81b066\" (UID: \"8a0018fe-7cb6-4c1e-a6f4-079c2b81b066\") " Jan 05 21:52:31 crc kubenswrapper[4754]: I0105 21:52:31.795470 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9z68\" (UniqueName: \"kubernetes.io/projected/8a0018fe-7cb6-4c1e-a6f4-079c2b81b066-kube-api-access-r9z68\") pod \"8a0018fe-7cb6-4c1e-a6f4-079c2b81b066\" (UID: \"8a0018fe-7cb6-4c1e-a6f4-079c2b81b066\") " Jan 05 21:52:31 crc kubenswrapper[4754]: I0105 21:52:31.796421 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a0018fe-7cb6-4c1e-a6f4-079c2b81b066-utilities" (OuterVolumeSpecName: "utilities") pod "8a0018fe-7cb6-4c1e-a6f4-079c2b81b066" (UID: "8a0018fe-7cb6-4c1e-a6f4-079c2b81b066"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:52:31 crc kubenswrapper[4754]: I0105 21:52:31.802798 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a0018fe-7cb6-4c1e-a6f4-079c2b81b066-kube-api-access-r9z68" (OuterVolumeSpecName: "kube-api-access-r9z68") pod "8a0018fe-7cb6-4c1e-a6f4-079c2b81b066" (UID: "8a0018fe-7cb6-4c1e-a6f4-079c2b81b066"). InnerVolumeSpecName "kube-api-access-r9z68". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:31 crc kubenswrapper[4754]: I0105 21:52:31.840957 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a0018fe-7cb6-4c1e-a6f4-079c2b81b066-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8a0018fe-7cb6-4c1e-a6f4-079c2b81b066" (UID: "8a0018fe-7cb6-4c1e-a6f4-079c2b81b066"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:52:31 crc kubenswrapper[4754]: I0105 21:52:31.897701 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a0018fe-7cb6-4c1e-a6f4-079c2b81b066-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:31 crc kubenswrapper[4754]: I0105 21:52:31.897730 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9z68\" (UniqueName: \"kubernetes.io/projected/8a0018fe-7cb6-4c1e-a6f4-079c2b81b066-kube-api-access-r9z68\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:31 crc kubenswrapper[4754]: I0105 21:52:31.897742 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a0018fe-7cb6-4c1e-a6f4-079c2b81b066-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:32 crc kubenswrapper[4754]: I0105 21:52:32.066392 4754 generic.go:334] "Generic (PLEG): container finished" podID="8a0018fe-7cb6-4c1e-a6f4-079c2b81b066" containerID="a961f8d7aa30644fe83ecd52fd7ec5e6dad7c164e4ee0f1f6a737788fa952391" exitCode=0 Jan 05 21:52:32 crc kubenswrapper[4754]: I0105 21:52:32.066461 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtbs8" event={"ID":"8a0018fe-7cb6-4c1e-a6f4-079c2b81b066","Type":"ContainerDied","Data":"a961f8d7aa30644fe83ecd52fd7ec5e6dad7c164e4ee0f1f6a737788fa952391"} Jan 05 21:52:32 crc kubenswrapper[4754]: I0105 21:52:32.066503 4754 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-vtbs8" event={"ID":"8a0018fe-7cb6-4c1e-a6f4-079c2b81b066","Type":"ContainerDied","Data":"32fcc1982a05c50e48abcde790eac618722f479c2362b7261e1d4259de9d4dce"} Jan 05 21:52:32 crc kubenswrapper[4754]: I0105 21:52:32.066537 4754 scope.go:117] "RemoveContainer" containerID="a961f8d7aa30644fe83ecd52fd7ec5e6dad7c164e4ee0f1f6a737788fa952391" Jan 05 21:52:32 crc kubenswrapper[4754]: I0105 21:52:32.066761 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vtbs8" Jan 05 21:52:32 crc kubenswrapper[4754]: I0105 21:52:32.114840 4754 scope.go:117] "RemoveContainer" containerID="49c442a6fee3d7ce484126157d9a1515f73bd8c468e5d92b9fbc0f2a0ff109d2" Jan 05 21:52:32 crc kubenswrapper[4754]: I0105 21:52:32.124447 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vtbs8"] Jan 05 21:52:32 crc kubenswrapper[4754]: I0105 21:52:32.139020 4754 scope.go:117] "RemoveContainer" containerID="bf9ea8727c46d9b53d94388881157d24668fde23332772eaecce8f2326809426" Jan 05 21:52:32 crc kubenswrapper[4754]: I0105 21:52:32.141939 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vtbs8"] Jan 05 21:52:32 crc kubenswrapper[4754]: I0105 21:52:32.207153 4754 scope.go:117] "RemoveContainer" containerID="a961f8d7aa30644fe83ecd52fd7ec5e6dad7c164e4ee0f1f6a737788fa952391" Jan 05 21:52:32 crc kubenswrapper[4754]: E0105 21:52:32.208182 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a961f8d7aa30644fe83ecd52fd7ec5e6dad7c164e4ee0f1f6a737788fa952391\": container with ID starting with a961f8d7aa30644fe83ecd52fd7ec5e6dad7c164e4ee0f1f6a737788fa952391 not found: ID does not exist" containerID="a961f8d7aa30644fe83ecd52fd7ec5e6dad7c164e4ee0f1f6a737788fa952391" Jan 05 21:52:32 crc kubenswrapper[4754]: I0105 
21:52:32.208250 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a961f8d7aa30644fe83ecd52fd7ec5e6dad7c164e4ee0f1f6a737788fa952391"} err="failed to get container status \"a961f8d7aa30644fe83ecd52fd7ec5e6dad7c164e4ee0f1f6a737788fa952391\": rpc error: code = NotFound desc = could not find container \"a961f8d7aa30644fe83ecd52fd7ec5e6dad7c164e4ee0f1f6a737788fa952391\": container with ID starting with a961f8d7aa30644fe83ecd52fd7ec5e6dad7c164e4ee0f1f6a737788fa952391 not found: ID does not exist" Jan 05 21:52:32 crc kubenswrapper[4754]: I0105 21:52:32.208326 4754 scope.go:117] "RemoveContainer" containerID="49c442a6fee3d7ce484126157d9a1515f73bd8c468e5d92b9fbc0f2a0ff109d2" Jan 05 21:52:32 crc kubenswrapper[4754]: E0105 21:52:32.208645 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49c442a6fee3d7ce484126157d9a1515f73bd8c468e5d92b9fbc0f2a0ff109d2\": container with ID starting with 49c442a6fee3d7ce484126157d9a1515f73bd8c468e5d92b9fbc0f2a0ff109d2 not found: ID does not exist" containerID="49c442a6fee3d7ce484126157d9a1515f73bd8c468e5d92b9fbc0f2a0ff109d2" Jan 05 21:52:32 crc kubenswrapper[4754]: I0105 21:52:32.208688 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49c442a6fee3d7ce484126157d9a1515f73bd8c468e5d92b9fbc0f2a0ff109d2"} err="failed to get container status \"49c442a6fee3d7ce484126157d9a1515f73bd8c468e5d92b9fbc0f2a0ff109d2\": rpc error: code = NotFound desc = could not find container \"49c442a6fee3d7ce484126157d9a1515f73bd8c468e5d92b9fbc0f2a0ff109d2\": container with ID starting with 49c442a6fee3d7ce484126157d9a1515f73bd8c468e5d92b9fbc0f2a0ff109d2 not found: ID does not exist" Jan 05 21:52:32 crc kubenswrapper[4754]: I0105 21:52:32.208715 4754 scope.go:117] "RemoveContainer" containerID="bf9ea8727c46d9b53d94388881157d24668fde23332772eaecce8f2326809426" Jan 05 21:52:32 crc 
kubenswrapper[4754]: E0105 21:52:32.209057 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf9ea8727c46d9b53d94388881157d24668fde23332772eaecce8f2326809426\": container with ID starting with bf9ea8727c46d9b53d94388881157d24668fde23332772eaecce8f2326809426 not found: ID does not exist" containerID="bf9ea8727c46d9b53d94388881157d24668fde23332772eaecce8f2326809426" Jan 05 21:52:32 crc kubenswrapper[4754]: I0105 21:52:32.209107 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf9ea8727c46d9b53d94388881157d24668fde23332772eaecce8f2326809426"} err="failed to get container status \"bf9ea8727c46d9b53d94388881157d24668fde23332772eaecce8f2326809426\": rpc error: code = NotFound desc = could not find container \"bf9ea8727c46d9b53d94388881157d24668fde23332772eaecce8f2326809426\": container with ID starting with bf9ea8727c46d9b53d94388881157d24668fde23332772eaecce8f2326809426 not found: ID does not exist" Jan 05 21:52:33 crc kubenswrapper[4754]: I0105 21:52:33.615456 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a0018fe-7cb6-4c1e-a6f4-079c2b81b066" path="/var/lib/kubelet/pods/8a0018fe-7cb6-4c1e-a6f4-079c2b81b066/volumes" Jan 05 21:52:36 crc kubenswrapper[4754]: I0105 21:52:36.588693 4754 scope.go:117] "RemoveContainer" containerID="169d4061b649bc99aa659a822f718adfbc994f64468c16f7e58148dd700ac0f6" Jan 05 21:52:36 crc kubenswrapper[4754]: E0105 21:52:36.589204 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:52:39 crc 
kubenswrapper[4754]: I0105 21:52:39.520615 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-nqnfz_52d348a3-de69-45cc-a625-18fdac495103/cert-manager-controller/0.log" Jan 05 21:52:39 crc kubenswrapper[4754]: I0105 21:52:39.718232 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-jjd7h_ff71a71f-4340-4712-a3ec-606c3bc81013/cert-manager-cainjector/0.log" Jan 05 21:52:39 crc kubenswrapper[4754]: I0105 21:52:39.779504 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-k59mk_616c6f2a-f08e-450d-9ff1-cad7a75e25b2/cert-manager-webhook/0.log" Jan 05 21:52:50 crc kubenswrapper[4754]: I0105 21:52:50.588992 4754 scope.go:117] "RemoveContainer" containerID="169d4061b649bc99aa659a822f718adfbc994f64468c16f7e58148dd700ac0f6" Jan 05 21:52:50 crc kubenswrapper[4754]: E0105 21:52:50.590051 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:52:53 crc kubenswrapper[4754]: I0105 21:52:53.074363 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6ff7998486-7kdcl_d032a820-661f-4cd1-840e-fd0603d8e1b7/nmstate-console-plugin/0.log" Jan 05 21:52:53 crc kubenswrapper[4754]: I0105 21:52:53.265900 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-qcjf8_957087f5-55fd-4a40-a01c-f96bf31dacf8/nmstate-handler/0.log" Jan 05 21:52:53 crc kubenswrapper[4754]: I0105 21:52:53.305683 4754 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-hh5sg_7532804d-ccc6-4ba2-8803-ed9654864ad0/kube-rbac-proxy/0.log" Jan 05 21:52:53 crc kubenswrapper[4754]: I0105 21:52:53.391894 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-hh5sg_7532804d-ccc6-4ba2-8803-ed9654864ad0/nmstate-metrics/0.log" Jan 05 21:52:53 crc kubenswrapper[4754]: I0105 21:52:53.460989 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-6769fb99d-xp7lv_656675a9-7aaf-4104-afb7-0221062cf486/nmstate-operator/0.log" Jan 05 21:52:53 crc kubenswrapper[4754]: I0105 21:52:53.563940 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-f8fb84555-ht8zq_eb9a96b4-392b-43f5-ad59-4e7cd4171f33/nmstate-webhook/0.log" Jan 05 21:53:02 crc kubenswrapper[4754]: I0105 21:53:02.588505 4754 scope.go:117] "RemoveContainer" containerID="169d4061b649bc99aa659a822f718adfbc994f64468c16f7e58148dd700ac0f6" Jan 05 21:53:02 crc kubenswrapper[4754]: E0105 21:53:02.589379 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:53:05 crc kubenswrapper[4754]: I0105 21:53:05.928526 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5c7d94bdc4-k9968_163085a0-0b43-4d21-aefc-ec28ba9c6e3f/kube-rbac-proxy/0.log" Jan 05 21:53:05 crc kubenswrapper[4754]: I0105 21:53:05.939041 4754 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5c7d94bdc4-k9968_163085a0-0b43-4d21-aefc-ec28ba9c6e3f/manager/1.log" Jan 05 21:53:06 crc kubenswrapper[4754]: I0105 21:53:06.124021 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5c7d94bdc4-k9968_163085a0-0b43-4d21-aefc-ec28ba9c6e3f/manager/0.log" Jan 05 21:53:16 crc kubenswrapper[4754]: I0105 21:53:16.589373 4754 scope.go:117] "RemoveContainer" containerID="169d4061b649bc99aa659a822f718adfbc994f64468c16f7e58148dd700ac0f6" Jan 05 21:53:16 crc kubenswrapper[4754]: E0105 21:53:16.590188 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:53:20 crc kubenswrapper[4754]: I0105 21:53:20.100607 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-79cf69ddc8-hj2gf_e58a5f7b-2ae6-44ca-a299-b99e1dc283fe/cluster-logging-operator/0.log" Jan 05 21:53:20 crc kubenswrapper[4754]: I0105 21:53:20.274896 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-64f4t_c49d5d9c-150d-41cf-8fa3-a4484867b841/collector/0.log" Jan 05 21:53:20 crc kubenswrapper[4754]: I0105 21:53:20.334103 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_8970d80f-9277-46ca-ba45-c09e3362c3e2/loki-compactor/0.log" Jan 05 21:53:20 crc kubenswrapper[4754]: I0105 21:53:20.488649 4754 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-logging_logging-loki-distributor-5f678c8dd6-pd7zx_4ebdfefc-77a3-4dca-a664-5468209724ec/loki-distributor/0.log" Jan 05 21:53:20 crc kubenswrapper[4754]: I0105 21:53:20.531192 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-586cd7f6-c4rps_85a07def-c26c-49aa-ae32-c7772e9ebecc/gateway/0.log" Jan 05 21:53:20 crc kubenswrapper[4754]: I0105 21:53:20.600350 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-586cd7f6-c4rps_85a07def-c26c-49aa-ae32-c7772e9ebecc/opa/0.log" Jan 05 21:53:20 crc kubenswrapper[4754]: I0105 21:53:20.712771 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-586cd7f6-k6trg_cef8ee76-7c6e-420e-8c38-a7ad816cd513/gateway/0.log" Jan 05 21:53:20 crc kubenswrapper[4754]: I0105 21:53:20.717126 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-586cd7f6-k6trg_cef8ee76-7c6e-420e-8c38-a7ad816cd513/opa/0.log" Jan 05 21:53:20 crc kubenswrapper[4754]: I0105 21:53:20.850008 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_85b1b29c-72d4-41ff-8185-f1cd738be7db/loki-index-gateway/0.log" Jan 05 21:53:20 crc kubenswrapper[4754]: I0105 21:53:20.947782 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_7f8103c6-3d68-4568-ae3b-89f606aa116a/loki-ingester/0.log" Jan 05 21:53:21 crc kubenswrapper[4754]: I0105 21:53:21.113689 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-76788598db-qfwrj_d8b747f5-71f0-48b5-aae8-375ef3d8ef00/loki-querier/0.log" Jan 05 21:53:21 crc kubenswrapper[4754]: I0105 21:53:21.127732 4754 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-logging_logging-loki-query-frontend-69d9546745-swdgc_dbe68fed-2285-4e97-9c3d-d9fb903dc682/loki-query-frontend/0.log" Jan 05 21:53:28 crc kubenswrapper[4754]: I0105 21:53:28.590414 4754 scope.go:117] "RemoveContainer" containerID="169d4061b649bc99aa659a822f718adfbc994f64468c16f7e58148dd700ac0f6" Jan 05 21:53:28 crc kubenswrapper[4754]: E0105 21:53:28.591503 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:53:35 crc kubenswrapper[4754]: I0105 21:53:35.143972 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-b62cb_2f15c30e-3828-471c-8e71-3573735397a1/kube-rbac-proxy/0.log" Jan 05 21:53:35 crc kubenswrapper[4754]: I0105 21:53:35.352458 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-b62cb_2f15c30e-3828-471c-8e71-3573735397a1/controller/0.log" Jan 05 21:53:35 crc kubenswrapper[4754]: I0105 21:53:35.403806 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gkzx4_c584577b-8f80-4506-9fa5-3f8e9df40f02/cp-frr-files/0.log" Jan 05 21:53:35 crc kubenswrapper[4754]: I0105 21:53:35.593427 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gkzx4_c584577b-8f80-4506-9fa5-3f8e9df40f02/cp-metrics/0.log" Jan 05 21:53:35 crc kubenswrapper[4754]: I0105 21:53:35.622187 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gkzx4_c584577b-8f80-4506-9fa5-3f8e9df40f02/cp-reloader/0.log" Jan 05 21:53:35 crc kubenswrapper[4754]: I0105 21:53:35.625382 4754 log.go:25] "Finished 
parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gkzx4_c584577b-8f80-4506-9fa5-3f8e9df40f02/cp-frr-files/0.log" Jan 05 21:53:35 crc kubenswrapper[4754]: I0105 21:53:35.654131 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gkzx4_c584577b-8f80-4506-9fa5-3f8e9df40f02/cp-reloader/0.log" Jan 05 21:53:35 crc kubenswrapper[4754]: I0105 21:53:35.871770 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gkzx4_c584577b-8f80-4506-9fa5-3f8e9df40f02/cp-reloader/0.log" Jan 05 21:53:35 crc kubenswrapper[4754]: I0105 21:53:35.873926 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gkzx4_c584577b-8f80-4506-9fa5-3f8e9df40f02/cp-frr-files/0.log" Jan 05 21:53:35 crc kubenswrapper[4754]: I0105 21:53:35.874274 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gkzx4_c584577b-8f80-4506-9fa5-3f8e9df40f02/cp-metrics/0.log" Jan 05 21:53:35 crc kubenswrapper[4754]: I0105 21:53:35.877394 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gkzx4_c584577b-8f80-4506-9fa5-3f8e9df40f02/cp-metrics/0.log" Jan 05 21:53:36 crc kubenswrapper[4754]: I0105 21:53:36.075411 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gkzx4_c584577b-8f80-4506-9fa5-3f8e9df40f02/cp-frr-files/0.log" Jan 05 21:53:36 crc kubenswrapper[4754]: I0105 21:53:36.081775 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gkzx4_c584577b-8f80-4506-9fa5-3f8e9df40f02/cp-metrics/0.log" Jan 05 21:53:36 crc kubenswrapper[4754]: I0105 21:53:36.097474 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gkzx4_c584577b-8f80-4506-9fa5-3f8e9df40f02/controller/0.log" Jan 05 21:53:36 crc kubenswrapper[4754]: I0105 21:53:36.120852 4754 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-gkzx4_c584577b-8f80-4506-9fa5-3f8e9df40f02/cp-reloader/0.log" Jan 05 21:53:36 crc kubenswrapper[4754]: I0105 21:53:36.294053 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gkzx4_c584577b-8f80-4506-9fa5-3f8e9df40f02/kube-rbac-proxy/0.log" Jan 05 21:53:36 crc kubenswrapper[4754]: I0105 21:53:36.297088 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gkzx4_c584577b-8f80-4506-9fa5-3f8e9df40f02/frr-metrics/0.log" Jan 05 21:53:36 crc kubenswrapper[4754]: I0105 21:53:36.387278 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gkzx4_c584577b-8f80-4506-9fa5-3f8e9df40f02/kube-rbac-proxy-frr/0.log" Jan 05 21:53:36 crc kubenswrapper[4754]: I0105 21:53:36.518396 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gkzx4_c584577b-8f80-4506-9fa5-3f8e9df40f02/reloader/0.log" Jan 05 21:53:36 crc kubenswrapper[4754]: I0105 21:53:36.631246 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7784b6fcf-f7vfs_1b800ad7-0ece-4722-9ad1-e20a2b5c7d42/frr-k8s-webhook-server/0.log" Jan 05 21:53:36 crc kubenswrapper[4754]: I0105 21:53:36.856683 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-79f77fb8f4-2q9lk_823e1e7d-9555-4324-a7aa-6add85d4d9f3/manager/1.log" Jan 05 21:53:36 crc kubenswrapper[4754]: I0105 21:53:36.861070 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-79f77fb8f4-2q9lk_823e1e7d-9555-4324-a7aa-6add85d4d9f3/manager/0.log" Jan 05 21:53:37 crc kubenswrapper[4754]: I0105 21:53:37.109797 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-75f4999fb9-2ss2h_cdb2b1f2-eb13-466c-bc69-5cb4307eb695/webhook-server/0.log" Jan 05 21:53:37 crc kubenswrapper[4754]: 
I0105 21:53:37.169952 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-z6whp_8118d4b3-34f3-49b4-ab29-1a2b17adacfb/kube-rbac-proxy/0.log" Jan 05 21:53:37 crc kubenswrapper[4754]: I0105 21:53:37.655082 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-z6whp_8118d4b3-34f3-49b4-ab29-1a2b17adacfb/speaker/1.log" Jan 05 21:53:38 crc kubenswrapper[4754]: I0105 21:53:38.038362 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-z6whp_8118d4b3-34f3-49b4-ab29-1a2b17adacfb/speaker/0.log" Jan 05 21:53:38 crc kubenswrapper[4754]: I0105 21:53:38.250861 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gkzx4_c584577b-8f80-4506-9fa5-3f8e9df40f02/frr/0.log" Jan 05 21:53:41 crc kubenswrapper[4754]: I0105 21:53:41.589534 4754 scope.go:117] "RemoveContainer" containerID="169d4061b649bc99aa659a822f718adfbc994f64468c16f7e58148dd700ac0f6" Jan 05 21:53:41 crc kubenswrapper[4754]: E0105 21:53:41.590086 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:53:51 crc kubenswrapper[4754]: I0105 21:53:51.538280 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2x6gfw_025b0b79-3f3c-45ec-bfda-fc89123813af/util/0.log" Jan 05 21:53:51 crc kubenswrapper[4754]: I0105 21:53:51.692670 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2x6gfw_025b0b79-3f3c-45ec-bfda-fc89123813af/util/0.log" Jan 05 
21:53:51 crc kubenswrapper[4754]: I0105 21:53:51.728164 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2x6gfw_025b0b79-3f3c-45ec-bfda-fc89123813af/pull/0.log" Jan 05 21:53:51 crc kubenswrapper[4754]: I0105 21:53:51.735573 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2x6gfw_025b0b79-3f3c-45ec-bfda-fc89123813af/pull/0.log" Jan 05 21:53:51 crc kubenswrapper[4754]: I0105 21:53:51.898781 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2x6gfw_025b0b79-3f3c-45ec-bfda-fc89123813af/pull/0.log" Jan 05 21:53:51 crc kubenswrapper[4754]: I0105 21:53:51.925411 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2x6gfw_025b0b79-3f3c-45ec-bfda-fc89123813af/extract/0.log" Jan 05 21:53:51 crc kubenswrapper[4754]: I0105 21:53:51.949537 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2x6gfw_025b0b79-3f3c-45ec-bfda-fc89123813af/util/0.log" Jan 05 21:53:52 crc kubenswrapper[4754]: I0105 21:53:52.061708 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkmlrf_e56f1313-f154-41eb-b7ce-95aba2d55d7d/util/0.log" Jan 05 21:53:52 crc kubenswrapper[4754]: I0105 21:53:52.269023 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkmlrf_e56f1313-f154-41eb-b7ce-95aba2d55d7d/util/0.log" Jan 05 21:53:52 crc kubenswrapper[4754]: I0105 21:53:52.270378 4754 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkmlrf_e56f1313-f154-41eb-b7ce-95aba2d55d7d/pull/0.log" Jan 05 21:53:52 crc kubenswrapper[4754]: I0105 21:53:52.303804 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkmlrf_e56f1313-f154-41eb-b7ce-95aba2d55d7d/pull/0.log" Jan 05 21:53:52 crc kubenswrapper[4754]: I0105 21:53:52.442406 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkmlrf_e56f1313-f154-41eb-b7ce-95aba2d55d7d/pull/0.log" Jan 05 21:53:52 crc kubenswrapper[4754]: I0105 21:53:52.472602 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkmlrf_e56f1313-f154-41eb-b7ce-95aba2d55d7d/util/0.log" Jan 05 21:53:52 crc kubenswrapper[4754]: I0105 21:53:52.475153 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkmlrf_e56f1313-f154-41eb-b7ce-95aba2d55d7d/extract/0.log" Jan 05 21:53:52 crc kubenswrapper[4754]: I0105 21:53:52.610905 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4shxt7_d37d6c73-795d-4807-bbe3-ac09382c3f1c/util/0.log" Jan 05 21:53:52 crc kubenswrapper[4754]: I0105 21:53:52.771702 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4shxt7_d37d6c73-795d-4807-bbe3-ac09382c3f1c/util/0.log" Jan 05 21:53:52 crc kubenswrapper[4754]: I0105 21:53:52.779728 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4shxt7_d37d6c73-795d-4807-bbe3-ac09382c3f1c/pull/0.log" Jan 05 
21:53:52 crc kubenswrapper[4754]: I0105 21:53:52.802605 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4shxt7_d37d6c73-795d-4807-bbe3-ac09382c3f1c/pull/0.log" Jan 05 21:53:52 crc kubenswrapper[4754]: I0105 21:53:52.946592 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4shxt7_d37d6c73-795d-4807-bbe3-ac09382c3f1c/util/0.log" Jan 05 21:53:52 crc kubenswrapper[4754]: I0105 21:53:52.962712 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4shxt7_d37d6c73-795d-4807-bbe3-ac09382c3f1c/pull/0.log" Jan 05 21:53:52 crc kubenswrapper[4754]: I0105 21:53:52.989219 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4shxt7_d37d6c73-795d-4807-bbe3-ac09382c3f1c/extract/0.log" Jan 05 21:53:53 crc kubenswrapper[4754]: I0105 21:53:53.143554 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa84ztr5_cf605837-8d65-4b71-be63-362acdce07b5/util/0.log" Jan 05 21:53:53 crc kubenswrapper[4754]: I0105 21:53:53.293826 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa84ztr5_cf605837-8d65-4b71-be63-362acdce07b5/util/0.log" Jan 05 21:53:53 crc kubenswrapper[4754]: I0105 21:53:53.309792 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa84ztr5_cf605837-8d65-4b71-be63-362acdce07b5/pull/0.log" Jan 05 21:53:53 crc kubenswrapper[4754]: I0105 21:53:53.334689 4754 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa84ztr5_cf605837-8d65-4b71-be63-362acdce07b5/pull/0.log" Jan 05 21:53:53 crc kubenswrapper[4754]: I0105 21:53:53.486384 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa84ztr5_cf605837-8d65-4b71-be63-362acdce07b5/extract/0.log" Jan 05 21:53:53 crc kubenswrapper[4754]: I0105 21:53:53.495903 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa84ztr5_cf605837-8d65-4b71-be63-362acdce07b5/pull/0.log" Jan 05 21:53:53 crc kubenswrapper[4754]: I0105 21:53:53.523308 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa84ztr5_cf605837-8d65-4b71-be63-362acdce07b5/util/0.log" Jan 05 21:53:53 crc kubenswrapper[4754]: I0105 21:53:53.599969 4754 scope.go:117] "RemoveContainer" containerID="169d4061b649bc99aa659a822f718adfbc994f64468c16f7e58148dd700ac0f6" Jan 05 21:53:53 crc kubenswrapper[4754]: E0105 21:53:53.600354 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:53:53 crc kubenswrapper[4754]: I0105 21:53:53.669954 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q46ng_73705d65-cc7d-4dd7-88ec-8d699ab40cfc/util/0.log" Jan 05 21:53:53 crc kubenswrapper[4754]: I0105 21:53:53.861339 4754 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q46ng_73705d65-cc7d-4dd7-88ec-8d699ab40cfc/util/0.log" Jan 05 21:53:53 crc kubenswrapper[4754]: I0105 21:53:53.886533 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q46ng_73705d65-cc7d-4dd7-88ec-8d699ab40cfc/pull/0.log" Jan 05 21:53:53 crc kubenswrapper[4754]: I0105 21:53:53.890274 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q46ng_73705d65-cc7d-4dd7-88ec-8d699ab40cfc/pull/0.log" Jan 05 21:53:54 crc kubenswrapper[4754]: I0105 21:53:54.068012 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q46ng_73705d65-cc7d-4dd7-88ec-8d699ab40cfc/util/0.log" Jan 05 21:53:54 crc kubenswrapper[4754]: I0105 21:53:54.079744 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q46ng_73705d65-cc7d-4dd7-88ec-8d699ab40cfc/pull/0.log" Jan 05 21:53:54 crc kubenswrapper[4754]: I0105 21:53:54.101738 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q46ng_73705d65-cc7d-4dd7-88ec-8d699ab40cfc/extract/0.log" Jan 05 21:53:54 crc kubenswrapper[4754]: I0105 21:53:54.252529 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-j8r5p_5f585df1-958f-4733-a720-2d37460d2b12/extract-utilities/0.log" Jan 05 21:53:54 crc kubenswrapper[4754]: I0105 21:53:54.424377 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-j8r5p_5f585df1-958f-4733-a720-2d37460d2b12/extract-utilities/0.log" Jan 05 21:53:54 crc kubenswrapper[4754]: I0105 
21:53:54.457480 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-j8r5p_5f585df1-958f-4733-a720-2d37460d2b12/extract-content/0.log" Jan 05 21:53:54 crc kubenswrapper[4754]: I0105 21:53:54.487563 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-j8r5p_5f585df1-958f-4733-a720-2d37460d2b12/extract-content/0.log" Jan 05 21:53:54 crc kubenswrapper[4754]: I0105 21:53:54.627620 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-j8r5p_5f585df1-958f-4733-a720-2d37460d2b12/extract-utilities/0.log" Jan 05 21:53:54 crc kubenswrapper[4754]: I0105 21:53:54.631639 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-j8r5p_5f585df1-958f-4733-a720-2d37460d2b12/extract-content/0.log" Jan 05 21:53:55 crc kubenswrapper[4754]: I0105 21:53:55.031611 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-j8r5p_5f585df1-958f-4733-a720-2d37460d2b12/registry-server/1.log" Jan 05 21:53:55 crc kubenswrapper[4754]: I0105 21:53:55.071739 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-n9kmf_187150cd-d7a9-4dfd-8151-0e6a88e82ddc/extract-utilities/0.log" Jan 05 21:53:55 crc kubenswrapper[4754]: I0105 21:53:55.293976 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-n9kmf_187150cd-d7a9-4dfd-8151-0e6a88e82ddc/extract-utilities/0.log" Jan 05 21:53:55 crc kubenswrapper[4754]: I0105 21:53:55.309455 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-n9kmf_187150cd-d7a9-4dfd-8151-0e6a88e82ddc/extract-content/0.log" Jan 05 21:53:55 crc kubenswrapper[4754]: I0105 21:53:55.357231 4754 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-n9kmf_187150cd-d7a9-4dfd-8151-0e6a88e82ddc/extract-content/0.log" Jan 05 21:53:55 crc kubenswrapper[4754]: I0105 21:53:55.471851 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-j8r5p_5f585df1-958f-4733-a720-2d37460d2b12/registry-server/0.log" Jan 05 21:53:55 crc kubenswrapper[4754]: I0105 21:53:55.523010 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-n9kmf_187150cd-d7a9-4dfd-8151-0e6a88e82ddc/extract-utilities/0.log" Jan 05 21:53:55 crc kubenswrapper[4754]: I0105 21:53:55.617247 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-n9kmf_187150cd-d7a9-4dfd-8151-0e6a88e82ddc/extract-content/0.log" Jan 05 21:53:55 crc kubenswrapper[4754]: I0105 21:53:55.772901 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-ggkzj_41d97351-8dc4-42de-bf00-4e8abbf24e0b/marketplace-operator/0.log" Jan 05 21:53:55 crc kubenswrapper[4754]: I0105 21:53:55.924248 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-n9kmf_187150cd-d7a9-4dfd-8151-0e6a88e82ddc/registry-server/1.log" Jan 05 21:53:56 crc kubenswrapper[4754]: I0105 21:53:55.999984 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-tjtwh_3af58cc4-e753-4eb0-91c9-8b93516d665e/extract-utilities/0.log" Jan 05 21:53:56 crc kubenswrapper[4754]: I0105 21:53:56.189388 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-tjtwh_3af58cc4-e753-4eb0-91c9-8b93516d665e/extract-content/0.log" Jan 05 21:53:56 crc kubenswrapper[4754]: I0105 21:53:56.214524 4754 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-tjtwh_3af58cc4-e753-4eb0-91c9-8b93516d665e/extract-utilities/0.log" Jan 05 21:53:56 crc kubenswrapper[4754]: I0105 21:53:56.292140 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-tjtwh_3af58cc4-e753-4eb0-91c9-8b93516d665e/extract-content/0.log" Jan 05 21:53:56 crc kubenswrapper[4754]: I0105 21:53:56.509449 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-n9kmf_187150cd-d7a9-4dfd-8151-0e6a88e82ddc/registry-server/0.log" Jan 05 21:53:56 crc kubenswrapper[4754]: I0105 21:53:56.524535 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-tjtwh_3af58cc4-e753-4eb0-91c9-8b93516d665e/extract-utilities/0.log" Jan 05 21:53:56 crc kubenswrapper[4754]: I0105 21:53:56.528302 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-tjtwh_3af58cc4-e753-4eb0-91c9-8b93516d665e/extract-content/0.log" Jan 05 21:53:56 crc kubenswrapper[4754]: I0105 21:53:56.713516 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kk2wq_dabb102c-20ff-4424-95d7-d26f22f594f5/extract-utilities/0.log" Jan 05 21:53:56 crc kubenswrapper[4754]: I0105 21:53:56.719530 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-tjtwh_3af58cc4-e753-4eb0-91c9-8b93516d665e/registry-server/0.log" Jan 05 21:53:56 crc kubenswrapper[4754]: I0105 21:53:56.850349 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kk2wq_dabb102c-20ff-4424-95d7-d26f22f594f5/extract-utilities/0.log" Jan 05 21:53:56 crc kubenswrapper[4754]: I0105 21:53:56.856256 4754 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-kk2wq_dabb102c-20ff-4424-95d7-d26f22f594f5/extract-content/0.log" Jan 05 21:53:56 crc kubenswrapper[4754]: I0105 21:53:56.867272 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kk2wq_dabb102c-20ff-4424-95d7-d26f22f594f5/extract-content/0.log" Jan 05 21:53:57 crc kubenswrapper[4754]: I0105 21:53:57.007116 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kk2wq_dabb102c-20ff-4424-95d7-d26f22f594f5/extract-utilities/0.log" Jan 05 21:53:57 crc kubenswrapper[4754]: I0105 21:53:57.041764 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kk2wq_dabb102c-20ff-4424-95d7-d26f22f594f5/extract-content/0.log" Jan 05 21:53:57 crc kubenswrapper[4754]: I0105 21:53:57.761247 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kk2wq_dabb102c-20ff-4424-95d7-d26f22f594f5/registry-server/0.log" Jan 05 21:54:04 crc kubenswrapper[4754]: I0105 21:54:04.589802 4754 scope.go:117] "RemoveContainer" containerID="169d4061b649bc99aa659a822f718adfbc994f64468c16f7e58148dd700ac0f6" Jan 05 21:54:04 crc kubenswrapper[4754]: E0105 21:54:04.591383 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:54:10 crc kubenswrapper[4754]: I0105 21:54:10.571374 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-2v5pd_d67e65cb-cb5a-4721-ba27-3b40ce273ee5/prometheus-operator/0.log" Jan 05 21:54:10 crc 
kubenswrapper[4754]: I0105 21:54:10.740884 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-78b6f954c6-2m5zf_29717236-39b9-4568-8df9-028b84d46be8/prometheus-operator-admission-webhook/0.log" Jan 05 21:54:10 crc kubenswrapper[4754]: I0105 21:54:10.790857 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-78b6f954c6-42sdd_a56084cf-6295-48a4-baec-a0fe4f306658/prometheus-operator-admission-webhook/0.log" Jan 05 21:54:10 crc kubenswrapper[4754]: I0105 21:54:10.953056 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-27lzx_96d98210-f390-413e-8fe4-96ec610d2071/operator/0.log" Jan 05 21:54:11 crc kubenswrapper[4754]: I0105 21:54:11.030960 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-gf6gw_845fcf94-25fb-4f7e-b4a1-d3afa86f2f6d/observability-ui-dashboards/0.log" Jan 05 21:54:11 crc kubenswrapper[4754]: I0105 21:54:11.142975 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-9bgtx_f834cfd3-fa41-4790-92c1-3b80d98241af/perses-operator/0.log" Jan 05 21:54:19 crc kubenswrapper[4754]: I0105 21:54:19.589150 4754 scope.go:117] "RemoveContainer" containerID="169d4061b649bc99aa659a822f718adfbc994f64468c16f7e58148dd700ac0f6" Jan 05 21:54:19 crc kubenswrapper[4754]: E0105 21:54:19.590136 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:54:24 crc 
kubenswrapper[4754]: I0105 21:54:24.732695 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5c7d94bdc4-k9968_163085a0-0b43-4d21-aefc-ec28ba9c6e3f/manager/0.log" Jan 05 21:54:24 crc kubenswrapper[4754]: I0105 21:54:24.788942 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5c7d94bdc4-k9968_163085a0-0b43-4d21-aefc-ec28ba9c6e3f/kube-rbac-proxy/0.log" Jan 05 21:54:24 crc kubenswrapper[4754]: I0105 21:54:24.806433 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5c7d94bdc4-k9968_163085a0-0b43-4d21-aefc-ec28ba9c6e3f/manager/1.log" Jan 05 21:54:30 crc kubenswrapper[4754]: I0105 21:54:30.852247 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7f4vx"] Jan 05 21:54:30 crc kubenswrapper[4754]: E0105 21:54:30.853322 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a0018fe-7cb6-4c1e-a6f4-079c2b81b066" containerName="registry-server" Jan 05 21:54:30 crc kubenswrapper[4754]: I0105 21:54:30.853335 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a0018fe-7cb6-4c1e-a6f4-079c2b81b066" containerName="registry-server" Jan 05 21:54:30 crc kubenswrapper[4754]: E0105 21:54:30.853360 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a0018fe-7cb6-4c1e-a6f4-079c2b81b066" containerName="extract-utilities" Jan 05 21:54:30 crc kubenswrapper[4754]: I0105 21:54:30.853367 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a0018fe-7cb6-4c1e-a6f4-079c2b81b066" containerName="extract-utilities" Jan 05 21:54:30 crc kubenswrapper[4754]: E0105 21:54:30.853384 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a0018fe-7cb6-4c1e-a6f4-079c2b81b066" containerName="extract-content" Jan 05 21:54:30 crc kubenswrapper[4754]: I0105 21:54:30.853389 4754 
state_mem.go:107] "Deleted CPUSet assignment" podUID="8a0018fe-7cb6-4c1e-a6f4-079c2b81b066" containerName="extract-content" Jan 05 21:54:30 crc kubenswrapper[4754]: I0105 21:54:30.853611 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a0018fe-7cb6-4c1e-a6f4-079c2b81b066" containerName="registry-server" Jan 05 21:54:30 crc kubenswrapper[4754]: I0105 21:54:30.855248 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7f4vx" Jan 05 21:54:30 crc kubenswrapper[4754]: I0105 21:54:30.889999 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7f4vx"] Jan 05 21:54:30 crc kubenswrapper[4754]: I0105 21:54:30.914040 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4706d822-32f3-414d-b635-32cc96ecfde2-utilities\") pod \"redhat-operators-7f4vx\" (UID: \"4706d822-32f3-414d-b635-32cc96ecfde2\") " pod="openshift-marketplace/redhat-operators-7f4vx" Jan 05 21:54:30 crc kubenswrapper[4754]: I0105 21:54:30.914240 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4706d822-32f3-414d-b635-32cc96ecfde2-catalog-content\") pod \"redhat-operators-7f4vx\" (UID: \"4706d822-32f3-414d-b635-32cc96ecfde2\") " pod="openshift-marketplace/redhat-operators-7f4vx" Jan 05 21:54:30 crc kubenswrapper[4754]: I0105 21:54:30.914367 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8c7f\" (UniqueName: \"kubernetes.io/projected/4706d822-32f3-414d-b635-32cc96ecfde2-kube-api-access-s8c7f\") pod \"redhat-operators-7f4vx\" (UID: \"4706d822-32f3-414d-b635-32cc96ecfde2\") " pod="openshift-marketplace/redhat-operators-7f4vx" Jan 05 21:54:31 crc kubenswrapper[4754]: I0105 21:54:31.016040 4754 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8c7f\" (UniqueName: \"kubernetes.io/projected/4706d822-32f3-414d-b635-32cc96ecfde2-kube-api-access-s8c7f\") pod \"redhat-operators-7f4vx\" (UID: \"4706d822-32f3-414d-b635-32cc96ecfde2\") " pod="openshift-marketplace/redhat-operators-7f4vx" Jan 05 21:54:31 crc kubenswrapper[4754]: I0105 21:54:31.016419 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4706d822-32f3-414d-b635-32cc96ecfde2-utilities\") pod \"redhat-operators-7f4vx\" (UID: \"4706d822-32f3-414d-b635-32cc96ecfde2\") " pod="openshift-marketplace/redhat-operators-7f4vx" Jan 05 21:54:31 crc kubenswrapper[4754]: I0105 21:54:31.016628 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4706d822-32f3-414d-b635-32cc96ecfde2-catalog-content\") pod \"redhat-operators-7f4vx\" (UID: \"4706d822-32f3-414d-b635-32cc96ecfde2\") " pod="openshift-marketplace/redhat-operators-7f4vx" Jan 05 21:54:31 crc kubenswrapper[4754]: I0105 21:54:31.016810 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4706d822-32f3-414d-b635-32cc96ecfde2-utilities\") pod \"redhat-operators-7f4vx\" (UID: \"4706d822-32f3-414d-b635-32cc96ecfde2\") " pod="openshift-marketplace/redhat-operators-7f4vx" Jan 05 21:54:31 crc kubenswrapper[4754]: I0105 21:54:31.017051 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4706d822-32f3-414d-b635-32cc96ecfde2-catalog-content\") pod \"redhat-operators-7f4vx\" (UID: \"4706d822-32f3-414d-b635-32cc96ecfde2\") " pod="openshift-marketplace/redhat-operators-7f4vx" Jan 05 21:54:31 crc kubenswrapper[4754]: I0105 21:54:31.035403 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-s8c7f\" (UniqueName: \"kubernetes.io/projected/4706d822-32f3-414d-b635-32cc96ecfde2-kube-api-access-s8c7f\") pod \"redhat-operators-7f4vx\" (UID: \"4706d822-32f3-414d-b635-32cc96ecfde2\") " pod="openshift-marketplace/redhat-operators-7f4vx" Jan 05 21:54:31 crc kubenswrapper[4754]: I0105 21:54:31.184387 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7f4vx" Jan 05 21:54:31 crc kubenswrapper[4754]: I0105 21:54:31.780872 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7f4vx"] Jan 05 21:54:32 crc kubenswrapper[4754]: I0105 21:54:32.398862 4754 generic.go:334] "Generic (PLEG): container finished" podID="4706d822-32f3-414d-b635-32cc96ecfde2" containerID="680a5117f259f7c2fdeb1d645c6d13f499228850616ccb95b17d926d5521a127" exitCode=0 Jan 05 21:54:32 crc kubenswrapper[4754]: I0105 21:54:32.399160 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7f4vx" event={"ID":"4706d822-32f3-414d-b635-32cc96ecfde2","Type":"ContainerDied","Data":"680a5117f259f7c2fdeb1d645c6d13f499228850616ccb95b17d926d5521a127"} Jan 05 21:54:32 crc kubenswrapper[4754]: I0105 21:54:32.399187 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7f4vx" event={"ID":"4706d822-32f3-414d-b635-32cc96ecfde2","Type":"ContainerStarted","Data":"a7080bd7e809416ea1a0f8b9b5cc485e9b4d2a9b16abd9b2cb3d88c8468e435d"} Jan 05 21:54:32 crc kubenswrapper[4754]: I0105 21:54:32.401174 4754 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 05 21:54:33 crc kubenswrapper[4754]: I0105 21:54:33.596380 4754 scope.go:117] "RemoveContainer" containerID="169d4061b649bc99aa659a822f718adfbc994f64468c16f7e58148dd700ac0f6" Jan 05 21:54:33 crc kubenswrapper[4754]: E0105 21:54:33.596964 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:54:34 crc kubenswrapper[4754]: I0105 21:54:34.418168 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7f4vx" event={"ID":"4706d822-32f3-414d-b635-32cc96ecfde2","Type":"ContainerStarted","Data":"fddcc6bf73995296a8cf732362ff316329f815697c183e867e44a9166b39f854"} Jan 05 21:54:37 crc kubenswrapper[4754]: I0105 21:54:37.459703 4754 generic.go:334] "Generic (PLEG): container finished" podID="4706d822-32f3-414d-b635-32cc96ecfde2" containerID="fddcc6bf73995296a8cf732362ff316329f815697c183e867e44a9166b39f854" exitCode=0 Jan 05 21:54:37 crc kubenswrapper[4754]: I0105 21:54:37.459799 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7f4vx" event={"ID":"4706d822-32f3-414d-b635-32cc96ecfde2","Type":"ContainerDied","Data":"fddcc6bf73995296a8cf732362ff316329f815697c183e867e44a9166b39f854"} Jan 05 21:54:38 crc kubenswrapper[4754]: I0105 21:54:38.475759 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7f4vx" event={"ID":"4706d822-32f3-414d-b635-32cc96ecfde2","Type":"ContainerStarted","Data":"d88df40b4b49cc139a17d3b9552c2f68ad2386882cda4b387aae6e2b3720c0c3"} Jan 05 21:54:38 crc kubenswrapper[4754]: I0105 21:54:38.495652 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7f4vx" podStartSLOduration=3.003743518 podStartE2EDuration="8.49563378s" podCreationTimestamp="2026-01-05 21:54:30 +0000 UTC" firstStartedPulling="2026-01-05 21:54:32.400917293 +0000 UTC m=+6559.110101167" lastFinishedPulling="2026-01-05 
21:54:37.892807555 +0000 UTC m=+6564.601991429" observedRunningTime="2026-01-05 21:54:38.492490985 +0000 UTC m=+6565.201674859" watchObservedRunningTime="2026-01-05 21:54:38.49563378 +0000 UTC m=+6565.204817654" Jan 05 21:54:41 crc kubenswrapper[4754]: I0105 21:54:41.184477 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7f4vx" Jan 05 21:54:41 crc kubenswrapper[4754]: I0105 21:54:41.184962 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7f4vx" Jan 05 21:54:42 crc kubenswrapper[4754]: I0105 21:54:42.242810 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7f4vx" podUID="4706d822-32f3-414d-b635-32cc96ecfde2" containerName="registry-server" probeResult="failure" output=< Jan 05 21:54:42 crc kubenswrapper[4754]: timeout: failed to connect service ":50051" within 1s Jan 05 21:54:42 crc kubenswrapper[4754]: > Jan 05 21:54:46 crc kubenswrapper[4754]: I0105 21:54:46.590113 4754 scope.go:117] "RemoveContainer" containerID="169d4061b649bc99aa659a822f718adfbc994f64468c16f7e58148dd700ac0f6" Jan 05 21:54:46 crc kubenswrapper[4754]: E0105 21:54:46.591231 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:54:52 crc kubenswrapper[4754]: I0105 21:54:52.452311 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7f4vx" podUID="4706d822-32f3-414d-b635-32cc96ecfde2" containerName="registry-server" probeResult="failure" output=< Jan 05 21:54:52 crc kubenswrapper[4754]: 
timeout: failed to connect service ":50051" within 1s Jan 05 21:54:52 crc kubenswrapper[4754]: > Jan 05 21:54:57 crc kubenswrapper[4754]: I0105 21:54:57.600568 4754 scope.go:117] "RemoveContainer" containerID="169d4061b649bc99aa659a822f718adfbc994f64468c16f7e58148dd700ac0f6" Jan 05 21:54:57 crc kubenswrapper[4754]: E0105 21:54:57.602537 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:55:01 crc kubenswrapper[4754]: I0105 21:55:01.284913 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7f4vx" Jan 05 21:55:01 crc kubenswrapper[4754]: I0105 21:55:01.347020 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7f4vx" Jan 05 21:55:02 crc kubenswrapper[4754]: I0105 21:55:02.066917 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7f4vx"] Jan 05 21:55:02 crc kubenswrapper[4754]: I0105 21:55:02.750983 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7f4vx" podUID="4706d822-32f3-414d-b635-32cc96ecfde2" containerName="registry-server" containerID="cri-o://d88df40b4b49cc139a17d3b9552c2f68ad2386882cda4b387aae6e2b3720c0c3" gracePeriod=2 Jan 05 21:55:03 crc kubenswrapper[4754]: I0105 21:55:03.366699 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7f4vx" Jan 05 21:55:03 crc kubenswrapper[4754]: I0105 21:55:03.519433 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4706d822-32f3-414d-b635-32cc96ecfde2-utilities\") pod \"4706d822-32f3-414d-b635-32cc96ecfde2\" (UID: \"4706d822-32f3-414d-b635-32cc96ecfde2\") " Jan 05 21:55:03 crc kubenswrapper[4754]: I0105 21:55:03.519492 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8c7f\" (UniqueName: \"kubernetes.io/projected/4706d822-32f3-414d-b635-32cc96ecfde2-kube-api-access-s8c7f\") pod \"4706d822-32f3-414d-b635-32cc96ecfde2\" (UID: \"4706d822-32f3-414d-b635-32cc96ecfde2\") " Jan 05 21:55:03 crc kubenswrapper[4754]: I0105 21:55:03.519642 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4706d822-32f3-414d-b635-32cc96ecfde2-catalog-content\") pod \"4706d822-32f3-414d-b635-32cc96ecfde2\" (UID: \"4706d822-32f3-414d-b635-32cc96ecfde2\") " Jan 05 21:55:03 crc kubenswrapper[4754]: I0105 21:55:03.520661 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4706d822-32f3-414d-b635-32cc96ecfde2-utilities" (OuterVolumeSpecName: "utilities") pod "4706d822-32f3-414d-b635-32cc96ecfde2" (UID: "4706d822-32f3-414d-b635-32cc96ecfde2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:55:03 crc kubenswrapper[4754]: I0105 21:55:03.526113 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4706d822-32f3-414d-b635-32cc96ecfde2-kube-api-access-s8c7f" (OuterVolumeSpecName: "kube-api-access-s8c7f") pod "4706d822-32f3-414d-b635-32cc96ecfde2" (UID: "4706d822-32f3-414d-b635-32cc96ecfde2"). InnerVolumeSpecName "kube-api-access-s8c7f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:55:03 crc kubenswrapper[4754]: I0105 21:55:03.622190 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4706d822-32f3-414d-b635-32cc96ecfde2-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 21:55:03 crc kubenswrapper[4754]: I0105 21:55:03.622220 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8c7f\" (UniqueName: \"kubernetes.io/projected/4706d822-32f3-414d-b635-32cc96ecfde2-kube-api-access-s8c7f\") on node \"crc\" DevicePath \"\"" Jan 05 21:55:03 crc kubenswrapper[4754]: I0105 21:55:03.635240 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4706d822-32f3-414d-b635-32cc96ecfde2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4706d822-32f3-414d-b635-32cc96ecfde2" (UID: "4706d822-32f3-414d-b635-32cc96ecfde2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:55:03 crc kubenswrapper[4754]: I0105 21:55:03.724994 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4706d822-32f3-414d-b635-32cc96ecfde2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 21:55:03 crc kubenswrapper[4754]: I0105 21:55:03.777504 4754 generic.go:334] "Generic (PLEG): container finished" podID="4706d822-32f3-414d-b635-32cc96ecfde2" containerID="d88df40b4b49cc139a17d3b9552c2f68ad2386882cda4b387aae6e2b3720c0c3" exitCode=0 Jan 05 21:55:03 crc kubenswrapper[4754]: I0105 21:55:03.777586 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7f4vx" event={"ID":"4706d822-32f3-414d-b635-32cc96ecfde2","Type":"ContainerDied","Data":"d88df40b4b49cc139a17d3b9552c2f68ad2386882cda4b387aae6e2b3720c0c3"} Jan 05 21:55:03 crc kubenswrapper[4754]: I0105 21:55:03.777624 4754 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-7f4vx" event={"ID":"4706d822-32f3-414d-b635-32cc96ecfde2","Type":"ContainerDied","Data":"a7080bd7e809416ea1a0f8b9b5cc485e9b4d2a9b16abd9b2cb3d88c8468e435d"} Jan 05 21:55:03 crc kubenswrapper[4754]: I0105 21:55:03.777647 4754 scope.go:117] "RemoveContainer" containerID="d88df40b4b49cc139a17d3b9552c2f68ad2386882cda4b387aae6e2b3720c0c3" Jan 05 21:55:03 crc kubenswrapper[4754]: I0105 21:55:03.778807 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7f4vx" Jan 05 21:55:03 crc kubenswrapper[4754]: I0105 21:55:03.827612 4754 scope.go:117] "RemoveContainer" containerID="fddcc6bf73995296a8cf732362ff316329f815697c183e867e44a9166b39f854" Jan 05 21:55:03 crc kubenswrapper[4754]: I0105 21:55:03.831930 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7f4vx"] Jan 05 21:55:03 crc kubenswrapper[4754]: I0105 21:55:03.847684 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7f4vx"] Jan 05 21:55:03 crc kubenswrapper[4754]: I0105 21:55:03.858385 4754 scope.go:117] "RemoveContainer" containerID="680a5117f259f7c2fdeb1d645c6d13f499228850616ccb95b17d926d5521a127" Jan 05 21:55:03 crc kubenswrapper[4754]: I0105 21:55:03.917362 4754 scope.go:117] "RemoveContainer" containerID="d88df40b4b49cc139a17d3b9552c2f68ad2386882cda4b387aae6e2b3720c0c3" Jan 05 21:55:03 crc kubenswrapper[4754]: E0105 21:55:03.918180 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d88df40b4b49cc139a17d3b9552c2f68ad2386882cda4b387aae6e2b3720c0c3\": container with ID starting with d88df40b4b49cc139a17d3b9552c2f68ad2386882cda4b387aae6e2b3720c0c3 not found: ID does not exist" containerID="d88df40b4b49cc139a17d3b9552c2f68ad2386882cda4b387aae6e2b3720c0c3" Jan 05 21:55:03 crc kubenswrapper[4754]: I0105 21:55:03.918227 4754 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d88df40b4b49cc139a17d3b9552c2f68ad2386882cda4b387aae6e2b3720c0c3"} err="failed to get container status \"d88df40b4b49cc139a17d3b9552c2f68ad2386882cda4b387aae6e2b3720c0c3\": rpc error: code = NotFound desc = could not find container \"d88df40b4b49cc139a17d3b9552c2f68ad2386882cda4b387aae6e2b3720c0c3\": container with ID starting with d88df40b4b49cc139a17d3b9552c2f68ad2386882cda4b387aae6e2b3720c0c3 not found: ID does not exist" Jan 05 21:55:03 crc kubenswrapper[4754]: I0105 21:55:03.918257 4754 scope.go:117] "RemoveContainer" containerID="fddcc6bf73995296a8cf732362ff316329f815697c183e867e44a9166b39f854" Jan 05 21:55:03 crc kubenswrapper[4754]: E0105 21:55:03.918843 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fddcc6bf73995296a8cf732362ff316329f815697c183e867e44a9166b39f854\": container with ID starting with fddcc6bf73995296a8cf732362ff316329f815697c183e867e44a9166b39f854 not found: ID does not exist" containerID="fddcc6bf73995296a8cf732362ff316329f815697c183e867e44a9166b39f854" Jan 05 21:55:03 crc kubenswrapper[4754]: I0105 21:55:03.918881 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fddcc6bf73995296a8cf732362ff316329f815697c183e867e44a9166b39f854"} err="failed to get container status \"fddcc6bf73995296a8cf732362ff316329f815697c183e867e44a9166b39f854\": rpc error: code = NotFound desc = could not find container \"fddcc6bf73995296a8cf732362ff316329f815697c183e867e44a9166b39f854\": container with ID starting with fddcc6bf73995296a8cf732362ff316329f815697c183e867e44a9166b39f854 not found: ID does not exist" Jan 05 21:55:03 crc kubenswrapper[4754]: I0105 21:55:03.918910 4754 scope.go:117] "RemoveContainer" containerID="680a5117f259f7c2fdeb1d645c6d13f499228850616ccb95b17d926d5521a127" Jan 05 21:55:03 crc kubenswrapper[4754]: E0105 
21:55:03.919242 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"680a5117f259f7c2fdeb1d645c6d13f499228850616ccb95b17d926d5521a127\": container with ID starting with 680a5117f259f7c2fdeb1d645c6d13f499228850616ccb95b17d926d5521a127 not found: ID does not exist" containerID="680a5117f259f7c2fdeb1d645c6d13f499228850616ccb95b17d926d5521a127" Jan 05 21:55:03 crc kubenswrapper[4754]: I0105 21:55:03.919333 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"680a5117f259f7c2fdeb1d645c6d13f499228850616ccb95b17d926d5521a127"} err="failed to get container status \"680a5117f259f7c2fdeb1d645c6d13f499228850616ccb95b17d926d5521a127\": rpc error: code = NotFound desc = could not find container \"680a5117f259f7c2fdeb1d645c6d13f499228850616ccb95b17d926d5521a127\": container with ID starting with 680a5117f259f7c2fdeb1d645c6d13f499228850616ccb95b17d926d5521a127 not found: ID does not exist" Jan 05 21:55:05 crc kubenswrapper[4754]: I0105 21:55:05.611708 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4706d822-32f3-414d-b635-32cc96ecfde2" path="/var/lib/kubelet/pods/4706d822-32f3-414d-b635-32cc96ecfde2/volumes" Jan 05 21:55:10 crc kubenswrapper[4754]: I0105 21:55:10.588431 4754 scope.go:117] "RemoveContainer" containerID="169d4061b649bc99aa659a822f718adfbc994f64468c16f7e58148dd700ac0f6" Jan 05 21:55:10 crc kubenswrapper[4754]: E0105 21:55:10.589192 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:55:13 crc kubenswrapper[4754]: I0105 21:55:13.988734 
4754 scope.go:117] "RemoveContainer" containerID="cd9534f752bbbae07d725b6747115d7b37d833dd4f4e99c23cf87b907153533c" Jan 05 21:55:14 crc kubenswrapper[4754]: I0105 21:55:14.029245 4754 scope.go:117] "RemoveContainer" containerID="bc291529efbb9669adbe748fdc8b5a2a179c67d1d918aa6b20d3f9976c6459f9" Jan 05 21:55:14 crc kubenswrapper[4754]: I0105 21:55:14.093890 4754 scope.go:117] "RemoveContainer" containerID="448d5ff6505169dd167207443719d4d8049fe080b056d3269d41305bfef08b31" Jan 05 21:55:22 crc kubenswrapper[4754]: I0105 21:55:22.589122 4754 scope.go:117] "RemoveContainer" containerID="169d4061b649bc99aa659a822f718adfbc994f64468c16f7e58148dd700ac0f6" Jan 05 21:55:22 crc kubenswrapper[4754]: E0105 21:55:22.589983 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:55:34 crc kubenswrapper[4754]: I0105 21:55:34.589334 4754 scope.go:117] "RemoveContainer" containerID="169d4061b649bc99aa659a822f718adfbc994f64468c16f7e58148dd700ac0f6" Jan 05 21:55:34 crc kubenswrapper[4754]: E0105 21:55:34.592352 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:55:45 crc kubenswrapper[4754]: I0105 21:55:45.589910 4754 scope.go:117] "RemoveContainer" containerID="169d4061b649bc99aa659a822f718adfbc994f64468c16f7e58148dd700ac0f6" 
Jan 05 21:55:45 crc kubenswrapper[4754]: E0105 21:55:45.591225 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pkzls_openshift-machine-config-operator(1ce145f2-f010-4086-963c-23e68ff9e280)\"" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" Jan 05 21:56:00 crc kubenswrapper[4754]: I0105 21:56:00.589090 4754 scope.go:117] "RemoveContainer" containerID="169d4061b649bc99aa659a822f718adfbc994f64468c16f7e58148dd700ac0f6" Jan 05 21:56:01 crc kubenswrapper[4754]: I0105 21:56:01.552565 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" event={"ID":"1ce145f2-f010-4086-963c-23e68ff9e280","Type":"ContainerStarted","Data":"37045714e582d30c52968c80ecc482bb680663d10677f493a6227dd16ea8ea29"} Jan 05 21:56:14 crc kubenswrapper[4754]: I0105 21:56:14.212248 4754 scope.go:117] "RemoveContainer" containerID="87adc1c0e683eb91842260f1c3c4aa6a313498401408b8305b6ecdba5106295f" Jan 05 21:56:20 crc kubenswrapper[4754]: I0105 21:56:20.833649 4754 generic.go:334] "Generic (PLEG): container finished" podID="a10baf0b-ad50-44a7-8a8c-f5efddadb26e" containerID="a6f68353b6e39b3e4801268b600ec0dad7272a7aab020f4682fa939e97fe76f9" exitCode=0 Jan 05 21:56:20 crc kubenswrapper[4754]: I0105 21:56:20.833778 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zklnc/must-gather-zbf9q" event={"ID":"a10baf0b-ad50-44a7-8a8c-f5efddadb26e","Type":"ContainerDied","Data":"a6f68353b6e39b3e4801268b600ec0dad7272a7aab020f4682fa939e97fe76f9"} Jan 05 21:56:20 crc kubenswrapper[4754]: I0105 21:56:20.835062 4754 scope.go:117] "RemoveContainer" containerID="a6f68353b6e39b3e4801268b600ec0dad7272a7aab020f4682fa939e97fe76f9" Jan 05 21:56:21 crc kubenswrapper[4754]: I0105 
21:56:21.822852 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-zklnc_must-gather-zbf9q_a10baf0b-ad50-44a7-8a8c-f5efddadb26e/gather/0.log" Jan 05 21:56:30 crc kubenswrapper[4754]: I0105 21:56:30.307366 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-zklnc/must-gather-zbf9q"] Jan 05 21:56:30 crc kubenswrapper[4754]: I0105 21:56:30.308143 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-zklnc/must-gather-zbf9q" podUID="a10baf0b-ad50-44a7-8a8c-f5efddadb26e" containerName="copy" containerID="cri-o://2df2485c9bff4641a1fbcc87a1a7ef597ac677cc776f7d85120f7e56deb9c721" gracePeriod=2 Jan 05 21:56:30 crc kubenswrapper[4754]: I0105 21:56:30.326381 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-zklnc/must-gather-zbf9q"] Jan 05 21:56:30 crc kubenswrapper[4754]: I0105 21:56:30.864267 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-zklnc_must-gather-zbf9q_a10baf0b-ad50-44a7-8a8c-f5efddadb26e/copy/0.log" Jan 05 21:56:30 crc kubenswrapper[4754]: I0105 21:56:30.865766 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zklnc/must-gather-zbf9q" Jan 05 21:56:30 crc kubenswrapper[4754]: I0105 21:56:30.967728 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-zklnc_must-gather-zbf9q_a10baf0b-ad50-44a7-8a8c-f5efddadb26e/copy/0.log" Jan 05 21:56:30 crc kubenswrapper[4754]: I0105 21:56:30.968109 4754 generic.go:334] "Generic (PLEG): container finished" podID="a10baf0b-ad50-44a7-8a8c-f5efddadb26e" containerID="2df2485c9bff4641a1fbcc87a1a7ef597ac677cc776f7d85120f7e56deb9c721" exitCode=143 Jan 05 21:56:30 crc kubenswrapper[4754]: I0105 21:56:30.968157 4754 scope.go:117] "RemoveContainer" containerID="2df2485c9bff4641a1fbcc87a1a7ef597ac677cc776f7d85120f7e56deb9c721" Jan 05 21:56:30 crc kubenswrapper[4754]: I0105 21:56:30.968173 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zklnc/must-gather-zbf9q" Jan 05 21:56:30 crc kubenswrapper[4754]: I0105 21:56:30.986241 4754 scope.go:117] "RemoveContainer" containerID="a6f68353b6e39b3e4801268b600ec0dad7272a7aab020f4682fa939e97fe76f9" Jan 05 21:56:31 crc kubenswrapper[4754]: I0105 21:56:31.021183 4754 scope.go:117] "RemoveContainer" containerID="2df2485c9bff4641a1fbcc87a1a7ef597ac677cc776f7d85120f7e56deb9c721" Jan 05 21:56:31 crc kubenswrapper[4754]: E0105 21:56:31.021824 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2df2485c9bff4641a1fbcc87a1a7ef597ac677cc776f7d85120f7e56deb9c721\": container with ID starting with 2df2485c9bff4641a1fbcc87a1a7ef597ac677cc776f7d85120f7e56deb9c721 not found: ID does not exist" containerID="2df2485c9bff4641a1fbcc87a1a7ef597ac677cc776f7d85120f7e56deb9c721" Jan 05 21:56:31 crc kubenswrapper[4754]: I0105 21:56:31.021894 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2df2485c9bff4641a1fbcc87a1a7ef597ac677cc776f7d85120f7e56deb9c721"} err="failed 
to get container status \"2df2485c9bff4641a1fbcc87a1a7ef597ac677cc776f7d85120f7e56deb9c721\": rpc error: code = NotFound desc = could not find container \"2df2485c9bff4641a1fbcc87a1a7ef597ac677cc776f7d85120f7e56deb9c721\": container with ID starting with 2df2485c9bff4641a1fbcc87a1a7ef597ac677cc776f7d85120f7e56deb9c721 not found: ID does not exist" Jan 05 21:56:31 crc kubenswrapper[4754]: I0105 21:56:31.021938 4754 scope.go:117] "RemoveContainer" containerID="a6f68353b6e39b3e4801268b600ec0dad7272a7aab020f4682fa939e97fe76f9" Jan 05 21:56:31 crc kubenswrapper[4754]: E0105 21:56:31.022337 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6f68353b6e39b3e4801268b600ec0dad7272a7aab020f4682fa939e97fe76f9\": container with ID starting with a6f68353b6e39b3e4801268b600ec0dad7272a7aab020f4682fa939e97fe76f9 not found: ID does not exist" containerID="a6f68353b6e39b3e4801268b600ec0dad7272a7aab020f4682fa939e97fe76f9" Jan 05 21:56:31 crc kubenswrapper[4754]: I0105 21:56:31.022384 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6f68353b6e39b3e4801268b600ec0dad7272a7aab020f4682fa939e97fe76f9"} err="failed to get container status \"a6f68353b6e39b3e4801268b600ec0dad7272a7aab020f4682fa939e97fe76f9\": rpc error: code = NotFound desc = could not find container \"a6f68353b6e39b3e4801268b600ec0dad7272a7aab020f4682fa939e97fe76f9\": container with ID starting with a6f68353b6e39b3e4801268b600ec0dad7272a7aab020f4682fa939e97fe76f9 not found: ID does not exist" Jan 05 21:56:31 crc kubenswrapper[4754]: I0105 21:56:31.022545 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bskbt\" (UniqueName: \"kubernetes.io/projected/a10baf0b-ad50-44a7-8a8c-f5efddadb26e-kube-api-access-bskbt\") pod \"a10baf0b-ad50-44a7-8a8c-f5efddadb26e\" (UID: \"a10baf0b-ad50-44a7-8a8c-f5efddadb26e\") " Jan 05 21:56:31 crc kubenswrapper[4754]: 
I0105 21:56:31.022706 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a10baf0b-ad50-44a7-8a8c-f5efddadb26e-must-gather-output\") pod \"a10baf0b-ad50-44a7-8a8c-f5efddadb26e\" (UID: \"a10baf0b-ad50-44a7-8a8c-f5efddadb26e\") " Jan 05 21:56:31 crc kubenswrapper[4754]: I0105 21:56:31.029416 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a10baf0b-ad50-44a7-8a8c-f5efddadb26e-kube-api-access-bskbt" (OuterVolumeSpecName: "kube-api-access-bskbt") pod "a10baf0b-ad50-44a7-8a8c-f5efddadb26e" (UID: "a10baf0b-ad50-44a7-8a8c-f5efddadb26e"). InnerVolumeSpecName "kube-api-access-bskbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:56:31 crc kubenswrapper[4754]: I0105 21:56:31.126558 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bskbt\" (UniqueName: \"kubernetes.io/projected/a10baf0b-ad50-44a7-8a8c-f5efddadb26e-kube-api-access-bskbt\") on node \"crc\" DevicePath \"\"" Jan 05 21:56:31 crc kubenswrapper[4754]: I0105 21:56:31.199385 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a10baf0b-ad50-44a7-8a8c-f5efddadb26e-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "a10baf0b-ad50-44a7-8a8c-f5efddadb26e" (UID: "a10baf0b-ad50-44a7-8a8c-f5efddadb26e"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:56:31 crc kubenswrapper[4754]: I0105 21:56:31.229377 4754 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a10baf0b-ad50-44a7-8a8c-f5efddadb26e-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 05 21:56:31 crc kubenswrapper[4754]: I0105 21:56:31.609455 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a10baf0b-ad50-44a7-8a8c-f5efddadb26e" path="/var/lib/kubelet/pods/a10baf0b-ad50-44a7-8a8c-f5efddadb26e/volumes" Jan 05 21:57:14 crc kubenswrapper[4754]: I0105 21:57:14.315017 4754 scope.go:117] "RemoveContainer" containerID="8b46fccf1c324559a3f26204cf769b6872c3c6dfe6edbab88b673b1d3d9c3eda" Jan 05 21:58:18 crc kubenswrapper[4754]: I0105 21:58:18.109580 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:58:18 crc kubenswrapper[4754]: I0105 21:58:18.110747 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:58:48 crc kubenswrapper[4754]: I0105 21:58:48.109649 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:58:48 crc kubenswrapper[4754]: I0105 21:58:48.110192 4754 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:59:18 crc kubenswrapper[4754]: I0105 21:59:18.108884 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:59:18 crc kubenswrapper[4754]: I0105 21:59:18.109692 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:59:18 crc kubenswrapper[4754]: I0105 21:59:18.109761 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" Jan 05 21:59:18 crc kubenswrapper[4754]: I0105 21:59:18.111415 4754 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"37045714e582d30c52968c80ecc482bb680663d10677f493a6227dd16ea8ea29"} pod="openshift-machine-config-operator/machine-config-daemon-pkzls" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 05 21:59:18 crc kubenswrapper[4754]: I0105 21:59:18.111554 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" 
containerID="cri-o://37045714e582d30c52968c80ecc482bb680663d10677f493a6227dd16ea8ea29" gracePeriod=600 Jan 05 21:59:19 crc kubenswrapper[4754]: I0105 21:59:19.238191 4754 generic.go:334] "Generic (PLEG): container finished" podID="1ce145f2-f010-4086-963c-23e68ff9e280" containerID="37045714e582d30c52968c80ecc482bb680663d10677f493a6227dd16ea8ea29" exitCode=0 Jan 05 21:59:19 crc kubenswrapper[4754]: I0105 21:59:19.238264 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" event={"ID":"1ce145f2-f010-4086-963c-23e68ff9e280","Type":"ContainerDied","Data":"37045714e582d30c52968c80ecc482bb680663d10677f493a6227dd16ea8ea29"} Jan 05 21:59:19 crc kubenswrapper[4754]: I0105 21:59:19.238810 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" event={"ID":"1ce145f2-f010-4086-963c-23e68ff9e280","Type":"ContainerStarted","Data":"5ba60a89b5e50aa5dc17187ad1556228b2a0604d7d13e0aeb4d269b2c213ec9b"} Jan 05 21:59:19 crc kubenswrapper[4754]: I0105 21:59:19.238852 4754 scope.go:117] "RemoveContainer" containerID="169d4061b649bc99aa659a822f718adfbc994f64468c16f7e58148dd700ac0f6" Jan 05 21:59:47 crc kubenswrapper[4754]: I0105 21:59:47.115701 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-82ljn"] Jan 05 21:59:47 crc kubenswrapper[4754]: E0105 21:59:47.117231 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a10baf0b-ad50-44a7-8a8c-f5efddadb26e" containerName="copy" Jan 05 21:59:47 crc kubenswrapper[4754]: I0105 21:59:47.117255 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="a10baf0b-ad50-44a7-8a8c-f5efddadb26e" containerName="copy" Jan 05 21:59:47 crc kubenswrapper[4754]: E0105 21:59:47.117352 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4706d822-32f3-414d-b635-32cc96ecfde2" containerName="extract-utilities" Jan 05 21:59:47 crc 
kubenswrapper[4754]: I0105 21:59:47.117366 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="4706d822-32f3-414d-b635-32cc96ecfde2" containerName="extract-utilities" Jan 05 21:59:47 crc kubenswrapper[4754]: E0105 21:59:47.117384 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4706d822-32f3-414d-b635-32cc96ecfde2" containerName="extract-content" Jan 05 21:59:47 crc kubenswrapper[4754]: I0105 21:59:47.117395 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="4706d822-32f3-414d-b635-32cc96ecfde2" containerName="extract-content" Jan 05 21:59:47 crc kubenswrapper[4754]: E0105 21:59:47.117436 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4706d822-32f3-414d-b635-32cc96ecfde2" containerName="registry-server" Jan 05 21:59:47 crc kubenswrapper[4754]: I0105 21:59:47.117447 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="4706d822-32f3-414d-b635-32cc96ecfde2" containerName="registry-server" Jan 05 21:59:47 crc kubenswrapper[4754]: E0105 21:59:47.117467 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a10baf0b-ad50-44a7-8a8c-f5efddadb26e" containerName="gather" Jan 05 21:59:47 crc kubenswrapper[4754]: I0105 21:59:47.117478 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="a10baf0b-ad50-44a7-8a8c-f5efddadb26e" containerName="gather" Jan 05 21:59:47 crc kubenswrapper[4754]: I0105 21:59:47.117877 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="4706d822-32f3-414d-b635-32cc96ecfde2" containerName="registry-server" Jan 05 21:59:47 crc kubenswrapper[4754]: I0105 21:59:47.117905 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="a10baf0b-ad50-44a7-8a8c-f5efddadb26e" containerName="gather" Jan 05 21:59:47 crc kubenswrapper[4754]: I0105 21:59:47.117951 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="a10baf0b-ad50-44a7-8a8c-f5efddadb26e" containerName="copy" Jan 05 21:59:47 crc kubenswrapper[4754]: I0105 21:59:47.120790 
4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-82ljn" Jan 05 21:59:47 crc kubenswrapper[4754]: I0105 21:59:47.130153 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-82ljn"] Jan 05 21:59:47 crc kubenswrapper[4754]: I0105 21:59:47.305660 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7wpk\" (UniqueName: \"kubernetes.io/projected/fcbc4997-31ea-4cd1-bf31-84a32330d5a9-kube-api-access-f7wpk\") pod \"redhat-marketplace-82ljn\" (UID: \"fcbc4997-31ea-4cd1-bf31-84a32330d5a9\") " pod="openshift-marketplace/redhat-marketplace-82ljn" Jan 05 21:59:47 crc kubenswrapper[4754]: I0105 21:59:47.305732 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcbc4997-31ea-4cd1-bf31-84a32330d5a9-catalog-content\") pod \"redhat-marketplace-82ljn\" (UID: \"fcbc4997-31ea-4cd1-bf31-84a32330d5a9\") " pod="openshift-marketplace/redhat-marketplace-82ljn" Jan 05 21:59:47 crc kubenswrapper[4754]: I0105 21:59:47.305775 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcbc4997-31ea-4cd1-bf31-84a32330d5a9-utilities\") pod \"redhat-marketplace-82ljn\" (UID: \"fcbc4997-31ea-4cd1-bf31-84a32330d5a9\") " pod="openshift-marketplace/redhat-marketplace-82ljn" Jan 05 21:59:47 crc kubenswrapper[4754]: I0105 21:59:47.408595 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcbc4997-31ea-4cd1-bf31-84a32330d5a9-utilities\") pod \"redhat-marketplace-82ljn\" (UID: \"fcbc4997-31ea-4cd1-bf31-84a32330d5a9\") " pod="openshift-marketplace/redhat-marketplace-82ljn" Jan 05 21:59:47 crc kubenswrapper[4754]: I0105 21:59:47.408961 4754 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7wpk\" (UniqueName: \"kubernetes.io/projected/fcbc4997-31ea-4cd1-bf31-84a32330d5a9-kube-api-access-f7wpk\") pod \"redhat-marketplace-82ljn\" (UID: \"fcbc4997-31ea-4cd1-bf31-84a32330d5a9\") " pod="openshift-marketplace/redhat-marketplace-82ljn" Jan 05 21:59:47 crc kubenswrapper[4754]: I0105 21:59:47.409042 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcbc4997-31ea-4cd1-bf31-84a32330d5a9-catalog-content\") pod \"redhat-marketplace-82ljn\" (UID: \"fcbc4997-31ea-4cd1-bf31-84a32330d5a9\") " pod="openshift-marketplace/redhat-marketplace-82ljn" Jan 05 21:59:47 crc kubenswrapper[4754]: I0105 21:59:47.409215 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcbc4997-31ea-4cd1-bf31-84a32330d5a9-utilities\") pod \"redhat-marketplace-82ljn\" (UID: \"fcbc4997-31ea-4cd1-bf31-84a32330d5a9\") " pod="openshift-marketplace/redhat-marketplace-82ljn" Jan 05 21:59:47 crc kubenswrapper[4754]: I0105 21:59:47.409585 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcbc4997-31ea-4cd1-bf31-84a32330d5a9-catalog-content\") pod \"redhat-marketplace-82ljn\" (UID: \"fcbc4997-31ea-4cd1-bf31-84a32330d5a9\") " pod="openshift-marketplace/redhat-marketplace-82ljn" Jan 05 21:59:47 crc kubenswrapper[4754]: I0105 21:59:47.436465 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7wpk\" (UniqueName: \"kubernetes.io/projected/fcbc4997-31ea-4cd1-bf31-84a32330d5a9-kube-api-access-f7wpk\") pod \"redhat-marketplace-82ljn\" (UID: \"fcbc4997-31ea-4cd1-bf31-84a32330d5a9\") " pod="openshift-marketplace/redhat-marketplace-82ljn" Jan 05 21:59:47 crc kubenswrapper[4754]: I0105 21:59:47.455906 4754 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-82ljn" Jan 05 21:59:47 crc kubenswrapper[4754]: I0105 21:59:47.942246 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-82ljn"] Jan 05 21:59:48 crc kubenswrapper[4754]: I0105 21:59:48.617636 4754 generic.go:334] "Generic (PLEG): container finished" podID="fcbc4997-31ea-4cd1-bf31-84a32330d5a9" containerID="dad09a805224ba97e72dd9319ab9fa0b80b292618b8cd6c89eb557a649afc80e" exitCode=0 Jan 05 21:59:48 crc kubenswrapper[4754]: I0105 21:59:48.617690 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-82ljn" event={"ID":"fcbc4997-31ea-4cd1-bf31-84a32330d5a9","Type":"ContainerDied","Data":"dad09a805224ba97e72dd9319ab9fa0b80b292618b8cd6c89eb557a649afc80e"} Jan 05 21:59:48 crc kubenswrapper[4754]: I0105 21:59:48.617956 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-82ljn" event={"ID":"fcbc4997-31ea-4cd1-bf31-84a32330d5a9","Type":"ContainerStarted","Data":"b80a7d45a159d44c2919cc08961d87c21bd267fdea7190ca1c93c43ae6d44252"} Jan 05 21:59:48 crc kubenswrapper[4754]: I0105 21:59:48.625061 4754 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 05 21:59:49 crc kubenswrapper[4754]: I0105 21:59:49.644979 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-82ljn" event={"ID":"fcbc4997-31ea-4cd1-bf31-84a32330d5a9","Type":"ContainerStarted","Data":"e5be7fc10db57c2f32824e7b1b553dc7ea8268306b464f3b9b009e109c9c5034"} Jan 05 21:59:50 crc kubenswrapper[4754]: I0105 21:59:50.659891 4754 generic.go:334] "Generic (PLEG): container finished" podID="fcbc4997-31ea-4cd1-bf31-84a32330d5a9" containerID="e5be7fc10db57c2f32824e7b1b553dc7ea8268306b464f3b9b009e109c9c5034" exitCode=0 Jan 05 21:59:50 crc kubenswrapper[4754]: I0105 21:59:50.659960 4754 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-82ljn" event={"ID":"fcbc4997-31ea-4cd1-bf31-84a32330d5a9","Type":"ContainerDied","Data":"e5be7fc10db57c2f32824e7b1b553dc7ea8268306b464f3b9b009e109c9c5034"} Jan 05 21:59:51 crc kubenswrapper[4754]: I0105 21:59:51.762079 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-82ljn" event={"ID":"fcbc4997-31ea-4cd1-bf31-84a32330d5a9","Type":"ContainerStarted","Data":"1681b1d697e6b2ae610ab1527ef2ed2e0628c2e8df6a0c2dfc6dcb793748786f"} Jan 05 21:59:52 crc kubenswrapper[4754]: I0105 21:59:52.004864 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-82ljn" podStartSLOduration=2.546252515 podStartE2EDuration="5.004845579s" podCreationTimestamp="2026-01-05 21:59:47 +0000 UTC" firstStartedPulling="2026-01-05 21:59:48.62479891 +0000 UTC m=+6875.333982794" lastFinishedPulling="2026-01-05 21:59:51.083391974 +0000 UTC m=+6877.792575858" observedRunningTime="2026-01-05 21:59:51.996909654 +0000 UTC m=+6878.706093538" watchObservedRunningTime="2026-01-05 21:59:52.004845579 +0000 UTC m=+6878.714029453" Jan 05 21:59:57 crc kubenswrapper[4754]: I0105 21:59:57.456557 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-82ljn" Jan 05 21:59:57 crc kubenswrapper[4754]: I0105 21:59:57.457210 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-82ljn" Jan 05 21:59:57 crc kubenswrapper[4754]: I0105 21:59:57.540902 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-82ljn" Jan 05 21:59:57 crc kubenswrapper[4754]: I0105 21:59:57.915567 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-82ljn" Jan 05 21:59:57 crc kubenswrapper[4754]: I0105 
21:59:57.976876 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-82ljn"] Jan 05 21:59:59 crc kubenswrapper[4754]: I0105 21:59:59.872082 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-82ljn" podUID="fcbc4997-31ea-4cd1-bf31-84a32330d5a9" containerName="registry-server" containerID="cri-o://1681b1d697e6b2ae610ab1527ef2ed2e0628c2e8df6a0c2dfc6dcb793748786f" gracePeriod=2 Jan 05 22:00:00 crc kubenswrapper[4754]: I0105 22:00:00.169718 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460840-2t7m8"] Jan 05 22:00:00 crc kubenswrapper[4754]: I0105 22:00:00.171888 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460840-2t7m8" Jan 05 22:00:00 crc kubenswrapper[4754]: I0105 22:00:00.175871 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 05 22:00:00 crc kubenswrapper[4754]: I0105 22:00:00.176135 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 05 22:00:00 crc kubenswrapper[4754]: I0105 22:00:00.185380 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460840-2t7m8"] Jan 05 22:00:00 crc kubenswrapper[4754]: I0105 22:00:00.341753 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc5208e6-ea37-488a-a45c-595f0ad0f9dd-config-volume\") pod \"collect-profiles-29460840-2t7m8\" (UID: \"fc5208e6-ea37-488a-a45c-595f0ad0f9dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460840-2t7m8" Jan 05 22:00:00 crc kubenswrapper[4754]: I0105 22:00:00.342079 
4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc5208e6-ea37-488a-a45c-595f0ad0f9dd-secret-volume\") pod \"collect-profiles-29460840-2t7m8\" (UID: \"fc5208e6-ea37-488a-a45c-595f0ad0f9dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460840-2t7m8" Jan 05 22:00:00 crc kubenswrapper[4754]: I0105 22:00:00.342116 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwvsp\" (UniqueName: \"kubernetes.io/projected/fc5208e6-ea37-488a-a45c-595f0ad0f9dd-kube-api-access-qwvsp\") pod \"collect-profiles-29460840-2t7m8\" (UID: \"fc5208e6-ea37-488a-a45c-595f0ad0f9dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460840-2t7m8" Jan 05 22:00:00 crc kubenswrapper[4754]: I0105 22:00:00.444267 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc5208e6-ea37-488a-a45c-595f0ad0f9dd-config-volume\") pod \"collect-profiles-29460840-2t7m8\" (UID: \"fc5208e6-ea37-488a-a45c-595f0ad0f9dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460840-2t7m8" Jan 05 22:00:00 crc kubenswrapper[4754]: I0105 22:00:00.444382 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc5208e6-ea37-488a-a45c-595f0ad0f9dd-secret-volume\") pod \"collect-profiles-29460840-2t7m8\" (UID: \"fc5208e6-ea37-488a-a45c-595f0ad0f9dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460840-2t7m8" Jan 05 22:00:00 crc kubenswrapper[4754]: I0105 22:00:00.444425 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwvsp\" (UniqueName: \"kubernetes.io/projected/fc5208e6-ea37-488a-a45c-595f0ad0f9dd-kube-api-access-qwvsp\") pod \"collect-profiles-29460840-2t7m8\" (UID: 
\"fc5208e6-ea37-488a-a45c-595f0ad0f9dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460840-2t7m8" Jan 05 22:00:00 crc kubenswrapper[4754]: I0105 22:00:00.445101 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc5208e6-ea37-488a-a45c-595f0ad0f9dd-config-volume\") pod \"collect-profiles-29460840-2t7m8\" (UID: \"fc5208e6-ea37-488a-a45c-595f0ad0f9dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460840-2t7m8" Jan 05 22:00:00 crc kubenswrapper[4754]: I0105 22:00:00.452357 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc5208e6-ea37-488a-a45c-595f0ad0f9dd-secret-volume\") pod \"collect-profiles-29460840-2t7m8\" (UID: \"fc5208e6-ea37-488a-a45c-595f0ad0f9dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460840-2t7m8" Jan 05 22:00:00 crc kubenswrapper[4754]: I0105 22:00:00.461326 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwvsp\" (UniqueName: \"kubernetes.io/projected/fc5208e6-ea37-488a-a45c-595f0ad0f9dd-kube-api-access-qwvsp\") pod \"collect-profiles-29460840-2t7m8\" (UID: \"fc5208e6-ea37-488a-a45c-595f0ad0f9dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460840-2t7m8" Jan 05 22:00:00 crc kubenswrapper[4754]: I0105 22:00:00.498353 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460840-2t7m8" Jan 05 22:00:00 crc kubenswrapper[4754]: I0105 22:00:00.609596 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-82ljn" Jan 05 22:00:00 crc kubenswrapper[4754]: I0105 22:00:00.756216 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7wpk\" (UniqueName: \"kubernetes.io/projected/fcbc4997-31ea-4cd1-bf31-84a32330d5a9-kube-api-access-f7wpk\") pod \"fcbc4997-31ea-4cd1-bf31-84a32330d5a9\" (UID: \"fcbc4997-31ea-4cd1-bf31-84a32330d5a9\") " Jan 05 22:00:00 crc kubenswrapper[4754]: I0105 22:00:00.756487 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcbc4997-31ea-4cd1-bf31-84a32330d5a9-catalog-content\") pod \"fcbc4997-31ea-4cd1-bf31-84a32330d5a9\" (UID: \"fcbc4997-31ea-4cd1-bf31-84a32330d5a9\") " Jan 05 22:00:00 crc kubenswrapper[4754]: I0105 22:00:00.756515 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcbc4997-31ea-4cd1-bf31-84a32330d5a9-utilities\") pod \"fcbc4997-31ea-4cd1-bf31-84a32330d5a9\" (UID: \"fcbc4997-31ea-4cd1-bf31-84a32330d5a9\") " Jan 05 22:00:00 crc kubenswrapper[4754]: I0105 22:00:00.757619 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcbc4997-31ea-4cd1-bf31-84a32330d5a9-utilities" (OuterVolumeSpecName: "utilities") pod "fcbc4997-31ea-4cd1-bf31-84a32330d5a9" (UID: "fcbc4997-31ea-4cd1-bf31-84a32330d5a9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:00:00 crc kubenswrapper[4754]: I0105 22:00:00.773073 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcbc4997-31ea-4cd1-bf31-84a32330d5a9-kube-api-access-f7wpk" (OuterVolumeSpecName: "kube-api-access-f7wpk") pod "fcbc4997-31ea-4cd1-bf31-84a32330d5a9" (UID: "fcbc4997-31ea-4cd1-bf31-84a32330d5a9"). InnerVolumeSpecName "kube-api-access-f7wpk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:00:00 crc kubenswrapper[4754]: I0105 22:00:00.781688 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcbc4997-31ea-4cd1-bf31-84a32330d5a9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fcbc4997-31ea-4cd1-bf31-84a32330d5a9" (UID: "fcbc4997-31ea-4cd1-bf31-84a32330d5a9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:00:00 crc kubenswrapper[4754]: I0105 22:00:00.860528 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7wpk\" (UniqueName: \"kubernetes.io/projected/fcbc4997-31ea-4cd1-bf31-84a32330d5a9-kube-api-access-f7wpk\") on node \"crc\" DevicePath \"\"" Jan 05 22:00:00 crc kubenswrapper[4754]: I0105 22:00:00.860564 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcbc4997-31ea-4cd1-bf31-84a32330d5a9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 22:00:00 crc kubenswrapper[4754]: I0105 22:00:00.860577 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcbc4997-31ea-4cd1-bf31-84a32330d5a9-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 22:00:00 crc kubenswrapper[4754]: I0105 22:00:00.890336 4754 generic.go:334] "Generic (PLEG): container finished" podID="fcbc4997-31ea-4cd1-bf31-84a32330d5a9" containerID="1681b1d697e6b2ae610ab1527ef2ed2e0628c2e8df6a0c2dfc6dcb793748786f" exitCode=0 Jan 05 22:00:00 crc kubenswrapper[4754]: I0105 22:00:00.890408 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-82ljn" event={"ID":"fcbc4997-31ea-4cd1-bf31-84a32330d5a9","Type":"ContainerDied","Data":"1681b1d697e6b2ae610ab1527ef2ed2e0628c2e8df6a0c2dfc6dcb793748786f"} Jan 05 22:00:00 crc kubenswrapper[4754]: I0105 22:00:00.890434 4754 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-82ljn" Jan 05 22:00:00 crc kubenswrapper[4754]: I0105 22:00:00.890856 4754 scope.go:117] "RemoveContainer" containerID="1681b1d697e6b2ae610ab1527ef2ed2e0628c2e8df6a0c2dfc6dcb793748786f" Jan 05 22:00:00 crc kubenswrapper[4754]: I0105 22:00:00.894551 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-82ljn" event={"ID":"fcbc4997-31ea-4cd1-bf31-84a32330d5a9","Type":"ContainerDied","Data":"b80a7d45a159d44c2919cc08961d87c21bd267fdea7190ca1c93c43ae6d44252"} Jan 05 22:00:00 crc kubenswrapper[4754]: I0105 22:00:00.925517 4754 scope.go:117] "RemoveContainer" containerID="e5be7fc10db57c2f32824e7b1b553dc7ea8268306b464f3b9b009e109c9c5034" Jan 05 22:00:00 crc kubenswrapper[4754]: I0105 22:00:00.931737 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-82ljn"] Jan 05 22:00:00 crc kubenswrapper[4754]: I0105 22:00:00.944012 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-82ljn"] Jan 05 22:00:00 crc kubenswrapper[4754]: I0105 22:00:00.948435 4754 scope.go:117] "RemoveContainer" containerID="dad09a805224ba97e72dd9319ab9fa0b80b292618b8cd6c89eb557a649afc80e" Jan 05 22:00:00 crc kubenswrapper[4754]: I0105 22:00:00.988270 4754 scope.go:117] "RemoveContainer" containerID="1681b1d697e6b2ae610ab1527ef2ed2e0628c2e8df6a0c2dfc6dcb793748786f" Jan 05 22:00:00 crc kubenswrapper[4754]: E0105 22:00:00.991913 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1681b1d697e6b2ae610ab1527ef2ed2e0628c2e8df6a0c2dfc6dcb793748786f\": container with ID starting with 1681b1d697e6b2ae610ab1527ef2ed2e0628c2e8df6a0c2dfc6dcb793748786f not found: ID does not exist" containerID="1681b1d697e6b2ae610ab1527ef2ed2e0628c2e8df6a0c2dfc6dcb793748786f" Jan 05 22:00:00 crc kubenswrapper[4754]: I0105 22:00:00.991956 4754 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1681b1d697e6b2ae610ab1527ef2ed2e0628c2e8df6a0c2dfc6dcb793748786f"} err="failed to get container status \"1681b1d697e6b2ae610ab1527ef2ed2e0628c2e8df6a0c2dfc6dcb793748786f\": rpc error: code = NotFound desc = could not find container \"1681b1d697e6b2ae610ab1527ef2ed2e0628c2e8df6a0c2dfc6dcb793748786f\": container with ID starting with 1681b1d697e6b2ae610ab1527ef2ed2e0628c2e8df6a0c2dfc6dcb793748786f not found: ID does not exist" Jan 05 22:00:00 crc kubenswrapper[4754]: I0105 22:00:00.991986 4754 scope.go:117] "RemoveContainer" containerID="e5be7fc10db57c2f32824e7b1b553dc7ea8268306b464f3b9b009e109c9c5034" Jan 05 22:00:00 crc kubenswrapper[4754]: E0105 22:00:00.992448 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5be7fc10db57c2f32824e7b1b553dc7ea8268306b464f3b9b009e109c9c5034\": container with ID starting with e5be7fc10db57c2f32824e7b1b553dc7ea8268306b464f3b9b009e109c9c5034 not found: ID does not exist" containerID="e5be7fc10db57c2f32824e7b1b553dc7ea8268306b464f3b9b009e109c9c5034" Jan 05 22:00:00 crc kubenswrapper[4754]: I0105 22:00:00.992476 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5be7fc10db57c2f32824e7b1b553dc7ea8268306b464f3b9b009e109c9c5034"} err="failed to get container status \"e5be7fc10db57c2f32824e7b1b553dc7ea8268306b464f3b9b009e109c9c5034\": rpc error: code = NotFound desc = could not find container \"e5be7fc10db57c2f32824e7b1b553dc7ea8268306b464f3b9b009e109c9c5034\": container with ID starting with e5be7fc10db57c2f32824e7b1b553dc7ea8268306b464f3b9b009e109c9c5034 not found: ID does not exist" Jan 05 22:00:00 crc kubenswrapper[4754]: I0105 22:00:00.992494 4754 scope.go:117] "RemoveContainer" containerID="dad09a805224ba97e72dd9319ab9fa0b80b292618b8cd6c89eb557a649afc80e" Jan 05 22:00:00 crc kubenswrapper[4754]: E0105 
22:00:00.992817 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dad09a805224ba97e72dd9319ab9fa0b80b292618b8cd6c89eb557a649afc80e\": container with ID starting with dad09a805224ba97e72dd9319ab9fa0b80b292618b8cd6c89eb557a649afc80e not found: ID does not exist" containerID="dad09a805224ba97e72dd9319ab9fa0b80b292618b8cd6c89eb557a649afc80e" Jan 05 22:00:00 crc kubenswrapper[4754]: I0105 22:00:00.992839 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dad09a805224ba97e72dd9319ab9fa0b80b292618b8cd6c89eb557a649afc80e"} err="failed to get container status \"dad09a805224ba97e72dd9319ab9fa0b80b292618b8cd6c89eb557a649afc80e\": rpc error: code = NotFound desc = could not find container \"dad09a805224ba97e72dd9319ab9fa0b80b292618b8cd6c89eb557a649afc80e\": container with ID starting with dad09a805224ba97e72dd9319ab9fa0b80b292618b8cd6c89eb557a649afc80e not found: ID does not exist" Jan 05 22:00:01 crc kubenswrapper[4754]: I0105 22:00:01.002498 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460840-2t7m8"] Jan 05 22:00:01 crc kubenswrapper[4754]: I0105 22:00:01.613664 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcbc4997-31ea-4cd1-bf31-84a32330d5a9" path="/var/lib/kubelet/pods/fcbc4997-31ea-4cd1-bf31-84a32330d5a9/volumes" Jan 05 22:00:01 crc kubenswrapper[4754]: I0105 22:00:01.909595 4754 generic.go:334] "Generic (PLEG): container finished" podID="fc5208e6-ea37-488a-a45c-595f0ad0f9dd" containerID="5fdcd39ac0cc31c1511118a3801818455d556355ad304f1d932ef3f0269427ee" exitCode=0 Jan 05 22:00:01 crc kubenswrapper[4754]: I0105 22:00:01.909733 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460840-2t7m8" 
event={"ID":"fc5208e6-ea37-488a-a45c-595f0ad0f9dd","Type":"ContainerDied","Data":"5fdcd39ac0cc31c1511118a3801818455d556355ad304f1d932ef3f0269427ee"} Jan 05 22:00:01 crc kubenswrapper[4754]: I0105 22:00:01.909830 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460840-2t7m8" event={"ID":"fc5208e6-ea37-488a-a45c-595f0ad0f9dd","Type":"ContainerStarted","Data":"e6d207942c61fdfa49dbf3fbf7dfd645a3e21d4c1afa37f3efd012d322f1a590"} Jan 05 22:00:03 crc kubenswrapper[4754]: I0105 22:00:03.392584 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460840-2t7m8" Jan 05 22:00:03 crc kubenswrapper[4754]: I0105 22:00:03.436068 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwvsp\" (UniqueName: \"kubernetes.io/projected/fc5208e6-ea37-488a-a45c-595f0ad0f9dd-kube-api-access-qwvsp\") pod \"fc5208e6-ea37-488a-a45c-595f0ad0f9dd\" (UID: \"fc5208e6-ea37-488a-a45c-595f0ad0f9dd\") " Jan 05 22:00:03 crc kubenswrapper[4754]: I0105 22:00:03.436587 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc5208e6-ea37-488a-a45c-595f0ad0f9dd-config-volume\") pod \"fc5208e6-ea37-488a-a45c-595f0ad0f9dd\" (UID: \"fc5208e6-ea37-488a-a45c-595f0ad0f9dd\") " Jan 05 22:00:03 crc kubenswrapper[4754]: I0105 22:00:03.437121 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc5208e6-ea37-488a-a45c-595f0ad0f9dd-secret-volume\") pod \"fc5208e6-ea37-488a-a45c-595f0ad0f9dd\" (UID: \"fc5208e6-ea37-488a-a45c-595f0ad0f9dd\") " Jan 05 22:00:03 crc kubenswrapper[4754]: I0105 22:00:03.437477 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc5208e6-ea37-488a-a45c-595f0ad0f9dd-config-volume" 
(OuterVolumeSpecName: "config-volume") pod "fc5208e6-ea37-488a-a45c-595f0ad0f9dd" (UID: "fc5208e6-ea37-488a-a45c-595f0ad0f9dd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:00:03 crc kubenswrapper[4754]: I0105 22:00:03.438673 4754 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc5208e6-ea37-488a-a45c-595f0ad0f9dd-config-volume\") on node \"crc\" DevicePath \"\"" Jan 05 22:00:03 crc kubenswrapper[4754]: I0105 22:00:03.445260 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc5208e6-ea37-488a-a45c-595f0ad0f9dd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fc5208e6-ea37-488a-a45c-595f0ad0f9dd" (UID: "fc5208e6-ea37-488a-a45c-595f0ad0f9dd"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:00:03 crc kubenswrapper[4754]: I0105 22:00:03.445928 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc5208e6-ea37-488a-a45c-595f0ad0f9dd-kube-api-access-qwvsp" (OuterVolumeSpecName: "kube-api-access-qwvsp") pod "fc5208e6-ea37-488a-a45c-595f0ad0f9dd" (UID: "fc5208e6-ea37-488a-a45c-595f0ad0f9dd"). InnerVolumeSpecName "kube-api-access-qwvsp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:00:03 crc kubenswrapper[4754]: I0105 22:00:03.540996 4754 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc5208e6-ea37-488a-a45c-595f0ad0f9dd-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 05 22:00:03 crc kubenswrapper[4754]: I0105 22:00:03.541044 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwvsp\" (UniqueName: \"kubernetes.io/projected/fc5208e6-ea37-488a-a45c-595f0ad0f9dd-kube-api-access-qwvsp\") on node \"crc\" DevicePath \"\"" Jan 05 22:00:03 crc kubenswrapper[4754]: I0105 22:00:03.933608 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460840-2t7m8" event={"ID":"fc5208e6-ea37-488a-a45c-595f0ad0f9dd","Type":"ContainerDied","Data":"e6d207942c61fdfa49dbf3fbf7dfd645a3e21d4c1afa37f3efd012d322f1a590"} Jan 05 22:00:03 crc kubenswrapper[4754]: I0105 22:00:03.933958 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6d207942c61fdfa49dbf3fbf7dfd645a3e21d4c1afa37f3efd012d322f1a590" Jan 05 22:00:03 crc kubenswrapper[4754]: I0105 22:00:03.933669 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460840-2t7m8" Jan 05 22:00:04 crc kubenswrapper[4754]: I0105 22:00:04.486702 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460795-hhdr2"] Jan 05 22:00:04 crc kubenswrapper[4754]: I0105 22:00:04.501327 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460795-hhdr2"] Jan 05 22:00:05 crc kubenswrapper[4754]: I0105 22:00:05.607075 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ec25e4d-020b-4093-a073-e1b79a4a8434" path="/var/lib/kubelet/pods/7ec25e4d-020b-4093-a073-e1b79a4a8434/volumes" Jan 05 22:00:14 crc kubenswrapper[4754]: I0105 22:00:14.451544 4754 scope.go:117] "RemoveContainer" containerID="675a4d19bf04b7c968d78e2751b4261a8fb1750f74f75f43d6b79e483e7f7ab6" Jan 05 22:01:00 crc kubenswrapper[4754]: I0105 22:01:00.167081 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29460841-frcqg"] Jan 05 22:01:00 crc kubenswrapper[4754]: E0105 22:01:00.168517 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc5208e6-ea37-488a-a45c-595f0ad0f9dd" containerName="collect-profiles" Jan 05 22:01:00 crc kubenswrapper[4754]: I0105 22:01:00.168537 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc5208e6-ea37-488a-a45c-595f0ad0f9dd" containerName="collect-profiles" Jan 05 22:01:00 crc kubenswrapper[4754]: E0105 22:01:00.168591 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcbc4997-31ea-4cd1-bf31-84a32330d5a9" containerName="registry-server" Jan 05 22:01:00 crc kubenswrapper[4754]: I0105 22:01:00.168599 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcbc4997-31ea-4cd1-bf31-84a32330d5a9" containerName="registry-server" Jan 05 22:01:00 crc kubenswrapper[4754]: E0105 22:01:00.168616 4754 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="fcbc4997-31ea-4cd1-bf31-84a32330d5a9" containerName="extract-utilities" Jan 05 22:01:00 crc kubenswrapper[4754]: I0105 22:01:00.168624 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcbc4997-31ea-4cd1-bf31-84a32330d5a9" containerName="extract-utilities" Jan 05 22:01:00 crc kubenswrapper[4754]: E0105 22:01:00.168645 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcbc4997-31ea-4cd1-bf31-84a32330d5a9" containerName="extract-content" Jan 05 22:01:00 crc kubenswrapper[4754]: I0105 22:01:00.168652 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcbc4997-31ea-4cd1-bf31-84a32330d5a9" containerName="extract-content" Jan 05 22:01:00 crc kubenswrapper[4754]: I0105 22:01:00.168921 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc5208e6-ea37-488a-a45c-595f0ad0f9dd" containerName="collect-profiles" Jan 05 22:01:00 crc kubenswrapper[4754]: I0105 22:01:00.168937 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcbc4997-31ea-4cd1-bf31-84a32330d5a9" containerName="registry-server" Jan 05 22:01:00 crc kubenswrapper[4754]: I0105 22:01:00.169955 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29460841-frcqg" Jan 05 22:01:00 crc kubenswrapper[4754]: I0105 22:01:00.216342 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29460841-frcqg"] Jan 05 22:01:00 crc kubenswrapper[4754]: I0105 22:01:00.259540 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt28t\" (UniqueName: \"kubernetes.io/projected/d7efd126-c9dc-4d1e-aa30-ad502a7e310b-kube-api-access-kt28t\") pod \"keystone-cron-29460841-frcqg\" (UID: \"d7efd126-c9dc-4d1e-aa30-ad502a7e310b\") " pod="openstack/keystone-cron-29460841-frcqg" Jan 05 22:01:00 crc kubenswrapper[4754]: I0105 22:01:00.259964 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7efd126-c9dc-4d1e-aa30-ad502a7e310b-config-data\") pod \"keystone-cron-29460841-frcqg\" (UID: \"d7efd126-c9dc-4d1e-aa30-ad502a7e310b\") " pod="openstack/keystone-cron-29460841-frcqg" Jan 05 22:01:00 crc kubenswrapper[4754]: I0105 22:01:00.260035 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d7efd126-c9dc-4d1e-aa30-ad502a7e310b-fernet-keys\") pod \"keystone-cron-29460841-frcqg\" (UID: \"d7efd126-c9dc-4d1e-aa30-ad502a7e310b\") " pod="openstack/keystone-cron-29460841-frcqg" Jan 05 22:01:00 crc kubenswrapper[4754]: I0105 22:01:00.260256 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7efd126-c9dc-4d1e-aa30-ad502a7e310b-combined-ca-bundle\") pod \"keystone-cron-29460841-frcqg\" (UID: \"d7efd126-c9dc-4d1e-aa30-ad502a7e310b\") " pod="openstack/keystone-cron-29460841-frcqg" Jan 05 22:01:00 crc kubenswrapper[4754]: I0105 22:01:00.362812 4754 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-kt28t\" (UniqueName: \"kubernetes.io/projected/d7efd126-c9dc-4d1e-aa30-ad502a7e310b-kube-api-access-kt28t\") pod \"keystone-cron-29460841-frcqg\" (UID: \"d7efd126-c9dc-4d1e-aa30-ad502a7e310b\") " pod="openstack/keystone-cron-29460841-frcqg" Jan 05 22:01:00 crc kubenswrapper[4754]: I0105 22:01:00.363311 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7efd126-c9dc-4d1e-aa30-ad502a7e310b-config-data\") pod \"keystone-cron-29460841-frcqg\" (UID: \"d7efd126-c9dc-4d1e-aa30-ad502a7e310b\") " pod="openstack/keystone-cron-29460841-frcqg" Jan 05 22:01:00 crc kubenswrapper[4754]: I0105 22:01:00.363362 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d7efd126-c9dc-4d1e-aa30-ad502a7e310b-fernet-keys\") pod \"keystone-cron-29460841-frcqg\" (UID: \"d7efd126-c9dc-4d1e-aa30-ad502a7e310b\") " pod="openstack/keystone-cron-29460841-frcqg" Jan 05 22:01:00 crc kubenswrapper[4754]: I0105 22:01:00.363490 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7efd126-c9dc-4d1e-aa30-ad502a7e310b-combined-ca-bundle\") pod \"keystone-cron-29460841-frcqg\" (UID: \"d7efd126-c9dc-4d1e-aa30-ad502a7e310b\") " pod="openstack/keystone-cron-29460841-frcqg" Jan 05 22:01:00 crc kubenswrapper[4754]: I0105 22:01:00.370060 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d7efd126-c9dc-4d1e-aa30-ad502a7e310b-fernet-keys\") pod \"keystone-cron-29460841-frcqg\" (UID: \"d7efd126-c9dc-4d1e-aa30-ad502a7e310b\") " pod="openstack/keystone-cron-29460841-frcqg" Jan 05 22:01:00 crc kubenswrapper[4754]: I0105 22:01:00.373553 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d7efd126-c9dc-4d1e-aa30-ad502a7e310b-combined-ca-bundle\") pod \"keystone-cron-29460841-frcqg\" (UID: \"d7efd126-c9dc-4d1e-aa30-ad502a7e310b\") " pod="openstack/keystone-cron-29460841-frcqg" Jan 05 22:01:00 crc kubenswrapper[4754]: I0105 22:01:00.381923 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7efd126-c9dc-4d1e-aa30-ad502a7e310b-config-data\") pod \"keystone-cron-29460841-frcqg\" (UID: \"d7efd126-c9dc-4d1e-aa30-ad502a7e310b\") " pod="openstack/keystone-cron-29460841-frcqg" Jan 05 22:01:00 crc kubenswrapper[4754]: I0105 22:01:00.395740 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt28t\" (UniqueName: \"kubernetes.io/projected/d7efd126-c9dc-4d1e-aa30-ad502a7e310b-kube-api-access-kt28t\") pod \"keystone-cron-29460841-frcqg\" (UID: \"d7efd126-c9dc-4d1e-aa30-ad502a7e310b\") " pod="openstack/keystone-cron-29460841-frcqg" Jan 05 22:01:00 crc kubenswrapper[4754]: I0105 22:01:00.512710 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29460841-frcqg" Jan 05 22:01:01 crc kubenswrapper[4754]: I0105 22:01:01.014616 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29460841-frcqg"] Jan 05 22:01:01 crc kubenswrapper[4754]: I0105 22:01:01.884187 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29460841-frcqg" event={"ID":"d7efd126-c9dc-4d1e-aa30-ad502a7e310b","Type":"ContainerStarted","Data":"7a04b33aaa82ad3bcbf653dcd87fa387fd0fe748641bcbcb17afa04dfd27b642"} Jan 05 22:01:01 crc kubenswrapper[4754]: I0105 22:01:01.884805 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29460841-frcqg" event={"ID":"d7efd126-c9dc-4d1e-aa30-ad502a7e310b","Type":"ContainerStarted","Data":"f913687fc12d40165004dd0a187da406c327d926fd4da11a09ecacc23e14f591"} Jan 05 22:01:01 crc kubenswrapper[4754]: I0105 22:01:01.912841 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29460841-frcqg" podStartSLOduration=1.912814724 podStartE2EDuration="1.912814724s" podCreationTimestamp="2026-01-05 22:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:01:01.903590431 +0000 UTC m=+6948.612774365" watchObservedRunningTime="2026-01-05 22:01:01.912814724 +0000 UTC m=+6948.621998638" Jan 05 22:01:04 crc kubenswrapper[4754]: I0105 22:01:04.918099 4754 generic.go:334] "Generic (PLEG): container finished" podID="d7efd126-c9dc-4d1e-aa30-ad502a7e310b" containerID="7a04b33aaa82ad3bcbf653dcd87fa387fd0fe748641bcbcb17afa04dfd27b642" exitCode=0 Jan 05 22:01:04 crc kubenswrapper[4754]: I0105 22:01:04.918815 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29460841-frcqg" 
event={"ID":"d7efd126-c9dc-4d1e-aa30-ad502a7e310b","Type":"ContainerDied","Data":"7a04b33aaa82ad3bcbf653dcd87fa387fd0fe748641bcbcb17afa04dfd27b642"} Jan 05 22:01:06 crc kubenswrapper[4754]: I0105 22:01:06.411173 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29460841-frcqg" Jan 05 22:01:06 crc kubenswrapper[4754]: I0105 22:01:06.538089 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d7efd126-c9dc-4d1e-aa30-ad502a7e310b-fernet-keys\") pod \"d7efd126-c9dc-4d1e-aa30-ad502a7e310b\" (UID: \"d7efd126-c9dc-4d1e-aa30-ad502a7e310b\") " Jan 05 22:01:06 crc kubenswrapper[4754]: I0105 22:01:06.538182 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7efd126-c9dc-4d1e-aa30-ad502a7e310b-combined-ca-bundle\") pod \"d7efd126-c9dc-4d1e-aa30-ad502a7e310b\" (UID: \"d7efd126-c9dc-4d1e-aa30-ad502a7e310b\") " Jan 05 22:01:06 crc kubenswrapper[4754]: I0105 22:01:06.538278 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7efd126-c9dc-4d1e-aa30-ad502a7e310b-config-data\") pod \"d7efd126-c9dc-4d1e-aa30-ad502a7e310b\" (UID: \"d7efd126-c9dc-4d1e-aa30-ad502a7e310b\") " Jan 05 22:01:06 crc kubenswrapper[4754]: I0105 22:01:06.538363 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kt28t\" (UniqueName: \"kubernetes.io/projected/d7efd126-c9dc-4d1e-aa30-ad502a7e310b-kube-api-access-kt28t\") pod \"d7efd126-c9dc-4d1e-aa30-ad502a7e310b\" (UID: \"d7efd126-c9dc-4d1e-aa30-ad502a7e310b\") " Jan 05 22:01:06 crc kubenswrapper[4754]: I0105 22:01:06.544646 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7efd126-c9dc-4d1e-aa30-ad502a7e310b-kube-api-access-kt28t" 
(OuterVolumeSpecName: "kube-api-access-kt28t") pod "d7efd126-c9dc-4d1e-aa30-ad502a7e310b" (UID: "d7efd126-c9dc-4d1e-aa30-ad502a7e310b"). InnerVolumeSpecName "kube-api-access-kt28t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:01:06 crc kubenswrapper[4754]: I0105 22:01:06.544653 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7efd126-c9dc-4d1e-aa30-ad502a7e310b-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d7efd126-c9dc-4d1e-aa30-ad502a7e310b" (UID: "d7efd126-c9dc-4d1e-aa30-ad502a7e310b"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:01:06 crc kubenswrapper[4754]: I0105 22:01:06.569616 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7efd126-c9dc-4d1e-aa30-ad502a7e310b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d7efd126-c9dc-4d1e-aa30-ad502a7e310b" (UID: "d7efd126-c9dc-4d1e-aa30-ad502a7e310b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:01:06 crc kubenswrapper[4754]: I0105 22:01:06.600661 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7efd126-c9dc-4d1e-aa30-ad502a7e310b-config-data" (OuterVolumeSpecName: "config-data") pod "d7efd126-c9dc-4d1e-aa30-ad502a7e310b" (UID: "d7efd126-c9dc-4d1e-aa30-ad502a7e310b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:01:06 crc kubenswrapper[4754]: I0105 22:01:06.641962 4754 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d7efd126-c9dc-4d1e-aa30-ad502a7e310b-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 05 22:01:06 crc kubenswrapper[4754]: I0105 22:01:06.641999 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7efd126-c9dc-4d1e-aa30-ad502a7e310b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:01:06 crc kubenswrapper[4754]: I0105 22:01:06.642012 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7efd126-c9dc-4d1e-aa30-ad502a7e310b-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 22:01:06 crc kubenswrapper[4754]: I0105 22:01:06.642024 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kt28t\" (UniqueName: \"kubernetes.io/projected/d7efd126-c9dc-4d1e-aa30-ad502a7e310b-kube-api-access-kt28t\") on node \"crc\" DevicePath \"\"" Jan 05 22:01:06 crc kubenswrapper[4754]: I0105 22:01:06.946011 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29460841-frcqg" event={"ID":"d7efd126-c9dc-4d1e-aa30-ad502a7e310b","Type":"ContainerDied","Data":"f913687fc12d40165004dd0a187da406c327d926fd4da11a09ecacc23e14f591"} Jan 05 22:01:06 crc kubenswrapper[4754]: I0105 22:01:06.946368 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f913687fc12d40165004dd0a187da406c327d926fd4da11a09ecacc23e14f591" Jan 05 22:01:06 crc kubenswrapper[4754]: I0105 22:01:06.946068 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29460841-frcqg" Jan 05 22:01:18 crc kubenswrapper[4754]: I0105 22:01:18.110405 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 22:01:18 crc kubenswrapper[4754]: I0105 22:01:18.111117 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 22:01:48 crc kubenswrapper[4754]: I0105 22:01:48.109348 4754 patch_prober.go:28] interesting pod/machine-config-daemon-pkzls container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 22:01:48 crc kubenswrapper[4754]: I0105 22:01:48.109933 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pkzls" podUID="1ce145f2-f010-4086-963c-23e68ff9e280" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"